March 14, 2024 12:37 PM

Gene Expression

Someone sent me this image, from a slide deck they ran across somewhere:

A slide labeled 'Gene Expression'. The main image is a casual shot of actor Gene Wilder, labeled 'One Gene'. There are three side images of Wilder as iconic characters he played in 'Willy Wonka & the Chocolate Factory', 'Young Frankenstein', and 'Blazing Saddles'. There are arrows from the main image to the three side images, labeled 'Many Outcomes'.

I don't know what to do with it other than to say this:

As a person named 'Eugene' and an admirer of Mr. Wilder's work, I smile every time I see it. That's a clever way to reinforce the idea of gene expression by analogy, using actors and roles.

When I teach OOP and FP, I'm always looking for simple analogies like this from the non-programming world to reinforce ideas that we are learning about in class. My OOP repertoire is pretty deep. As I teach functional programming each spring, I'm still on the lookout for new FP analogies.

~~~~~

Note: I don't know the original source of this image. If you know who created the slide, please let me know via email, Mastodon, or Twitter (all linked in the sidebar). I would love to credit the creator.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

February 29, 2024 3:45 PM

Finding the Torture You're Comfortable With

At some point last week, I found myself pointed to this short YouTube video of Jerry Seinfeld talking with Howard Stern about work habits. Seinfeld told Stern that he was essentially always thinking about making comedy. Whatever situation he found himself in, even with family and friends, he was thinking about how he could mine it for new material. Stern told him that sounded like torture. Jerry said, yes, it was, but...

Your blessing in life is when you find the torture you're comfortable with.

This is something I talk about with students a lot.

Sometimes it's a current student who is worried that CS isn't for them because too often the work seems hard, or boring. Shouldn't it be easy, or at least fun?

Sometimes it's a prospective student, maybe a high school student on a university visit or a college student thinking about changing their major. They worry that they haven't found an area of study that makes them happy all the time. Other people tell them, "If you love what you do, you'll never work a day in your life." They wonder, "Why can't I find that?"

I tell them all that I love what I do -- studying, teaching, and writing about computer science -- and even so, some days feel like work.

I don't use torture as an analogy the way Seinfeld does, but I certainly know what he means. Instead, I usually think of this phenomenon in terms of drudgery: all the grunt work that comes with setting up tools, and fiddling with test cases, and formatting documentation, and ... the list goes on. Sometimes we can automate one bit of drudgery, but around the corner awaits another.

And yet we persist. We have found the drudgery we are comfortable with, the grunt work we are willing to do so that we can be part of the thing it serves: creating something new, or understanding one little corner of the world better.

I experienced the disconnect between the torture I was comfortable with and the torture that drove me away during my first year in college. As I've mentioned here a few times, most recently in my post on Niklaus Wirth, from an early age I had wanted to become an architect (the kind who design houses and other buildings, not software). I spent years reading about architecture and learning about the profession. I even took two drafting courses in high school, including one in which we designed a house and did a full set of plans, with cross-sections of walls and eaves.

Then I got to college and found two things. One, I still liked architecture in the same way as I always had. Two, I most assuredly did not enjoy the kind of grunt work that architecture students had to do, nor did I relish the torture that came with not seeing a path to a solution for a thorny design problem.

That was so different from the feeling I had writing BASIC programs. I would gladly bang my head on the wall for hours to get the tiniest detail just the way I wanted it, either in the code or in the output. When the torture ended, the resulting program made all the pain worth it. Then I'd tackle a new problem, and it started again.

Many of the students I talk with don't yet know this feeling. Even so, it comforts some of them to know that they don't have to find The One Perfect Major that makes all their boredom go away.

However, a few others understand immediately. They are often the ones who learned to play a musical instrument or who ran cross country. The pianists remember all the boring finger exercises they had to do; the runners remember all the wind sprints and all the long, boring miles they ran to build their base. These students stuck with the boredom and worked through the pain because they wanted to get to the other side, where satisfaction and joy are.

Like Seinfeld, I am lucky that I found the torture I am comfortable with. It has made this life a good one. I hope everyone finds theirs.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Running, Software Development, Teaching and Learning

February 09, 2024 3:45 PM

Finding Cool Ideas to Play With

In a recent post on Computational Complexity, Bill Gasarch wrote up the solution to a fun little dice problem he had posed previously. Check it out. After showing the solution, he answered some meta-questions. I liked this one:

How did I find this question, and its answer, at random? I intentionally went to the math library, turned my cell phone off, and browsed some back issues of the journal Discrete Mathematics. I would read the table of contents and decide what article sounded interesting, read enough to see if I really wanted to read that article. I then SAT DOWN AND READ THE ARTICLES, taking some notes on them.

He points out that turning off his cell phone isn't the secret to his method.

It's allowing yourself the freedom to NOT work on a paper for the next ... conference and just read math for FUN without thinking in terms of writing a paper.

Slack of this sort used to be one of the great attractions of the academic life. I'm not sure it is as much a part of the deal as it once was. The pace of the university seems faster these days. Many of the younger faculty I follow out in the world seem always to be hustling for the next conference acceptance or grant proposal. They seem truly joyous when an afternoon turns into a serendipitous session of debugging or reading.

Gasarch's advice is wise, if you can follow it: Set aside time to explore, and then do it.

It's not always easy fun; reading some articles is work. But that's the kind of fun many of us signed up for when we went into academia.

~~~~~

I haven't made enough time to explore recently, but I did get to re-read an old paper unexpectedly. A student came to me to discuss possible undergrad research projects. He had recently been noodling around, implementing his own neural network simulator. I've never been much of a neural net person, but that reminded me of this paper on PushForth, a concatenative language in the spirit of Forth and Joy designed as part of an evolutionary programming project. Genetic programming has always interested me, and concatenative languages seem like a perfect fit...

I found the paper in a research folder and made time to re-read it for fun. This is not the kind of fun Gasarch is talking about, as it had potential use for a project, but I enjoyed digging into the topic again nonetheless.

The student looked at the paper and liked the idea, too, so we embarked on a little project -- not quite serendipity, but a project I hadn't planned to work on at the turn of the new year. I'll take it!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Teaching and Learning

January 21, 2024 8:28 AM

A Few Thoughts on How Criticism Affects People

The same idea popped up in three settings this week: a conversation with a colleague about student assessments, a book I am reading about women writers, and a blog post I read on the exercise bike one morning.

The blog post is by Ben Orlin at Math With Bad Drawings from a few months ago, about an occasional topic of this blog: being less wrong each day [ for example, 1 and 2 ]. This sentence hit close enough to home that I saved it for later.

We struggle to tolerate censure, even the censure of idiots. Our social instrument is strung so tight, the least disturbance leaves us resonating for days.

Perhaps this struck a chord because I'm currently reading A Room of One's Own, by Virginia Woolf. In one early chapter, Woolf considers the many reasons that few women wrote poetry, fiction, or even non-fiction before the 19th century. One is that they had so little time and energy free to do so. Another is that they didn't have space to work alone, a room of one's own. But even women who had those things had to face a third obstacle: criticism from men and women alike that women couldn't, or shouldn't, write.

Why not shrug off the criticism and soldier on? Woolf discusses just how hard that is for anyone to do. Even many of our greatest writers, including Tennyson and Keats, obsessed over every unkind word said about them or their work. Woolf concludes:

Literature is strewn with the wreckage of men who have minded beyond reason the opinions of others.

Orlin's post, titled Err, and err, and err again; but less, and less, and less, makes an analogy between the advance of scientific knowledge and an infinite series in mathematics. Any finite sum in the series is "wrong", but if we add one more term, it is less wrong than the previous sum. Every new term takes us closer to the perfect answer.
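To make the analogy concrete (this is my own illustration, not an example from Orlin's post): the Leibniz series for π never reaches π after finitely many terms, but its partial sums become less wrong as terms accumulate.

```javascript
// Partial sums of the Leibniz series: 4 * (1 - 1/3 + 1/5 - 1/7 + ...).
// Every finite sum is "wrong", but longer sums are less wrong.
function leibnizPartialSum(n) {
  let sum = 0;
  for (let k = 0; k < n; k++) {
    sum += (-1) ** k / (2 * k + 1);
  }
  return 4 * sum;
}

// The absolute error shrinks as we add more terms.
const errors = [10, 100, 1000].map(n => Math.abs(leibnizPartialSum(n) - Math.PI));
```

Each entry of `errors` is smaller than the one before it: still imperfect, but less wrong each time.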

a black and white portrait of a bearded man
Source: Wikipedia, public domain

He then goes on to wonder whether the same is, or could be, true of our moral development. His inspiration is American psychologist and philosopher William James. I have mentioned James as an inspiration myself a few times in this blog, most explicitly in Pragmatism and the Scientific Spirit, where I quote him as saying that consciousness is "not a thing or a place, but a process".

Orlin connects his passage on how humans receive criticism to James's personal practice of trying to listen only to the judgment of ever more noble critics, even if we have to imagine them into being:

"All progress in the social Self," James says, "is the substitution of higher tribunals for lower."

If we hold ourselves to a higher, more noble standard, we can grow. When we reach the next plateau, we look for the next higher standard to shoot for. This is an optimistic strategy for living life: we are always imperfect, but we aspire to grow in knowledge and moral development by becoming a little less wrong each step of the way. To do so, we try to focus our attention on the opinions of those whose standard draws us higher.

Reading James almost always leaves my spirit lighter. After Orlin's post, I feel a need to read The Principles of Psychology in full.

These two threads on how people respond to criticism came together when I chatted with a colleague this week about criticism from students. Each semester, we receive student assessments of our courses, which include multiple-choice ratings as well as written comments. The numbers can be a jolt, but their effect is nothing like that of the written comments. Invariably, at least one student writes a negative response, often an unkind or ungenerous one.

I told my colleague that this is a recurring theme for almost every faculty member I have known: Twenty-nine students can say "this was a good course, and I really like the professor", but when one student writes something negative... that is the only comment we can think about.

The one bitter student in your assessments is probably not the ever more noble critic that James encourages you to focus on. But, yeah. Professors, like all people, are strung pretty tight when it comes to censure.

Fortunately, talking to others about the experience seems to help. And it may also remind us to be aware of how students respond to the things we say and do.

Anyway, I recommend both the Orlin blog post and Woolf's A Room of One's Own. The former is a quick read. The latter is a bit longer but a smooth read. Woolf writes well, and once my mind got on the book's wavelength, I found myself engaged deeply in her argument.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

January 06, 2024 10:41 AM

end.

a man in a suit, behind a microphone and a bottle of water
Source: Wikipedia, unrestricted

My social media feed this week has included many notes and tributes on the passing of Niklaus Wirth, including his obituary from ETH Zurich, where he was a professor. Wirth was, of course, a Turing Award winner for his foundational work designing a sequence of programming languages.

Wirth's death reminded me of END DO, my post on the passing of John Backus, and before that a post on the passing of Kenneth Iverson. I have many fond memories related to Wirth as well.

Pascal

Pascal was, I think, the fifth programming language I learned. After that, my language-learning history starts to speed up and blur. (I do think APL and Lisp came soon after.)

I learned BASIC first, as a junior in high school. This ultimately changed the trajectory of my life, as it planted the seeds for me to abandon a lifelong dream to be an architect.

Then at university, I learned Fortran in CS 1, PL/I in Data Structures (you want pointers!), and IBM 360/370 assembly language in a two-quarter sequence that also included JCL. Each of these languages expanded my mind a little.

Pascal was the first language I learned "on my own". The fall of my junior year, I took my first course in algorithms. On Day 1, the professor announced that the department had decided to switch to Pascal in the intro course, so that's what we would use in this course.

"Um, prof, that's what the new CS majors are learning. We know Fortran and PL/I." He smiled, shrugged, and turned to the chalkboard. Class began.

After class, several of us headed immediately to the university library, checked out one Pascal book each, and headed back to the dorms to read. Later that week, we were all using Pascal to implement whatever classical algorithm we learned first in that course. Everything was fine.

I've always treasured that experience, even if it was a little scary for a week or so. And don't worry: That professor turned out to be a good guy with whom I took several courses. He was a fellow chess player and ended up being the advisor on my senior project: a program to run the Swiss system commonly used in chess tournaments. I wrote that program in... Pascal. Up to that point, it was the largest and most complex program I had ever written solo. I still have the code.

The first course I taught as a tenure-track prof was my university's version of CS 1 -- using Pascal.

Fond memories all. I miss the language.

Wirth sightings in this blog

I did a quick search and found that Wirth has made an occasional appearance in this blog over the years.

• January 2006: Just a Course in Compilers

This was written at the beginning of my second offering of our compiler course, which I have taught and written about many times since. I had considered using as our textbook Wirth's Compiler Construction, a thin volume that builds a compiler for a subset of Wirth's Oberon programming language over the course of sixteen short chapters. It's a "just the facts and code" approach that appeals to me most days.

I didn't adopt the book for several reasons, not least of which that at the time Amazon showed only four copies available, starting at $274.70 each. With two decades of experience teaching the course now, I don't think I could ever really use this book with my undergrads, but it was a fun exercise for me to work through. It helped me think about compilers and my course.

Note: A PDF of Compiler Construction has been posted on the web for many years, but every time I link to it, the link ultimately disappears. I decided to mirror the files locally, so that the link will last as long as this post lasts:
[ Chapters 1-8 | Chapters 9-16 ]

• September 2007: Hype, or Disseminating Results?

... in which I quote Wirth's thoughts on why Pascal spread widely in the world but Modula and Oberon didn't. The passage comes from a short historical paper he wrote called "Pascal and its Successors". It's worth a read.

• April 2012: Intermediate Representations and Life Beyond the Compiler

This post mentions how Wirth's P-code IR ultimately lived on in the MIPS compiler suite long after the compiler which first implemented P-code.

• July 2016: Oberon: GoogleMaps as Desktop UI

... which notes that the Oberon spec defines the language's desktop as "an infinitely large two-dimensional space on which windows ... can be arranged".

• November 2017: Thousand-Year Software

This is my last post mentioning Wirth before today's. It refers to the same 1999 SIGPLAN Notices article that tells the P-code story discussed in my April 2012 post.

I repeat myself. Some stories remain evergreen in my mind.

The Title of This Post

I titled my post on the passing of John Backus END DO in homage to his intimate connection to Fortran. I wanted to do something similar for Wirth.

Pascal has a distinguished sequence to end a program: "end.". It seems a fitting way to remember the life of the person who created it and who gave the world so many programming experiences.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Teaching and Learning

December 31, 2023 1:35 PM

"I Want to Find Something to Learn That Excites Me"

In his year-end wrap-up, Greg Wilson writes:

I want to find something to learn that excites me. A new musical instrument is out because of my hand; I've thought about reviving my French, picking up some Spanish, diving into Postgres or machine learning (yeah, yeah, I know, don't hate me), but none of them are making my heart race.

What he said. I want to find something to learn that excites me.

I just spent six months immersed in learning more about HTML, CSS, and JavaScript so that I could work with novice web developers. Picking up that project was one part personal choice and one part professional necessity. It worked out well. I really enjoyed studying the web development world and learned some powerful new tools. I will continue to use them as time and energy permit.

But I can't say that I am excited enough by the topic to keep going in this area. Right now, I am still burned out from the semester on a learning treadmill. I have a followup post to my early reactions about the course's JavaScript unit in the hopper, waiting for a little desire to finish it.

What now? There are parallels between my state and Wilson's.

  • After my first-ever trip to Europe in 2019, for a Dagstuhl seminar (brief mention here), my wife and I talked about a return trip, with a focus this time on Italy. Learning Italian was part of the nascent plan. Then came COVID, along with a loss of energy for travel. I still have learning Italian in my mind.
  • In the fall of 2020, the first full semester of the pandemic, I taught a database course for the first time (bookend posts here and here). I still have a few SQL projects and learning goals hanging around from that time, but none are calling me right now.
  • LLMs are the main focus of so many people's attention these days, but they still haven't lit me up. In some ways, I envy David Humphrey, who fell in love with AI this year. Maybe something about LLMs will light me up one of these days. (As always, you should read David's stuff. He does neat work and shares it with the world.)

Unlike Wilson, I do not play a musical instrument. I did, however, learn a little basic piano twenty-five years ago when I was a Suzuki piano parent with my daughters. We still have our piano, and I harbor dreams of picking it back up and going farther some day. Right now doesn't seem to be that day.

I have several other possibilities on the back burner, particularly in the area of data analytics. I've been intrigued by the work on data-centric computing in education being done by Kathi Fisler and Shriram Krishnamurthi at Brown. I will also be reading a couple of their papers on program design and plan composition in the coming weeks as I prepare for my programming languages course this spring. Fisler and Krishnamurthi are coming at these topics from the side of CS education, but the topics are also related to my grad-school work in AI. Maybe these papers will ignite a spark.

Winter break is coming to an end soon. Like others, I'm thinking about 2024. Let's see what the coming weeks bring.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Teaching and Learning

November 24, 2023 12:17 PM

And Then Came JavaScript

It's been a long time again between posts. That seems to be the new normal. On top of that, though, teaching web development this fall for the first time has been soaking up all of my free hours in the evenings and over the weekends, which has left me little time to be sad about not blogging. I've certainly been writing plenty of text and a bit of code.

The course started off with a lot of positive energy. The students were excited to learn HTML and CSS. I made a few mistakes in how I organized and presented topics, but things went pretty well. By all accounts, students seemed to enjoy what they were learning and doing.

And then came JavaScript.

Well, along came real computer programming. It could have been Python or some other language, I think, but the transition to writing programs, even short bits of code, took the wind out of the students' excitement.

I was prepared for the possibility that the mood of the course would change when we shifted from CSS to JavaScript. A previous offering of the course had encountered a similar obstacle. Learning to program is a challenge. I'm still not convinced that learning to program is that much harder than a lot of things people learn to do, but it does take time.

As I prepared for the course last summer, I saw so many different approaches to teaching JavaScript for web development. Many assumed a lot of HTML/CSS/DOM background, certainly more than my students had picked up in six weeks. Others assumed programming experience most of my students didn't have, even when the approach said 'no experience necessary!'. So I had to find a middle path.

a colorful cube with sides labeled HTML, CSS, and JS
Source: MDN, CC BY-SA 2.5

My main source of inspiration in the first half of the course was David Humphrey's WEB 222 course, which was explicitly aimed at an audience of CS students with programming experience. So I knew that I had to do something different with my students, even as I used his wonderful course materials whenever I could.

My department had offered this course once many years ago, aimed at much the same kind of audience as mine, and the instructor — a good friend — shared all of his materials. I used that offering as a primary source of ideas for getting started with JavaScript, and I occasionally adapted examples for use in my class.

The results were not ideal. Students don't seem to have enjoyed this part of the course much at all. Some acknowledged that to me directly. Even the most engaged students seemed to lose a bit of their energy for the course. Performance also sagged. Based on homework solutions and a short exam, I would say that only one student has achieved the outcomes I had originally outlined for this unit.

I either expected too much or did not do a good enough job helping students get to where I wanted them to be.

I have to do better next time.

But how?

Programming isn't as hard as some people tell us, but most of us can't learn to do it in five or six weeks, at least not enough to become very productive. We don't expect students to master all of CSS or even HTML in such a short time, so we can't expect them to master JavaScript either. The difference is that there seems to be a smooth on-ramp for learning HTML and CSS on the way to mastery, while JavaScript (or any other programming language) presents a steep climb, with occasional plateaus.

For now, I am thinking that the key to doing better is to focus on an even narrower set of concepts and skills.

If people starting from scratch can't learn all of JavaScript in five or six weeks, or even enough to be super-productive, what useful skills can they learn in that time? For this course I trimmed considerably the set of topics that we might cover in an intro CS course, but I think I need to trim even more and — more importantly — choose topics and examples that are even more embedded in the act of web development.

Earlier this week, a sudden burst of thought outlined something like this:

  • document.querySelector() to select an element in a page
  • simple assignment statements to modify innerText, innerHTML, and various style attributes
  • parameterizing changes to an element to create a function
  • document.querySelectorAll() to select collections of elements in a page
  • forEach to process every element in a collection
  • guarded actions to select items in the collection using if statements, without else clauses
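That progression might be sketched in code like this. (This is my own illustrative sketch, not course material. In the browser, the collection would come from document.querySelectorAll(); here, plain objects with innerText and style fields stand in for DOM elements so the sketch can run anywhere.)

```javascript
// Hypothetical stand-ins for DOM elements.
// In a real page: const headings = document.querySelectorAll("h3");
const headings = [
  { innerText: "Welcome", style: {} },
  { innerText: "News",    style: {} },
  { innerText: "Contact", style: {} },
];

// Step 3: parameterize a change to an element as a function.
function highlight(el) {
  el.style.color = "maroon";
  el.innerText = "** " + el.innerText + " **";
}

// Steps 5-6: forEach over the collection, with a guarded action
// (an if statement with no else clause).
headings.forEach(el => {
  if (el.innerText.startsWith("N")) {
    highlight(el);
  }
});
```

Six ideas, one small program: select, modify, parameterize, select many, iterate, guard.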

That is a lot to learn in five weeks! Even so, it cuts way back on several topics I tried to cover this time, such as a more general discussion of objects, arrays, and boolean values, and a deeper look at the DOM. And it eliminates even mentioning several topics altogether:

  • if-else statements
  • while statements
  • counted for loops and, more generally, map-like behavior
  • any fiddling with numbers and arithmetic, which are often used to learn assignment statements, if statements, and functions

There are so many things a programmer can't do without these concepts, such as writing an indefinite data validation loop or really understanding what's going on in the DOM tree. But trying to cover all of those topics too did not result in students being able to do them either! I think it left them confused, with so many new ideas jumbled in their minds, and a general dissatisfaction at being unable to use JavaScript effectively.

Of course I would want to build hooks into the course for students who want to go deeper and are ready to do so. There is so much good material on the web for people who are ready for more. Providing more enriched opportunities for advanced students is easier than designing learning opportunities for beginners.

Can something like this work?

I won't know for a while. It will be at least a year before I teach this course again. I wish I could teach it again sooner, so that I could try some of my new ideas and get feedback on them sooner. Such is the curse of a university calendar and once-yearly offerings.

It's too late to make any big changes in trajectory this semester. We have only two weeks after the current one-week Thanksgiving break. Next week, we will focus on input forms (HTML), styling (CSS), and a little data validation (HTML+JavaScript). I hope that this return to HTML+CSS helps us end the course on a positive note. I'd like for students to finish with a good feeling about all they have learned and now can do.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

October 31, 2023 7:12 PM

The Spirit of Spelunking

Last month, Gus Mueller announced v7.4.3 of his image-editing tool Acorn. This release has a fun little extra built in:

One more super geeky thing I've added is a JavaScript Console:

...

This tool is really meant for folks developing plugins in Acorn, and it is only accessible from the Command Bar, but a part of me absolutely loves pointing out little things like this. I was just chatting with Brent Simmons the other day at Xcoders how you can't really spelunk in apps any more because of all the restrictions that are (justifiably) put on recent MacOS releases. While a console isn't exactly a spelunking tool, I still think it's kind of cool and fun and maybe someone will discover it accidentally and that will inspire them to do stupid and entertaining things like we used to do back in the 10.x days.

I have had JavaScript consoles on my mind a lot in the last few weeks. My students and I have used the developer tools in our browsers as part of my web development course for non-majors. To be honest, I had never used a JavaScript console until this summer, when I began preparing for the course in earnest. REPLs are, of course, a big part of my programming background, from Lisp to Racket to Ruby to Python, so I took to the console with ease and joy. (My past experience with JavaScript was mostly in Node.js, which has its own REPL.)

We just started our fourth week studying JavaScript in class, so my students have started getting used to the console. At the outset, it was new to most of them, who had never programmed before. Our attention has now turned to interacting with the DOM and manipulating the content of a web page.

It's been a lot of fun for me. I'm not sure how it feels for all of my students, though. Many came to the course for web design and really enjoy HTML and CSS. JavaScript, on the other hand, is... programming: more syntax, more semantics, and a lot of new details just to select, say, the third h3 on the page.

Sometimes, you just gotta get over the initial hump to sense the power and fun. Some of them are getting there.

Today I had great fun showing them how to add some simple search functionality to a web page. It was our first big exercise using document.querySelectorAll() and processing a collection of HTML elements. Soon we'll learn about text fields and buttons and events, at which point my The Books of Bokonon will become much more useful to the many readers who still find pleasure in it. Just last night that page got its first modern web styling in the form of a CSS style sheet. For its first twenty-four years of existence, it was all 1990s-era HTML and pseudo-layout using <center>, <hr>, and <br> tags.
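A stripped-down version of that search exercise might look something like this sketch (my own reconstruction, not the actual class code; plain objects stand in for the elements that document.querySelectorAll("li") would return in the browser, and the item names are made up):

```javascript
// Hypothetical sketch of simple page search: hide items that don't match.
// In a real page: const items = document.querySelectorAll("li");
const items = [
  { innerText: "The First Book",  style: {} },
  { innerText: "Calypso",         style: {} },
  { innerText: "The Last Rites",  style: {} },
];

function search(elements, query) {
  const q = query.toLowerCase();
  elements.forEach(el => {
    // show elements that match the query; hide the rest
    el.style.display = el.innerText.toLowerCase().includes(q) ? "" : "none";
  });
}

search(items, "book");
```

Wire the function to a text field's input event and the page searches itself as the user types.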

Anyway, I appreciate Gus's excitement at adding a console to Acorn, making his tool a place to play as well as work. Spread the joy.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

September 22, 2023 8:38 PM

Time and Words

Earlier this week, I posted an item on Mastodon:

After a few months studying and using CSS, every once in a while I am able to change a style sheet and cause something I want to happen. It feels wonderful. Pretty soon, though, I'll make a typo somewhere, or misunderstand a property, and not be able to make anything useful happen. That makes me feel like I'm drowning in complexity.

Thankfully, the occasional wonderful feelings — and my strange willingness to struggle with programming errors — pull me forward.

I've been learning a lot while preparing to teach our web development course [ 1 | 2 ]. Occasionally, I feel incompetent, or not very bright. It's good for me.

I haven't been blogging lately, but I've been writing lots of words. I've also been re-purposing and adapting many other words that are licensed to be reusable (thanks, David Humphrey and John Davis). Prepping a new course is usually prime blogging time for me, with my mind in a constant state of churn, but this course has me drowning in work to do. There is so much to learn, and so much new material — readings, session plans and notes, examples, homework assignments — to create.

I have made a few notes along the way, hoping to expand them into posts. Today they become part of this post.

VS Code

This is my first time in a long time using an editor configured to auto-complete and do all the modern things that developers expect these days. I figured, new tool, why not try a new workflow...

After a short period of breaking in, I'm quite enjoying the experience. One feature I didn't expect to use so much is the ability to collapse an HTML element. In a large HTML file, this has been a game changer for me. Yes, I know, this is old news to most of you. But as my brother loves to say when he gets something used or watches a movie everyone else has already seen, "But, hey, it's new to me!" VS Code's auto-complete across HTML, CSS, and JavaScript, with built-in documentation and links to MDN's more complete documentation, lets me type code much faster than ever before. It made me think of one of my favorite Kent Beck lines:

As the speed of development approaches infinity, reuse becomes irrelevant.

When programming, I often copy, paste, and adapt previous code. In VS Code, I have found myself moving fast enough that copy, paste, and adapt would slow me down. That sort of reuse has become irrelevant.

Examples > Prose

The class sessions for which I have written the longest and most complete notes for my students (and me) tend to be the ones for which I have the fewest, or least well-developed, code examples. The reverse is also true: lots of good examples and code tends to mean smaller class notes. Sometimes that is because I run out of time to write much prose to accompany the session. Just as often, though, it's because the examples give us plenty to do live in class, where the learning happens in the flow of writing and examining code.

This confirms something I've observed over many years of teaching: Examples First tends to work better for most students, even people like me who fancy themselves as top-down thinkers. Good examples and code exercises can create the conditions in which abstract knowledge can be learned. This is a sturdy pedagogical pattern.

Concepts and Patterns

There is so, so much to CSS! HTML itself has a lot of details for new coders to master before they reach fluency. Many of the websites aimed at teaching and presenting these topics quickly begin to feel like a downpour of information, even when the authors try to organize it all. It's too easy to fall into, "And then there's this other property...".

After a few weeks, I've settled into trying to help students learn two kinds of things for each topic:

  • a couple of basic concepts or principles
  • a few helpful patterns

My problem is that I am still new enough to modern web design that identifying either of these in advance is a challenge. My understanding is constantly evolving, so my examples and notes are evolving, too. I will be better next time, or so I hope.

~~~~~

We are only five weeks into a fifteen-week semester, so take any conclusions I draw with a grain of salt. We also haven't gotten to JavaScript yet; teaching and learning it will present a different sort of challenge than HTML and CSS for students who have little or no experience programming. Maybe I will make time to write up our experiences with JavaScript in a few weeks.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

July 31, 2023 2:35 PM

Learning CSS By Doing It

I ran across this blog post earlier this summer when browsing articles on CSS, as one does while learning CSS for a fall course. Near the end, the writer says:

This is thoroughly exciting to me, and I don't wanna whine about improvements in CSS, but it's a bit concerning since I feel like what the web is now capable of is slipping through my fingers. And I guess that's what I'm worried about; I no longer have a good idea of how these things interact with each other, or where the frontier is now.

The map of CSS in my mind is real messy, confused, and teetering with details that I can't keep straight in my head.

Imagine how someone feels as they learn CSS basically from the beginning and tries to get a handle both on how to use it effectively and how to teach it effectively. There is so much there... The good news, of course, is that our course is for folks with no experience, learning the basics of HTML, CSS, and JavaScript from the beginning, so there is only so far we can hope to go in fifteen weeks anyway.

My impressions of HTML and CSS at this point are quite similar: very little syntax, at least for big chunks of the use cases, and lots and lots of vocabulary. Having convenient access to documentation such as that available at developer.mozilla.org via the web and inside VS Code makes exploring all of the options more manageable in context.

I've been watching Dave Humphrey's videos for his WEB 222 course at Seneca College and learning tons. Special thanks to Dave for showing me a neat technique to use when learning -- and teaching -- web development: take a page you use all the time and try to recreate it using HTML and CSS, without looking at the page's own styles. He has done that a couple times now in his videos, and I was able to integrate the ideas we covered about the two languages in previous videos as Dave made the magic work. I have tried it once on my own. It's good fun and a challenging exercise.

Learning layout by viewing page source used to be easier in the old days, when pages were simpler and didn't include dozens of CSS imports or thousands of scripts. Accepting the extra challenge of not looking at a page's styles in 2023 is usually the simpler path.

Two re-creations I have on tap for myself in the coming days are a simple Wikipedia-like page for myself (I'm not notable enough to have an actual Wikipedia page, of course) and a page that acts like my Mastodon home page, with anchored sidebars and a scrolling feed in between. Wish me luck.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

July 10, 2023 12:28 PM

What We Know Affects What We See

Last time I posted this passage from Shop Class as Soulcraft, by Matthew Crawford:

Countless times since that day, a more experienced mechanic has pointed out to me something that was right in front of my face, but which I lacked the knowledge to see. It is an uncanny experience; the raw sensual data reaching my eye before and after are the same, but without the pertinent framework of meaning, the features in question are invisible. Once they have been pointed out, it seems impossible that I should not have seen them before.

We perceive in part based on what we know. A lack of knowledge can prevent us from seeing what is right in front of us. Our brains and eyes work together, and without a perceptual frame, they don't make sense of the pattern. Once we learn something, our eyes -- and brains -- can.

This reminds me of a line from the movie The Santa Clause, which my family watched several times when my daughters were younger. The new Santa Claus is at the North Pole, watching magical things outside his window, and comments to the elf who's been helping him, "I see it, but I don't believe it." She replies that adults don't understand: "Seeing isn't believing; believing is seeing." As a mechanic, Crawford came to understand that knowing is seeing.

Later in the book, Crawford describes another way that knowing and perceiving interact with one another, this time with negative results. He had been struggling to figure out why there was no spark at the spark plugs in his VW Bug, and his father -- an academic, not a mechanic -- told him about Ohm's Law:

Ohm's law is something explicit and rulelike, and is true in the way that propositions are true. Its utter simplicity makes it beautiful; a mind in possession of this equation is charmed with a sense of its own competence. We feel we have access to something universal, and this affords a pleasure that is quasi-religious, perhaps. But this charm of competence can get in the way of noticing things; it can displace, or perhaps hamper the development of, a different kind of knowledge that may be difficult to bring to explicit awareness, but is superior as a practical matter. Its superiority lies in the fact that it begins with the typical rather than the universal, so it goes more rapidly and directly to particular causes, the kind that actually tend to cause ignition problems.

Rule-based, universal knowledge imposes a frame on the scene. Unfortunately, its universal nature can impede perception by blinding us to the particular details of the situation we are actually in. Instead of seeing what is there, we see the scene as our knowledge would have it.

the cover of the book Drawing on the Right Side of the Brain

This reminds me of a story and a technique from the book Drawing on the Right Side of the Brain, which I first wrote about in the earliest days of this blog. When asked to draw a chair, most people barely even look at the chair in front of them. Instead, they start drawing their idea of a chair, supplemented by a few details of the actual chair they see. That works about as well as diagnosing an engine by examining the engine in your mind's eye rather than the mess of parts in front of you.

In that blog post, I reported my experience with one of Edwards's techniques for seeing the chair, drawing the negative space:

One of her exercises asked the student to draw a chair. But, rather than trying to draw the chair itself, the student is to draw the space around the chair. You know, that little area hemmed in between the legs of the chair and the floor; the space between the bottom of the chair's back and its seat; and the space that is the rest of the room around the chair. In focusing on these spaces, I had to actually look at the space, because I don't have an image in my brain of an idealized space between the bottom of the chair's back and its seat. I had to look at the angles, and the shading, and that flaw in the seat fabric that makes the space seem a little ragged.

Sometimes, we have to trick our eyes into seeing, because otherwise our brains tell us what we see before we actually look at the scene. Abstract universal knowledge helps us reason about what we see, but it can also impede us from seeing in the first place.

What we know both enables and hampers what we perceive. This idea has me thinking about how my students this fall, non-CS majors who want to learn how to develop web sites, will encounter the course. Most will be novice programmers who don't know what they see when they are looking at code, or perhaps even at a page rendered in the browser. Debugging code will be a big part of their experience this semester. Are there exercises I can give them to help them see accurately?

As I said in my previous post, there's lots of good stuff happening in my brain as I read this book. Perhaps more posts will follow.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

July 04, 2023 11:55 AM

Time Out

Any man can call time out, but no man
can say how long the time out will be.
-- Books of Bokonon

I realized early last week that it had been a while since I blogged. June was a morass of administrative work, mostly summer orientation. Over the month, I had made notes for several potential posts, on my web dev course, on the latest book I was reading, but never found -- made -- time to write a full post. I figured this would be a light month, only a couple of short posts, if only I could squeeze another one in by Friday.

Then I saw that the date of my most recent post was May 26, with the request for ideas about the web course coming a week before.

I no longer trust my sense of time.

This blog has certainly become much quieter over the years, due in part to the kind and amount of work I do and in part to choices I make outside of work. I may even have gone a month between posts a few fallow times in the past. But June 2023 became my first calendar month with zero posts.

It's somewhat surprising that a summer month would be the first to shut me out. Summer is a time of no classes to teach, fewer student and faculty issues to deal with, and fewer distinct job duties. This occurrence is a testament to how much orientation occupies many of my summer days, and how at other times I just want to be AFK.

A real post or two are on their way, I promise -- a promise to myself, as well as to any of you who are missing my posts in your newsreader. In the meantime...

On the web dev course: thanks to everyone who sent thoughts! There were a few unanimous, or near unanimous, suggestions, such as to have students use VS Code. I am now learning it myself, and getting used to an IDE that autocompletes pairs such as "". My main prep activity up to this point has been watching David Humphrey's videos for WEB 222. I have been learning a little HTML and JavaScript and a lot of CSS and how these tools work together on the modern web. I'm also learning how to teach these topics, while thinking about the differences between my student audience and David's.

On the latest book: I'm currently reading Shop Class as Soulcraft, by Matthew Crawford. It came out in 2010 and, though several people recommended it to me then, I had never gotten around to it. This book is prompting so many ideas and thoughts that I'm constantly jotting down notes and thinking about how these ideas might affect my teaching and my practice as a programmer. I have a few short posts in mind based on the book, if only I commit time to flesh them out. Here are two passages, one short and one long, from my notes.

Fixing things may be a cure for narcissism.

Countless times since that day, a more experienced mechanic has pointed out to me something that was right in front of my face, but which I lacked the knowledge to see. It is an uncanny experience; the raw sensual data reaching my eye before and after are the same, but without the pertinent framework of meaning, the features in question are invisible. Once they have been pointed out, it seems impossible that I should not have seen them before.

Both strike a chord for me as I learn an area I know only the surface of. Learning changes us.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Teaching and Learning

May 19, 2023 2:57 PM

Help! Teaching Web Development in 2023

With the exception of teaching our database course during the COVID year, I have been teaching a stable pair of courses for the last many semesters: Programming Languages in the spring and our compilers course, Translation of Programming Languages, in the fall. That will change this fall due to some issues with enrollments and course demands. I'll be teaching a course in client-side web development.

The official number and title of the course are "CS 1100 Web Development: Client-Side Coding". The catalog description for the course was written years ago by committee:

Client-side web development adhering to current Web standards. Includes by-hand web page development involving basic HTML, CSS, data acquisition using forms, and JavaScript for data validation and simple web-based tools.

As you might guess from the 1000-level number, this is an introductory course suitable for even first-year students. Learning to use HTML, CSS, and Javascript effectively is the focal point. It was designed as a service course for non-majors, with the primary audience these days being students in our Interactive Digital Studies program. Students in that program learn some HTML and CSS in another course, but that course is not a prerequisite to ours. A few students will come in with a little HTML5+CSS3 experience, but not all.

So that's where I am. As I mentioned, this is one of my first courses to design from scratch in a long time. Other than Database Systems, we have to go back to Software Engineering in 2009. Starting from scratch is fun but can be daunting, especially in a course outside my core competency of hard-core computer science.

The really big change, though, was mentioned two paragraphs ago: non-majors. I don't think I've taught non-majors since teaching my university's capstone 21 years ago -- so long ago that this blog did not yet exist. I haven't taught a non-majors' programming course in even longer, 25 years or more, when I last taught BASIC. That is so long ago that there was no "Visual" in the language name!

So: new area, new content, new target audience. I have a lot of work to do this summer.

I could use some advice from those of you who do web development for a living, who know someone who does, or who are more up to date on the field than I.

Generally, what should a course with this title and catalog description be teaching to beginners in 2023?

Specifically, can you point me toward...

  • similar courses with material online that I could learn from?
  • resources for students to use as they learn: websites, videos, even books?

For example, a former student and now friend mentioned that the w3schools website includes a JavaScript tutorial which allows students to type and test code within the web browser. That might simplify practice greatly for non-CS students while they are learning other development tools.

I have so many questions to answer about tools in particular right now: Should we use an IDE or a simple text editor? Which one? Should we learn raw JavaScript or a simple framework? If a framework, which one?

This isn't a job-training course, but to the extent that's reasonable I would like for students to see a reasonable facsimile of what they might encounter out in industry.

Thinking back, I guess I'm glad now that I decided to play around some with JavaScript in 2022... At least I now have more context for evaluating the options available for this course.

If you have any thoughts or suggestions, please do send them along. Sending email or replying on Mastodon or Twitter all work. I'll keep you posted on what I learn.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 07, 2023 8:36 AM

"The Society for the Diffusion of Useful Knowledge"

I just started reading Joshua Kendall's The Man Who Made Lists, a story about Peter Mark Roget. Long before compiling his namesake thesaurus, Roget was a medical doctor with a local practice. After a family tragedy, though, he returned to teaching and became a science writer:

In the 1820s and 1830s, Roget would publish three hundred thousand words in the Encyclopaedia Britannica and also several lengthy review articles for the Society for the Diffusion of Useful Knowledge, the organization affiliated with the new University of London, which sought to enable the British working class to educate itself.

What a noble goal, enabling the working class to educate itself. And what a cool name: The Society for the Diffusion of Useful Knowledge!

For many years, my university has provided a series of talks for retirees, on topics from various departments on campus. This is a fine public service, though without the grand vision -- or the wonderful name -- of the Society for the Diffusion of Useful Knowledge. I suspect that most universities depend too much on tuition, and on keeping costs low, these days to mount an ambitious effort to enable the working class to educate itself.

Mental illness ran in Roget's family. Kendall wonders if Roget's "lifelong desire to bring order to the world" -- through his lecturing, his writing, and ultimately his thesaurus, which attempted to classify every word and concept -- may have "insulated him from his turbulent emotions" and helped him stave off the depression that afflicted several of his family members.

Academics often have an obsessive connection with the disciplines they practice and study. Certainly that sort of focus can be bad for a person when taken too far. (Is it possible for an obsession not to go too far?) For me, though, the focus of studying something deeply, organizing its parts, and trying to communicate it to others through my courses and writing has always felt like a gift. The activity has healing properties all its own.

In any case, the name "The Society for the Diffusion of Useful Knowledge" made me smile. Reading has the power to heal, too.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Teaching and Learning

April 23, 2023 12:09 PM

PyCon Day 2

Another way that attending a virtual conference is like the in-person experience: you can oversleep, or lose track of time. After a long morning of activity before the time-shifted start of Day 2, I took a nap before the 11:45 talk, and...

Talk 1: Python's Syntactic Sugar

Grrr. I missed the first two-thirds of this talk, which I greatly anticipated, but I slept longer than I planned. My body must have needed more than I was giving it.

I saw enough of the talk, though, to know I want to watch the video on YouTube when it shows up. This topic is one of my favorite topics in programming languages: What is the smallest set of features we need to implement the rest of the language? The speaker spent a couple of years implementing various Python features in terms of others, and arrived at a list of only ten that he could not translate away. The rest are sugar. I missed the list at the beginning of the talk, but I gathered a few of its members in the ten minutes I watched: while, raise, and try/except.

I love this kind of exercise: "How can you translate the statement if X: Y into one that uses only core features?" Here's one attempt the speaker gave:

    try:
        while X:
            Y
            raise _DONE
    except _DONE:
        None
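Here is a runnable version of that desugaring. I missed the talk's definition of _DONE, so defining it as an exception class is my assumption:

```python
class _DONE(Exception):
    """Sentinel exception used to exit the while loop after one pass."""

def if_via_while(X, Y):
    # Desugared `if X: Y()` using only while, raise, and try/except.
    try:
        while X:
            Y()
            raise _DONE
    except _DONE:
        None  # nothing to do; the exception exists only to break the loop

ran = []
if_via_while(True, lambda: ran.append("yes"))
if_via_while(False, lambda: ran.append("no"))
# ran is now ["yes"]: the body ran exactly once for a true condition,
# and not at all for a false one.
```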

I was today years old when I learned that Python's bool subclasses int, that True == 1, and that False == 0. That bit of knowledge was worth interrupting my nap to catch the end of the talk. Even better, this talk was based on a series of blog posts. Video is okay, but I love to read and play with ideas in written form. This series vaults to the top of my reading list for the coming days.
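That bool-as-int fact is easy to verify in a few lines at the Python REPL:

```python
# bool is a subclass of int, so True and False are the integers 1 and 0.
assert issubclass(bool, int)
assert True == 1
assert False == 0
assert True + True == 2                      # bools participate in arithmetic
assert sum(x > 2 for x in [1, 3, 5]) == 2    # a common idiom: counting with bools
```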

Talk 2: Subclassing, Composition, Python, and You

Okay, so this guy doesn't like subclasses much. Fair enough, but... some of his concerns seem to be more about the way Python classes work (open borders with their super- and subclasses) than with the idea itself. He showed a lot of ways one can go wrong with arcane uses of Python subclassing, things I've never thought to do with a subclass in my many years doing OO programming. There are plenty of simpler uses of inheritance that are useful and understandable.

Still, I liked this talk, and the speaker. He was honest about his biases, and he clearly cares about programs and programmers. His excitement gave the talk energy. The second half of the talk included a few good design examples, using subclassing and composition together to achieve various ends. It also recommended the book Architecture Patterns with Python. I haven't read a new software patterns book in a while, so I'll give this one a look.
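The design point, as I took it, is that handing an object a collaborator is often clearer than inheriting from one. A tiny sketch of my own, not from the talk:

```python
class TimestampFormatter:
    """One formatting policy; swap in another class with a format() method."""
    def format(self, msg):
        return f"[t] {msg}"

class Logger:
    # Composition: the formatting policy is a collaborator we hold,
    # not a superclass whose internals we share.
    def __init__(self, formatter):
        self.formatter = formatter
        self.lines = []

    def log(self, msg):
        self.lines.append(self.formatter.format(msg))

log = Logger(TimestampFormatter())
log.log("started")
# log.lines == ["[t] started"]
```

Changing the output format means passing a different formatter to the constructor, with no subclass of Logger in sight.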

Toward the end, the speaker referenced the line "Software engineering is programming integrated over time." Apparently, this is a Google thing, but it was new to me. Clever. I'll think on it.

Talk 3: How We Are Making CPython Faster -- Past, Present and Future

I did not realize that, until Python 3.11, efforts to make the interpreter faster had been rather limited. The speaker mentioned one improvement made in 3.7 to optimize the typical method invocation, obj.meth(arg), and one in 3.8 that sped up global variable access by using a cache. There were others, but nothing systematic.

At this point, the talk became mutually recursive with the Friday talk "Inside CPython 3.11's New Specializing, Adaptive Interpreter". The speaker asked us to go watch that talk and return. If I were more ambitious, I'd add a link to that talk now, but I'll let any of you who are interested visit yesterday's post and scroll down two paragraphs.

He then continued with improvements currently in the works, including:

  • efforts to optimize over larger regions, such as the different elements of a function call
  • use of partial evaluation when possible
  • specialization of code
  • efforts to speed up memory management and garbage collection

He also mentioned possible improvements related to C extension code, but I didn't catch the substance of this one. The speaker offered the audience a pithy takeaway from his talk: Python is always getting faster. Do the planet a favor and upgrade to the latest version as soon as you can. That's a nice hook.

There was lots of good stuff here. Whenever I hear compiler talks like this, I almost immediately start thinking about how I might work some of the ideas into my compiler course. To do more with optimization, we would have to move faster through construction of a baseline compiler, skipping some or all of the classic material. That's a trade-off I've been reluctant to make, given the course's role in our curriculum as a compilers-for-everyone experience. I remain tempted, though, and open to a different treatment.

Talk 4: The Lost Art of Diagrams: Making Complex Ideas Easy to See with Python

Early on, this talk contained a line that programmers sometimes need to remember: Good documentation shows respect for users. Good diagrams, said the speaker, can improve users' lives. The talk was a nice general introduction to some of the design choices available to us as we create diagrams, including the use of color, shading, and shapes (Venn diagrams, concentric circles, etc.). It then discussed a few tools one can use to generate better diagrams. The one that appealed most to me was Mermaid.js, which uses a Markdown-like syntax that reminded me of GraphViz's Dot language. My students and I use GraphViz, so picking up Mermaid might be a comfortable task.

~~~~~

My second day at virtual PyCon confirmed that attending was a good choice. I've seen enough language-specific material to get me thinking new thoughts about my courses, plus a few other topics to broaden the experience. A nice break from the semester's grind.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

April 09, 2023 8:24 AM

It Was Just Revision

There are several revised approaches to "what's the deal with the ring?" presented in "The History of The Lord of the Rings", and, as you read through the drafts, the material just ... slowly gets better! Bit by bit, the familiar angles emerge. There seems not to have been any magic moment: no electric thought in the bathtub, circa 1931, that sent Tolkien rushing to find a pen.

It was just revision.

Then:

... if Tolkien can find his way to the One Ring in the middle of the fifth draft, so can I, and so can you.

-- Robin Sloan, How The Ring Got Good


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

March 29, 2023 2:39 PM

Fighting Entropy in Class

On Friday, I wrote a note to myself about updating an upcoming class session:

Clean out the old cruft. Simplify, simplify, simplify! I want students to grok the idea and the implementation. All that Lisp and Scheme history is fun for me, but it gets in the students' way.

This is part of an ongoing battle for me. Intellectually, I know to design class sessions and activities focused on where students are and what they need to do in order to learn. Yet it continually happens that I strike upon a good approach for a session, and then over the years I stick a little extra in here and there; within a few iterations I have a big ball of mud. Or I fool myself that some bit of history that I find fascinating is somehow essential for students to learn about, too, so I keep it in a session. Over the years, the history grows more and more distant; the needs of the session evolve, but I keep the old trivia in there, filling up time and distracting my students.

It's hard.

The specific case in question here is a session in my programming languages course called Creating New Syntax. The session has the modest goal of introducing students to the idea of using macros and other tools to define new syntactic constructs in a language. My students come into the course with no Racket or Lisp experience, and only a few have enough experience with C/C++ that they may have seen its textual macros. My plan for this session is to expose them to a few ideas and then to demonstrate one of Racket's wonderful facilities for creating new syntax. Given the other demands of the course, we don't have time to go deep, only to get a taste.

[In my dreams, I sometimes imagine reorienting this part of my course around something like Matthew Butterick's Beautiful Racket... Maybe someday.]

Looking at my notes for this session on Friday, I remembered just how overloaded and distracting the session has become. Over the years, I've pared away the extraneous material on macros in Lisp and C, but now it has evolved to include too many ideas and incomplete examples of macros in Racket. Each by itself might make for a useful part of the story. Together, they pull attention here and there without ever closing the deal.

I feel like the story I've been telling is getting in the way of the one or two key ideas about this topic I want students to walk away from the course with. It's time to clean the session up -- to make some major changes -- and tell a more effective story.

The specific idea I seized upon on Friday is an old idea I've had in mind for a while but never tried: adding a Python-like for-loop:

    (for i in lst : (sqrt i))

[Yes, I know that Racket already has a fine set of for-loops! This is just a simple form that lets my students connect their fondness for Python with the topic at hand.]

This functional loop is equivalent to a Racket map expression:

    (map (lambda (i)
           (sqrt i))
         lst)

We can write a simple list-to-list translator that converts the loop to an equivalent map:

    (define for-to-map
      (lambda (for-exp)
        (let ((var (second for-exp))
              (lst (fourth for-exp))
              (exp (sixth for-exp)))
          (list 'map
                (list 'lambda (list var) exp)
                lst))))

This code handles only the surface syntax of the new form. To add it to the language, we'd have to recursively translate the form. But this simple function alone demonstrates the idea of translational semantics, and shows just how easy it can be to convert a simple syntactic abstraction into an equivalent core form.
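For students who think in Python first, the same surface-syntax translation can be sketched with plain lists. This is my own analogue of the Racket function above, not part of the session:

```python
def for_to_map(for_exp):
    # Translate the surface form
    #   ['for', var, 'in', lst, ':', exp]
    # into the core form
    #   ['map', ['lambda', [var], exp], lst]
    var, lst, exp = for_exp[1], for_exp[3], for_exp[5]
    return ['map', ['lambda', [var], exp], lst]

translated = for_to_map(['for', 'i', 'in', 'lst', ':', ['sqrt', 'i']])
# translated == ['map', ['lambda', ['i'], ['sqrt', 'i']], 'lst']
```

Like the Racket version, this handles only the surface syntax: it rewrites one level of the expression tree without recursing into exp.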

Racket, of course, gives us better options! Here is the same transformer using the syntax-rules operator:

    (define-syntax for-p
      (syntax-rules (in :)
        ( (for-p var in lst : exp)
            (map (lambda (var) exp) lst) )  ))

So easy. So powerful. So clear. And this does more than translate surface syntax in the form of a Racket list; it enables the Racket language processor to expand the expression in place and execute the result:

    > (for-p i in (range 0 10) :
        (sqrt i))
    '(0
      1
      1.4142135623730951
      ...
      2.8284271247461903
      3)

This small example demonstrates the key idea about macros and syntax transformers that I want students to take from this session. I plan to open the session with for-p, and then move on to range-case, a more complex operator that demonstrates more of syntax-rules's range and power.

This sets me up for a fun session in a little over a week. I'm excited to see how it plays with students. Renewed simplicity and focus should help.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 12, 2023 9:00 AM

A Spectator to Phase Change

Robin Sloan speculates that language-learning models like ChatGPT have gone through a phase change in what they can accomplish.

AI at this moment feels like a mash-up of programming and biology. The programming part is obvious; the biology part becomes apparent when you see AI engineers probing their own creations the way scientists might probe a mouse in a lab.

Like so many people, I find my social media and blog feeds filled with ChatGPT and LLMs and DALL-E and ... speculation about what these tools mean for (1) the production of text and code, and (2) learning to write and program. A lot of that speculation is tinged with fear.

I admire Sloan's effort to be constructive in his approach to the uncertainty:

I've found it helpful, these past few years, to frame my anxieties and dissatisfactions as questions. For example, fed up with the state of social media, I asked: what do I want from the internet, anyway?

It turns out I had an answer to that question.

Where the GPT-alikes are concerned, a question that's emerging for me is:

What could I do with a universal function — a tool for turning just about any X into just about any Y with plain language instructions?

I admit that I am reacting to these developments slowly compared to many people. That's my style in most things: I am more likely to under-react to a change than to over-react, especially at the onset of the change. In this case, there is no chance of immediate peril, so waiting to see what happens as people use these tools seems like a reasonable response. I haven't made any effort to use these tools actively (or even been moved to), so any speculating I do would be uninformed by personal experience.

Instead, I read as people whose work I respect experiment with these tools and try to make sense of them. Occasionally, I draw a small, tentative conclusion, such as that prompting these generators is a lot like prompting students. After a few months of reading and a little reflection, I still think the biggest risk we face is probably that we tend to change the world around us to accommodate our technologies. If we put these tools to work for us in ways that enhance what we do, then the accommodation will pay off. If not, then we may, as Daniel Steinberg wrote in one of his newsletters, stop asking the questions we want to ask and start asking only the questions these tools can answer.

Professionally, I think most about the effect that ChatGPT and its ilk will have on programming and CS education. In these regards, I've been paying special attention to reports from David Humphrey, such as this blog post on his university's attempt to grapple with the widespread availability of these tools. David has approached OpenAI with an open mind and written thoughtfully about the promise and the risk. For example, he has written a lot of code with an LLM assistant and found it improving his ability both to write code and to think about problems. Advanced CS students can benefit from this kind of assistance, too, but David wonders how such tools might interfere with students first learning to program.

What do we educators want from generative programming tools anyway? What do I as a programmer and educator want from them?

So, at this point, my personal interaction with the phase change that Sloan describes has been mostly passive: I read about what others are doing and think about the results of their exploration. Perhaps this post is a guilty conscience asserting that I should be doing more. Really, though, I think of it more as an active form of inaction: an effort to collect some of my thinking as I slowly respond to the changes that are coming. Perhaps some day soon I will feel moved to use these tools as I write code of my own. For now, though, I am content to watch from the sidelines. You can learn a lot just by watching.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Software Development, Teaching and Learning

March 05, 2023 9:47 AM

Context Matters

In this episode of Conversations With Tyler, Cowen asks economist Jeffrey Sachs if he agrees with several other economists' bearish views on a particular issue. Sachs says they "have been wrong ... for 20 years", laughs, and goes on to say:

They just got it wrong time and again. They had failed to understand, and the same with [another economist]. It's the same story. It doesn't fit our model exactly, so it can't happen. It's got to collapse. That's not right. It's happening. That's the story of our time. It's happening.

"It doesn't fit our model, so it can't happen." But it is happening.

When your model keeps saying that something can't happen, but it keeps happening anyway, you may want to reconsider your model. Otherwise, you may miss the dominant story of the era -- not to mention being continually wrong.

Sachs spends much of his time with Cowen emphasizing the importance of context in determining which model to use and which actions to take. This is essential in economics because the world it studies is simply too complex for the models we have now, even the complex models.

I think Sachs's insight applies to any discipline that works with people, including education and software development.

The topic of education even comes up toward the end of the conversation, when Cowen asks Sachs how to "fix graduate education in economics". Sachs says that one of its problems is that they teach econ as if there were "four underlying, natural forces of the social universe" rather than studying the specific context of particular problems.

He goes on to highlight an approach that is affecting every discipline now touched by data analytics:

We have so much statistical machinery to ask the question, "What can you learn from this dataset?" That's the wrong question because the dataset is always a tiny, tiny fraction of what you can know about the problem that you're studying.

Every interesting problem is bigger than any dataset we build from it. The details of the problem matter. Again: context. Sachs suggests that we shouldn't teach econ like physics, with Maxwell's overarching equations, but like biology, with the seemingly arbitrary details of DNA.

In my mind, I immediately began thinking about my discipline. We shouldn't teach software development (or econ!) like pure math. We should teach it as a mix of principles and context, generalities and specific details.

There's almost always a tension in CS programs between timeless knowledge and the details of specific languages, libraries, and tools. Most students don't go on to become theoretical computer scientists; they go out to work in the world of messy details, details that keep evolving and morphing into something new.

That makes our job harder than teaching math or some sciences because, like economics:

... we're not studying a stable environment. We're studying a changing environment. Whatever we study in depth will be out of date. We're looking at a moving target.

That dynamic environment creates a challenge for those of us teaching software development or any computing as practiced in the world. CS professors have to constantly be moving, so as not to fall out of date. But they also have to try to identify the enduring principles that their students can count on as they go on to work in the world for several decades.

To be honest, that's part of the fun for many of us CS profs. But it's also why so many CS profs can burn out after 15 or 20 years. A never-ending marathon can wear anyone out.

Anyway, I found Cowen's conversation with Jeffrey Sachs to be surprisingly stimulating, both for thinking about economics and for thinking about software.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

February 18, 2023 11:16 AM

CS Students Should Take Their Other Courses Seriously, Too

(These days, I'm posting a bit more on Mastodon. It has a higher character limit than Twitter, so I sometimes write longer posts, including quoted passages. Those long posts start to blur in length and content with blog posts. In an effort to blog more, and to preserve writing that may have longer value than a social media post provides, I may start capturing threads there as blog posts here. This post originated as a Mastodon post last week.)

~~~~~

This post contains one of the reasons I tell prospective CS students to take their humanities and social science courses seriously:

In short, the key skill for making sense of the world of information is developing the ability to accurately and neutrally summarize some body of information in your own words.

The original poster responded that it wasn't until going back to pursue a master's degree in library and information science that this lesson hit home for him.

I always valued my humanities and social science courses, both because I enjoyed them and because they let me practice valuable skills that my CS and math courses didn't exercise. But this lesson hit home for me in a different way after I became a professor.

Over my years teaching, I've seen students succeed and struggle in a lot of different ways. The ability to read and synthesize information with facility is one of the main markers of success, one of the things that can differentiate between students who do well and those who don't. It's also hard to develop this skill after students get to college. Nearly every course and major depends on it, even technical courses, even courses and majors that don't foreground this kind of skill. Without it, students keep falling farther behind. It's hard to develop the skill quickly enough to not fall so far behind that success feels impossible.

So, kids and parents, when you ask how to prepare to succeed as a CS student while in high school, one of my answers will almost always be: take four years of courses in every core area, including English and social science. The skills you develop and practice there will pay off many-fold in college.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 11, 2023 1:53 PM

What does it take to succeed as a CS student?

Today I received an email message similar to this:

I didn't do very well in my first semester, so I'm looking for ways to do better this time around. Do you have any ideas about study resources or tips for succeeding in CS courses?

As an advisor, I'm occasionally asked by students for advice of this sort. As department head, I receive even more queries, because early on I am the faculty member students know best, from campus visits and orientation advising.

When such students have already attempted a CS course or two, my first step is always to learn more about their situation. That way, I can offer suggestions suited to their specific needs.

Sometimes, though, the request comes from a high school student, or a high school student's parent: What is the best way to succeed as a CS student?

To be honest, most of the advice I give is not specific to a computer science major. At a first approximation, what it takes to succeed as a CS student is the same as what it takes to succeed as a student in any major: show up and do the work. But there are a few things a CS student does that are discipline-specific, most of which involve the tools we use.

I've decided to put together a list of suggestions that I can point our students to, and to which I can refer occasionally in order to refresh my mind. My advice usually includes one or all of these suggestions, with a focus on students at the beginning of our program:

  • Go to every class and every lab session. This is #0 because it should go without saying, but sometimes saying it helps. Students don't always have to go to their other courses every day in order to succeed.

  • Work steadily on a course. Do a little work on your course, both programming and reading or study, frequently -- every day, if possible. This gives your brain a chance to see patterns more often and learn more effectively. Cramming may help you pass a test, but it doesn't usually help you learn how to program or make software.

  • Ask your professor questions sooner rather than later. Send email. Visit office hours. This way, you get answers sooner and don't end up spinning your wheels while doing homework. Even worse, feeling confused can lead you to shy away from doing the work, which gets in the way of #1.

  • Get to know your programming environment. When programming in Python, simply feeling comfortable with IDLE, and with the file system where you store your programs and data, can make everything else seem easier. Your mind doesn't have to look up basic actions or worry about details, which enables you to make the most of your programming time: working on the assigned task.

  • Spend some of your study time with IDLE open. Even when you aren't writing a program, the REPL can help you! It lets you try out snippets of code from your reading, to see them work. You can run small experiments of your own, to see whether you understand syntax and operators correctly. You can make up your own examples to fill in the gaps in your understanding of the problem.

    Getting used to trying things out in the interactions window can be a huge asset. This is one of the touchstones of being a successful CS student.

That's what came to mind at the end of a Friday, at the end of a long week, when I sat down to give advice to one student. I'd love to hear your suggestions for improving the suggestions in my list, or other bits of advice that would help our students. Email me your ideas, and I'll make my list better for anyone who cares to read it.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

January 29, 2023 7:51 AM

A Thousand Feet of Computing

Cory Doctorow, in a recent New Yorker interview reminisces about learning to program. The family had a teletype and modem.

My mom was a kindergarten teacher at the time, and she would bring home rolls of brown bathroom hand towels from the kid's bathroom at school, and we would feed a thousand feet of paper towel into the teletype and I would get a thousand feet of computing after school at the end of the day.

Two things:

  • Tsk, tsk, Mom. Absconding with school supplies, even if for a noble cause! :-) Fortunately, the statute of limitations on pilfering paper hand towels has likely long since passed.

  • I love the idea of doing "a thousand feet of computing" each day. What a wonderful phrase. With no monitor, the teletype churns out paper for every line of code, and every line the code produces. You know what they say: A thousand feet a day makes a happy programmer.

The entire interview is a good read on the role of computing in modern society. The programmer in me also resonated with this quote from Doctorow's 2008 novel, Little Brother:

If you've never programmed a computer, there's nothing like it in the whole world. It's awesome in the truest sense: it can fill you with awe.

My older daughter recommended Little Brother to me when it first came out. I read many of her recommendations promptly, but for some reason this one sits on my shelf unread. (The PDF also sits in my to-read/ folder, unread.) I'll move it to the top of my list.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 18, 2023 2:46 PM

Prompting AI Generators Is Like Prompting Students

Ethan Mollick tells us how to generate prompts for programs like ChatGPT and DALL-E: give direct and detailed instructions.

Don't ask it to write an essay about how human error causes catastrophes. The AI will come up with a boring and straightforward piece that does the minimum possible to satisfy your simple demand. Instead, remember you are the expert and the AI is a tool to help you write. You should push it in the direction you want. For example, provide clear bullet points to your argument: write an essay with the following points: -Humans are prone to error -Most errors are not that important -In complex systems, some errors are catastrophic -Catastrophes cannot be avoided

But even the results from such a prompt are much less interesting than if we give a more explicit prompt. For instance, we might add:

use an academic tone. use at least one clear example. make it concise. write for a well-informed audience. use a style like the New Yorker. make it at least 7 paragraphs. vary the language in each one. end with an ominous note.

This reminds me of setting essay topics for students, either for long-form writing or for exams. If you give a bland, uninteresting question, you will generally get a bland, uninteresting answer. Such essays are hard to evaluate. A squooshy question allows the student to write almost anything in response. Students are usually unhappy in this scenario, too, because they don't know what you want them to write, or how they will be evaluated.

Asking a human a more specific question has downsides, though. It increases the cognitive load placed on them, because there are more things for them to be thinking about as they write. Is my tone right? Does this sound like the New Yorker? Did I produce the correct number of paragraphs? Is my essay factually accurate? (ChatGPT doesn't seem to worry about this one...) The tradeoff is clearer expectations. Many students prefer this trade, at least on longer form assignments when they have time to consider the specific requests. A good spec reduces uncertainty.

Maybe these AI programs are closer to human than we think after all. (Some people don't worry much about correctness either.)

~~~~

On a related note: As I wrote elsewhere, I refuse to call ChatGPT or any program "an AI". The phrase "artificial intelligence" is not a term for a specific agent; it is the name of an idea. Besides, none of these programs are I yet, only A.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 15, 2023 12:07 PM

You Can Learn A Lot About People Just By Talking With Them

This morning on the exercise bike, I read a big chunk of Daniel Gross and Tyler Talk Talent, from the Conversations with Tyler series. The focus of this conversation is how to identify talent, as prelude to the release of their book on that topic.

The bit I've read so far has been like most Conversations with Tyler: interesting ideas with Cowen occasionally offering an offbeat idea seemingly for the sake of being offbeat. For example, if the person he is interviewing has read Shakespeare, he might say,

"Well, my hypothesis is that in Romeo and Juliet, Romeo and Juliet don't actually love each other at all. Does the play still make sense?" Just see what they have to say. It's a way of testing their second-order understanding of situations, diversity of characters.

This is a bit much for my taste, but the motivating idea behind talking to people about drama or literature is sound:

It's not something they can prepare for. They can't really fake it. If they don't understand the topic, well, you can switch to something else. But if you can't find anything they can understand, you figure, well, maybe they don't have that much depth or understanding of other people's characters.

It seems to me that this style of interviewing runs a risk of not being equitable to all candidates, and at the very least places high demands on both the interviewee and the interviewer. That said, Gross summarizes the potential value of talking to people about movies, music, and other popular culture in interviews:

I think that works because you can learn a lot from what someone says -- they're not likely to make up a story -- but it's also fun, and it is a common thing many people share, even in this era of HBO and Netflix.

This exchange reminded me of perhaps my favorite interview of all time, one in which I occupied the hot seat.

I was a senior in high school, hoping to study architecture at Ball State University. (Actual architecture... the software thing would come later.) I was a pretty good student, so I applied for Ball State's Whitinger Scholarship, one of the university's top awards. My initial application resulted in me being invited to campus for a personal interview. First, I sat to write an essay over the course of an hour, or perhaps half an hour. To be honest, I don't remember many details from that part of the day, only sitting in a room by myself for a while with a blue book and writing away. I wrote a lot of essays in those days.

Then I met with Dr. Warren Vander Hill, the director of the Honors College, for an interview. I'd had a few experiences on college campuses in the previous couple of years, but I still felt a little out of my element. Though I came from a home that valued reading and learning, my family background was not academic.

On a shelf behind Dr. Vander Hill, I noticed a picture of him in a Hope College basketball jersey, dribbling during a college game. I casually asked him about it and learned that he had played Division III varsity ball as an undergrad. I just now searched online in hopes of confirming my memory and learned that he is still #8 on the list of Hope's all-time career scoring leaders. I don't recall him slipping that fact into our chat... (Back then, he would have been #2!)

Anyway, we started talking basketball. Somehow, the conversation turned to Oscar Robertson, one of the NBA's all-time great players. He starred at Indianapolis's all-black Crispus Attucks High School and led the school to a state championship in 1955. From there, we talked about a lot of things -- the integration of college athletics, the civil rights movement, the state of the world in 1982 -- but it all seemed to revolve around basketball.

The time flew. Suddenly, the interview period was over, and I headed home. I enjoyed the conversation quite a bit, but on the hour drive, I wondered if I'd squandered my chances at the scholarship by using my interview time to talk sports. A few weeks later, though, I received a letter saying that I had been selected as one of the recipients.

That was the beginning of four very good years for me. Maybe I can trace some of that fortune to a conversation about sports. I certainly owe a debt to the skill of the person who interviewed me.

I got to know Dr. Vander Hill better over the next four years and slowly realized that he had probably known exactly what he was doing in that interview. He had found a common interest we shared and used it to start a conversation that opened up into bigger ideas. I couldn't have prepared answers for this conversation. He could see that I wasn't making up a story, that I was genuinely interested in the issues we were discussing and was, perhaps, genuinely interesting. The interview was a lot of fun, for both of us, I think, and he learned a lot about me from just talking.

I learned a lot from Dr. Vander Hill over the years, though what I learned from him that day took a few years to sink in.


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Personal, Teaching and Learning

January 10, 2023 2:20 PM

Are Major Languages' Parsers Implemented By Hand?

Someone on Mastodon posted a link to a 2021 survey of how the parsers for major languages are implemented. Are they written by hand, or automated by a parser generator? The answer was mixed: a few are generated by yacc-like tools (some of which were custom built), but many are written by hand, often for speed.

My two favorite notes:

Julia's parser is handwritten but not in Julia. It's in Scheme!

Good for the Julia team. Scheme is a fine language in which to write -- and maintain -- a parser.

Not only [is Clang's parser] handwritten but the same file handles parsing C, Objective-C and C++.

I haven't clicked through to the source code for Clang yet but, wow, that must be some file.

Finally, this closing comment in the paper hit close to the heart:

Although parser generators are still used in major language implementations, maybe it's time for universities to start teaching handwritten parsing?

I have persisted in having my compiler students write table-driven parsers by hand for over fifteen years. As I noted in this post at the beginning of the 2021 semester, my course is compilers for everyone in our major, or potentially so. Most of our students will not write another compiler in their careers, and traditional skills like implementing recursive descent and building a table-driven program are valuable to them more generally than knowing yacc or bison. Any of my compiler students who do eventually want to use a parser generator are well prepared to learn how, and they'll understand what's going on when they do, to boot.
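For readers who haven't written one, the flavor of handwritten recursive descent fits in a few lines. This toy is my own illustration, not course code: it parses expressions over the grammar expr -> term { + term }, term -> factor { * factor }, factor -> number, with one function per rule, taking a flat token list and returning the AST plus the remaining tokens:

```racket
#lang racket

;; Toy recursive-descent parser.  Each parse function returns two
;; values: the AST built so far and the tokens left to consume.
;; (My own illustration, not code from the course.)

(define (parse-factor toks)
  (values (car toks) (cdr toks)))

(define (parse-term toks)
  (let-values ([(left rst) (parse-factor toks)])
    (let loop ([left left] [rst rst])
      (if (and (pair? rst) (eq? (car rst) '*))
          (let-values ([(right rst2) (parse-factor (cdr rst))])
            (loop (list '* left right) rst2))
          (values left rst)))))

(define (parse-expr toks)
  (let-values ([(left rst) (parse-term toks)])
    (let loop ([left left] [rst rst])
      (if (and (pair? rst) (eq? (car rst) '+))
          (let-values ([(right rst2) (parse-term (cdr rst))])
            (loop (list '+ left right) rst2))
          (values left rst)))))

(let-values ([(ast rst) (parse-expr '(2 + 3 * 4))])
  ast)
;; => '(+ 2 (* 3 4))
```

Operator precedence falls out of the grammar's structure: term sits below expr, so * binds tighter than + without any explicit precedence table.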

My course is so old school that it's back to the forefront. I just had to be patient.

(I posted the seeds of this entry on Mastodon. Feel free to comment there!)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

January 09, 2023 12:19 PM

Thoughts on Software and Teaching from Last Week's Reading

I'm trying to get back into the habit of writing here more regularly. In the early days of my blog, I posted quick snippets every so often. Here's a set to start 2023.

• Falsework

From A Bridge Over a River Never Crossed:

Funnily enough, traditional arch bridges were built by first having a wood framing on which to lay all the stones in a solid arch (YouTube). That wood framing is called falsework, and is necessary until the arch is complete and can stand on its own. Only then is the falsework taken away. Without it, no such bridge would be left standing. That temporary structure, even if no trace is left of it at the end, is nevertheless critical to getting a functional bridge.

Programmers sometimes write a function or an object that helps them build something else that they couldn't easily have built otherwise, then delete the bridge code after they have written the code they really wanted. A big step in the development of a student programmer is when they do this for the first time, and feel in their bones why it was necessary and good.

• Repair as part of the history of an object

From The Art of Imperfection and its link back to a post on making repair visible, I learned about Kintsugi, a practice in Japanese art...

that treats breakage and repair as part of the history of an object, rather than something to disguise.

I have this pattern around my home, at least on occasion. I often repair my backpack, satchel, or clothing and leave evidence of the repair visible. My family thinks it's odd but figures it's just me.

Do I do this in code? I don't think so. I tend to like clean code, with no distractions for future readers. The closest thing to Kintsugi I can think of now is comments that mention where some bit of code came from, especially if the current code is not intuitive to me at the time. Perhaps my memory is failing me, though. I'll be on the watch for this practice as I program.

• "It is good to watch the master."

I've been reading a rundown of the top 128 tennis players of the last hundred years, including this one about Pancho Gonzalez, one of the great players of the 1940s, '50s, and '60s. He was forty years old when the Open Era of tennis began in 1968, well past his prime. Even so, he could still compete with the best players in the game.

Even his opponents could appreciate the legend in their midst. Denmark's Torben Ulrich lost to him in five sets at the 1969 US Open. "Pancho gives great happiness," he said. "It is good to watch the master."

The old masters give me great happiness, too. With any luck, I can give a little happiness to my students now and then.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Software Development, Teaching and Learning

January 05, 2023 12:15 PM

A Family of Functions from a Serendipitous Post, and Thoughts about Teaching

Yesterday, Ben Fulton posted on Mastodon:

TIL: C++ has a mismatch algorithm that returns the first non-equal pair of elements from two sequences. ...

C++'s mismatch was new to me, too, so I clicked through to the spec on cppreference.com to read a bit more. I learned that mismatch is an algorithm implemented as a template function with several different signatures. My thoughts turned immediately to my spring course, Programming Languages, which starts with an introduction to Racket and functional programming. mismatch would make a great example or homework problem for my students, as they learn to work with Racket lists and functions! I stopped working on what I was doing and used the C++ spec to draw up a family of functions for my course:

    ; Return the first mismatching pair of elements from two lists.
    ; Compare using eq?.
    ;   (mismatch lst1 lst2)
    ;
    ; Compare using a given binary predicate comp?.
    ;   (mismatch comp? lst1 lst2)
    ;
    ; Compare using a given binary predicate comp?,
    ; as a higher-order function.
    ;   ((make-mismatch comp?) lst1 lst2)
    ;
    ; Return the first mismatching pair of elements from two ranges,
    ; also as a higher-order function.
    ; If last2 is not provided, it denotes first2 + (last1 - first1).
    ;   (make-mismatch first1 last1 first2 [last2]) -> (f lst1 lst2)

Of course, this list is not exhaustive, only a start. With so many related possibilities, mismatch will make a great family of examples or homework problems for the course! What a fun distraction from the other work in my backlog.
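For the curious, here is one way the first three variants might come out. These are my own sketches, not necessarily the solutions I'll assign; I've chosen to return the mismatching pair as a two-element list and #f when the common prefix matches:

```racket
#lang racket

;; One possible shape for the first three variants; a sketch of my
;; own, not necessarily the assigned solution.  Returns the first
;; mismatching pair as a list, or #f if the common prefix matches.
(define mismatch
  (case-lambda
    [(lst1 lst2) (mismatch eq? lst1 lst2)]     ; default to eq?
    [(comp? lst1 lst2)
     (cond [(or (null? lst1) (null? lst2)) #f]
           [(comp? (car lst1) (car lst2))
            (mismatch comp? (cdr lst1) (cdr lst2))]
           [else (list (car lst1) (car lst2))])]))

;; Higher-order variant: fix the predicate, return a comparer.
(define (make-mismatch comp?)
  (lambda (lst1 lst2) (mismatch comp? lst1 lst2)))

(mismatch '(a b c d) '(a b x d))        ; => '(c x)
((make-mismatch =) '(1 2 3) '(1 2 3))   ; => #f
```

Even this small sketch touches several course topics at once: recursion over lists, passing a predicate as an argument, returning a function as a value, and (via case-lambda) accepting a variable number of arguments.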

Ben's post conveniently arrived in the middle of an email discussion with the folks who teach our intro course, about ChatGPT and the role it will play in Intro. I mentioned ChatGPT in a recent post suggesting that we all think about tools like ChatGPT and DALL-E from the perspective of cultural adaptation: how do we live with new AI tools knowing that we change our world to accommodate our technologies? In that post, I mentioned only briefly the effect that these tools will have on professors, their homework assignments, and the way we evaluate student competencies and performance. The team preparing to teach Intro this spring has to focus on these implications now because they affect how the course will work. Do we want to mitigate the effects of ChatGPT and, if so, how?

I think they have decided mostly to take a wait-and-see approach this semester. We always have a couple of students who do not write their own code, and ChatGPT offers them a new way not to do so. When we think students have not written the code they submitted, we talk with them. In particular, we discuss the code and ask the student to explain or reason about it.

Unless the presence of ChatGPT greatly increases the number of students submitting code they didn't write, this approach should continue to work. I imagine we will be fine. Most students want to learn; they know that writing code is where they learn the most. I don't expect that access to ChatGPT is going to change the number of students taking shortcuts, at least not in large numbers. Let's trust our students as we keep a watchful eye out for changes in behavior.

The connection between mismatch and the conversation about teaching lies in the role that a family of related functions such as mismatch can play in building a course that is more resistant to the use of AI assistants in a way that harms student learning. I already use families of related function specs as a teaching tool in my courses, for purely pedagogical reasons. Writing different versions of the same function, or seeing related functions used to solve slightly different problems, is a good way to help students deepen understanding of an idea or to help them make connections among different concepts. My mismatches give me another way to help students in Programming Languages learn about processing lists, passing functions as arguments, returning functions as values, and accepting a variable number of arguments. I'm curious to see how this family of functions works for students.

A set of related functions also offers a tool for helping professors determine whether students have learned to write code. We already ask students in our intro course to modify code. Asking students to convert a function with one spec into a function with a slightly different spec, like writing different versions of the same function, gives them the chance to benefit from their understanding of the existing code. It is easier for a programmer to modify a function if they understand it. The existing code is a scaffold that enables the student to focus on the single feature or concept they need to write the new code.

Students who have not written code like the code they are modifying have a harder time reading and modifying the given code, especially when operating under any time or resource limitation. In a way, code modification exercises do something similar to asking students to explain code to us: the modification task exposes when students don't understand code they claim to have written.

Having ChatGPT generate a function for you won't be as valuable if you will soon be asked to explain the code in detail or to modify the code in a way that requires you understand it. Increasing the use of modification tasks is one way to mitigate the benefits of a student having someone else write the code for them. Families of functions such as mismatch above are a natural source of modification tasks.

Beyond the coming semester, I am curious how our thinking about writing code will evolve in the presence of ChatGPT-like tools. Consider the example of auto-complete facilities in our editors. Few people these days think of using auto-complete as cheating, but when it first came out many professors were concerned that using auto-complete was a way for students not to learn function signatures and the details of standard libraries. (I'm old enough to still have a seed of doubt about auto-complete buried somewhere deep in my mind! But that's just me.)

If LLM-based tools become the new auto-complete, one level up from function signatures, then how we think about programming will probably change. Likewise how we think about teaching programming... or not. Did we change how we teach much as a result of auto-complete?

The existence of ChatGPT is a bit disconcerting for today's profs, but the long-term implications are kind of interesting.

In the meantime, coming across example generators like C++'s mismatch helps me deal with the new challenge and gives me unexpected fun writing code and problem descriptions.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 30, 2022 12:40 PM

What Can Universities Learn from Barnes & Noble's Surprising Turnaround?

A recent issue of Ted Gioia's newsletter asks, What can we learn from Barnes & Noble's surprising turnaround? In one sentence, the answer is:

If you want to sell books, you must love those books.

Perhaps we can apply Barnes & Noble's lesson to education.

If anything will save the mid-sized comprehensive university in the face of changing demographics and state funding, it will likely be this:

If you want to teach students, you must love both teaching and students.

Comprehensive universities (regional universities that focus on undergraduate education) are caught in the middle between large research-focused schools and small, mostly private schools. They try to offer the best of both worlds, without having the resources that buttress those other schools' operations. The big research schools have external research funding, large media contracts for their athletics programs, and primacy of place in the minds of potential students. The small private schools offer the "small school experience", often to targeted audiences of students and often with considerable endowments and selective admissions that heighten the appeal.

Mid-sized comprehensives are unsung jewels in many states, but economic changes make it harder to serve their mission than it was forty or even twenty years ago. They don't have much margin for error. What are they to do? As Barnes & Noble is demonstrating, the key to success for a bookstore is to put books and readers first. For the comprehensives, the key to success is almost certainly to put students and teaching first.

Other lessons from the Barnes & Noble turnaround may help, too. For example, in tough economic times, universities tend to centralize resources and decision making, in the name of focus and efficiency. However, decentralization empowers those closest to the students to meet the needs of the students in each academic discipline. When given the chance, faculty and staff in the academic departments need to take this responsibility seriously. But then, most faculty are at comprehensives precisely because they want to work with undergraduates. The key element to it all, though, is putting students and teaching first, and everything else second.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 22, 2022 1:21 PM

The Ability to Share Partial Results Accelerated Modern Science

This passage is from Lewis Thomas's The Lives of a Cell, in the essay "On Societies as Organisms":

The system of communications used in science should provide a neat, workable model for studying mechanisms of information-building in human society. Ziman, in a recent "Nature" essay, points out, "the invention of a mechanism for the systematic publication of fragments of scientific work may well have been the key event in the history of modern science." He continues:
A regular journal carries from one research worker to another the various ... observations which are of common interest. ... A typical scientific paper has never pretended to be more than another little piece in a larger jigsaw -- not significant in itself but as an element in a grander scheme. The technique of soliciting many modest contributions to the store of human knowledge has been the secret of Western science since the seventeenth century, for it achieves a corporate, collective power that is far greater than any one individual can exert [italics mine].

In the 21st century, sites like arXiv lowered the barrier to publishing and reading the work of other scientists further. So did blogs, where scientists could post even smaller, fresher fragments of knowledge. Blogs also democratized science, by enabling scientists to explain results for a wider audience and at greater length than journals allow. Then came social media sites like Twitter, which made it even easier for laypeople and scientists in other disciplines to watch -- and participate in -- the conversation.

I realize that this blog post quotes an essay that quotes another essay. But I would never have seen the Ziman passage without reading Thomas. Perhaps you would not have seen the Thomas passage without reading this post? When I was in college, the primary way I learned about things I didn't read myself was by hearing about them from classmates. That mode of sharing puts a high premium on having the right kind of friends. Now, blogs and social media extend our reach. They help us share ideas and inspirations, as well as helping us to collaborate on science.

~~~~

I first mentioned The Lives of a Cell a couple of weeks ago, in If only ants watched Netflix.... This post may not be the last to cite the book. I find something quotable and worth further thought every few pages.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns, Teaching and Learning

December 11, 2022 9:09 AM

Living with AI in a World Where We Change the World to Accommodate Our Technologies

My social media feeds are full of ChatGPT screenshots and speculation these days, as they have been with LLMs and DALL-E and other machine learning-based tools for many months. People wonder what these tools will mean for writers, students, teachers, artists, and anyone who produces ordinary text, programs, and art.

These are natural concerns, given their effect on real people right now. But if you consider the history of human technology, they miss a bigger picture. Technologies often eliminate the need for a certain form of human labor, but they just as often create a new form of human labor. And sometimes, they increase the demand for the old kind of labor! If we come to rely on LLMs to generate text for us, where will we get the text with which to train them? Maybe we'll need people to write even more replacement-level prose and code!

As Robin Sloan reminds us in the latest edition of his newsletter, A Year of New Avenues, we redesign the world to fit the technologies we create and adopt.

Likewise, here's a lesson from my work making olive oil. In most places, the olive harvest is mechanized, but that's only possible because olive groves have been replanted to fit the shape of the harvesting machines. A grove planted for machine harvesting looks nothing like a grove planted for human harvesting.

Which means that our attention should be on how programs like ChatGPT might lead us to redesign the world we live and work in to better accommodate these new tools:

For me, the interesting questions sound more like
  • What new or expanded kinds of human labor might AI systems demand?
  • What entirely new activities do they suggest?
  • How will the world now be reshaped to fit their needs?
That last question will, on the timescale of decades, turn out to be the most consequential, by far. Think of cars ... and of how dutifully humans have engineered a world just for them, at our own great expense. What will be the equivalent, for AI, of the gas station, the six-lane highway, the parking lot?

Many professors worry that ChatGPT makes their homework assignments and grading rubrics obsolete, which is a natural concern in the short run. I'm old enough that I may not live to work in a world with the AI equivalent of the gas station, so maybe that world seems too far in the future to be my main concern. But the really interesting questions for us to ask now revolve around how tools such as these will lead us to redesign our worlds to accommodate and even serve them.

Perhaps, with a little thought and a little collaboration, we can avoid engineering a world for them at our own great expense. How might we benefit from the good things that our new AI technologies can provide us while sidestepping some of the highest costs of, say, the auto-centric world we built? Trying to answer that question is a better long-term use of our time and energy than fretting about our "Hello, world!" assignments and advertising copy.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

December 03, 2022 2:34 PM

Why Teachers Do All That Annoying Stuff

Most people, when they become teachers, tell themselves that they won't do all the annoying things that their teachers did. If they teach for very long, though, they almost all find themselves slipping back to practices they didn't like as a student but which they now understand from the other side of the fence. Dynomight has written a nice little essay explaining why. Like deadlines. Why have deadlines? Let students learn and work at their own pace. Grade what they turn in, and let them re-submit their work later to demonstrate their newfound learning.

Indeed, why not? Because students are clever and occasionally averse to work. A few of them will re-invent a vexing form of the ancient search technique "Generate and Test". From the essay:

  1. Write down some gibberish.
  2. Submit it.
  3. Make a random change, possibly informed by feedback on the last submission.
  4. Resubmit it. If the grade improved, keep it, otherwise revert to the old version.
  5. Goto 3.

You may think this is a caricature, but I see this pattern repeated even in the context of weekly homework assignments. A student will start early and begin a week-long email exchange where they eventually evolve a solution that they can turn in when the assignment is due.

I recognize that these students are responding in a rational way to the forces they face: usually, uncertainty and a lack of the basic understanding needed to even start the problem. My strategy is to try to engage them early on in the conversation in a way that helps them build that basic understanding and to quiet their uncertainty enough to make a more direct effort to solve the problem.

Why even require homework? Most students and teachers want grades to reflect the student's level of mastery. If we eliminate homework, or make it optional, students have the opportunity to demonstrate their mastery on the final exam or final project. Why indeed? As the essay says:

But just try it. Here's what will happen:
  1. Like most other humans, your students will be lazy and fallible.
  2. So many of them will procrastinate and not do the homework.
  3. So they won't learn anything.
  4. So they will get a terrible grade on the final.
  5. And then they will blame you for not forcing them to do the homework.

Again, the essay is written in a humorous tone that exaggerates the foibles and motivations of students. However, I have been living a variation of this pattern in my compilers course over the last few years. Here's how things have evolved.

I assign the compiler project as six stages of two weeks each. At the end of the semester, I always ask students for ways I might improve the course. After a few years teaching the course, students began to tell me that they found themselves procrastinating at the start of each two-week cycle and then having to scramble in the last few days to catch up. They suggested I require future students to turn something in at the end of the first week, as a way to get them started working sooner.

I admired their self-awareness and added a "status check" at the midpoint of each two-week stage. The status check was not to be graded, but to serve as a milepost they could aim for in completing that cycle's work. The feedback I provided, informal as it was, helped them stay on course, or get back on course, if they had missed something important.

For several years, this approach worked really well. A few teams gamed the system, of course (see generate-and-test above), but by and large students used the status checks as intended. They were able to stay on track time-wise and to get some early feedback that helped them improve their work. Students and professor alike were happy.

Over the last couple of years, though, more and more teams have begun to let the status checks slide. They are busy, overburdened in other courses or distracted by their own projects, and ungraded work loses priority. The result is exactly what the students who recommended the status checks knew would happen: procrastination and a mad scramble in the last few days of the stage. Unfortunately, this approach can lead a team to fall farther and farther behind with each passing stage. It's hard to produce a complete working compiler under these conditions.

Again, I recognize that students usually respond in a rational way to the forces they face. My job now is to figure out how we might remove those forces, or address them in a more productive way. I've begun thinking about alternatives, and I'll be debriefing the current offering of the course with my students over the next couple of weeks. Perhaps we can find something that works better for them.

That's certainly my goal. When a team succeeds at building a working compiler, and we use it to compile and run an impressive program -- there's no feeling quite as joyous for a programmer, or a student, or a professor. We all want that feeling.

Anyway, check out the full essay for an entertaining read that also explains quite nicely that teachers are usually responding in a rational way to the forces they face, too. Cut them a little slack.


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Patterns, Teaching and Learning

November 20, 2022 9:00 AM

Beautiful Ideas Deserve Beautiful Presentation

From Matthew Butterick, of Beautiful Racket fame:

I always think about—and encourage others to think about—how design and typography can complement the underlying object by finding a way to communicate its virtues. If your writing contains beautiful ideas, then your presentation of that writing should be likewise beautiful.

I am old-fashioned, perhaps, in feeling this way about software. A program that embodies beautiful ideas should itself be beautifully designed, and beautifully presented. The universe calls us in this direction.

The above passage comes from a charming essay on how Butterick ended up living a life of typography, in addition to his skills as a programmer and a lawyer. Spoiler: the title "Power, Corruption & Lies" refers not to any political intrigue but to an album by the rock band New Order.

Butterick is also one of the principals pursuing action against GitHub Copilot for violating the terms of the open-source licenses on the software it mined to build the tool. Sometimes, programmers have to deal with things less beautiful than the code we like to write, in order to protect that code and its creators.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

October 30, 2022 9:32 AM

Recognize

From Robin Sloan's newsletter:

There was a book I wanted very badly to write; a book I had been making notes toward for nearly ten years. (In my database, the earliest one is dated December 13, 2013.) I had not, however, set down a single word of prose. Of course I hadn't! Many of you will recognize this feeling: your "best" ideas are the ones you are most reluctant to realize, because the instant you begin, they will drop out of the smooth hyperspace of abstraction, apparate right into the asteroid field of real work.

I can't really say that there is a book I want very badly to write. In the early 2000s I worked with several colleagues on elementary patterns, and we brainstormed writing an intro CS textbook built atop a pattern language. Posts from the early days of this blog discuss some of this work from ChiliPLoP, I think. I'm not sure that such a textbook could ever have worked in practice, but I think writing it would have been a worthwhile experience anyway, for personal growth. But writing such a book takes a level of commitment that I wasn't able to make.

That experience is one of the reasons I have so much respect for people who do write books.

While I do not have a book for which I've been making notes in recent years, I do recognize the feeling Sloan describes. It applies to blog posts and other small-scale writing. It also applies to new courses one might create, or old courses one might reorganize and teach in a very different way.

I've been fortunate to be able to create and re-create many courses over my career. I also have some ideas that sit in the back of my mind because I'm a little concerned about the commitment they will require, the time and the energy, the political wrangling. I'm also aware that the minute I begin to work on them, they will no longer be perfect abstractions in my mind; they will collide with reality and require compromises and real work.

(TIL the word "apparate". I'm not sure how I feel about it yet.)


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Personal, Teaching and Learning

October 23, 2022 9:54 AM

Here I Go Again: Carmichael Numbers in Graphene

I've been meaning to write a post about my fall compilers course since the beginning of the semester but never managed to set aside time to do anything more than jot down a few notes. Now we are at the end of Week 9 and I just must write. Long-time readers know what motivates me most: a fun program to write in my students' source language!

TFW you run across a puzzle and all you want to do now is write a program to solve it. And teach your students about the process.
-- https://twitter.com/wallingf/status/1583841233536884737

Yesterday, it wasn't a puzzle so much as discovering a new kind of number, Carmichael numbers. Of course, I didn't discover them (neither did Carmichael, though); I learned of them from a Quanta article about a recent proof about these numbers that masquerade as primes. One way of defining this set comes from Korselt:

A positive composite integer n is a Carmichael number if and only if it has multiple prime divisors, no prime divisor repeats, and for each prime divisor p, p-1 divides n-1.

This definition is relatively straightforward, and I quickly imagined an imperative solution with a loop and a list. The challenge of writing a program to verify that a number is a Carmichael number in my compiler course's source language is that it has neither of these things. It has no data structures or even local variables; only basic integer and boolean arithmetic, if expressions, and function calls.

Challenge accepted. I've written many times over the years about the languages I ask my students to write compilers for and about my adventures programming in them, from Twine last year through Flair a few years ago to a recurring favorite, Klein, which features prominently in popular posts about Farey sequences and excellent numbers.

This year, I created a new language, Graphene, for my students. It is essentially a small functional subset of Google's new language Carbon. But like its predecessors, it is something of an integer assembly language, fully capable of working with statements about integers and primes. Korselt's description of Carmichael numbers is right in Graphene's sweet spot.

As I wrote in the post about Klein and excellent numbers, my standard procedure in cases like this is to first write a reference program in Python using only features available in Graphene. I must do this if I hope to debug and test my algorithm, because we do not have a working Graphene compiler yet! (I'm writing my compiler in parallel with my students, which is one of the key subjects in my phantom unwritten post.) I was proud this time to write my reference program in a Graphene-like subset of Python from scratch. Usually I write a Pythonic solution, using loops and variables, as a way to think through the problem, and then massage that code down to a program using a limited set of concepts. This time, I started with a short procedural outline:

    # walk up the primes to n
    #   - find a prime divisor p:
    #     - test if a repeat         (yes: fail)
    #     - test if p-1 divides n-1  (no : fail)
    # return # prime divisors > 1
and then implemented it in a single recursive function. The first few tests were promising. My algorithm rejected many small numbers not in the set, and it correctly found 561, the smallest Carmichael number. But when I tested all the numbers up to 561, I saw many false positives. A little print-driven debugging found the main bug: I was stopping too soon in my search for prime divisors, at sqrt(n), due to some careless thinking about factors. Once I fixed that, boom, the program correctly handled all n up to 3000. I don't have a proof of correctness, but I'm confident the code is correct. (Famous last words, I know.)
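A reconstruction of that reference program, written here in the same restricted style (one recursive worker plus a small divisibility helper; this sketch is my own, not the actual course code):

```python
import sys
sys.setrecursionlimit(5000)  # Python's default stack is too shallow for larger n

def divides(a, b):
    return b % a == 0

# Walk candidate divisors d upward, dividing out each prime factor found.
# n0 is the original number; n is what remains; count is the number of
# distinct prime divisors found so far. Because smaller factors are divided
# out first, any d that divides the remaining n is prime. Every branch
# either fails fast or makes a tail call, so the search is tail recursive.
def korselt(n0, n, d, count):
    if n == 1:
        return count > 1                   # composite, with multiple prime divisors
    if divides(d, n):
        if divides(d, n // d):
            return False                   # a prime divisor repeats
        if not divides(d - 1, n0 - 1):
            return False                   # p-1 must divide n-1
        return korselt(n0, n // d, d + 1, count + 1)
    return korselt(n0, n, d + 1, count)

def is_carmichael(n):
    return n > 1 and korselt(n, n, 2, 0)
```

Running `[n for n in range(2, 3000) if is_carmichael(n)]` finds 561, 1105, 1729, 2465, and 2821, the first five Carmichael numbers.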

As I tested the code, it occurred to me that my students have a chance to one-up standard Python. Its rather short standard stack depth prevented my program from reaching even n=1000. When I set sys.setrecursionlimit(5000), my program found the first five Carmichael numbers: 561, 1105, 1729, 2465, and 2821. Next come 6601 and 8911; I'll need a lot more stack frames to get there.

All those stack frames are unnecessary, though. My main "looping" function is beautifully tail recursive: two failure cases, the final answer case checking the number of prime divisors, and two tail-recursive calls that move either to the next prime as potential factor or to the next quotient when we find one. If my students implement proper tail calls -- an optimization that is optional in the course but encouraged by their instructor with gusto -- then their compiler will enable us to solve for values up to the maximum integer in the language, 2^31 - 1. We'll be limited only by the speed of the target language's VM, and the speed of the target code the compiler generates. I'm pretty excited.

Now I have to resist the urge to regale my students with this entire story, and with more details about how I approach programming in a language like Graphene. I love to talk shop with students about design and programming, but our time is limited... My students are already plenty busy writing the compiler that I need to run my program!

This lark resulted in an hour or so writing code in Python, a few more minutes porting to Graphene, and an enjoyable hour writing this blog post. As the song says, it was a good day.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

July 31, 2022 8:54 AM

Caring about something whittles the world down to a more manageable size

In The Orchid Thief, there is a passage where author Susan Orlean describes a drive across south Florida on her way to a state preserve, where she'll be meeting an orchid hunter. She ends the passage this way:

The land was marble-smooth and it rolled without a pucker to the horizon. My eyes grazed across the green band of ground and the blue bowl of sky and then lingered on a dead tire, a bird in flight, an old fence, a rusted barrel. Hardly any cars came toward me, and I saw no one in the rearview mirror the entire time. I passed so many vacant acres and looked past them to so many more vacant acres and looked ahead and behind at the empty road and up at the empty sky; the sheer bigness of the world made me feel lonely to the bone. The world is so huge that people are always getting lost in it. There are too many ideas and things and people, too many directions to go. I was starting to believe that the reason it matters to care passionately about something is that it whittles the world down to a more manageable size. It makes the world seem not huge and empty but full of possibility. If I had been an orchid hunter I wouldn't have seen this space as sad-making and vacant--I think I would have seen it as acres of opportunity where the things I loved were waiting to be found.

John Laroche, the orchid hunter at the center of The Orchid Thief, comes off as obsessive, but I think many of us know that condition. We have found an idea or a question or a problem that grabs our attention, and we work on it for years. Sometimes, we'll follow a lead so far down a tunnel that it feels a lot like the swamps Laroche braves in search of the ghost orchid.

Even a field like computer science is big enough that it can feel imposing if a person doesn't have a specific something to focus their attention and energy on. That something doesn't have to be forever... Just as Laroche had cycled through a half-dozen obsessions before turning his energy to orchids, a computer scientist can work deeply in an area for a while and then move on to something else. Sometimes, there is a natural evolution in the problems one focuses on, while other times people choose to move into a completely different sub-area. I see a lot of people moving into machine learning these days, exploring how it can change the sub-field they used to focus exclusively on.

As a prof, I am fortunate to be able to work with young adults as they take their first steps in computer science. I get to watch many of them find a question they want to answer, a problem they want to work on for a few years, or an area they want to explore in depth until they master it. It's also sad, in a way, to work with a student who never quite finds something that sparks their imagination. A career in software, or anything, really, can look as huge and empty as Orlean's drive through south Florida if someone doesn't care deeply about something. When they do, the world seems not huge and empty, but full of possibility.

I'm about halfway through The Orchid Thief and am quite enjoying it.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

July 28, 2022 11:48 AM

A "Teaching with Patterns" Workshop

the logo of PLoP 2022, a red block with block letters, PLoP in white and 2022 in black

Yesterday afternoon I attended a Teaching with Patterns workshop that is part of PLoP 2022.

When the pandemic hit in 2020, PLoP went virtual, as did so many conferences. It was virtual last year and will be again this year as well. On the spur of the moment, I attended a couple of the 2020 keynote talks online. Without a real commitment to the conference, though, I let my day-to-day teaching and admin keep me from really participating. (You may recall that I did it right for StrangeLoop last year, as evidenced by several blog posts ending here.)

When I peeked in on PLoP virtually a couple of years ago, it had already been too many years away from my last PLoP, which itself came after too many years away! Which is to say: I miss PLoP and my friends and colleagues in the software patterns world.

PLoP 2022 itself is in October. Teaching with Patterns was a part of PLoPourri, a series of events scheduled throughout the year in advance of the conference proper. It was organized by Michael Weiss of Carleton University in Ottawa, a researcher on open source software.

Upon joining the Zoom session for the workshop, I was pleased to see Michael, a colleague I first met at PLoP back in the 1990s, and Pavel Hruby, whom I met at one of the first ChiliPLoPs many years ago. There were a handful of other folks participating, too, bringing experience from a variety of domains. One, Curt McNamara, has significant experience using patterns in his course on sustainable graphic design.

The workshop was a relaxed affair. First, participants shared their experiences teaching with patterns. Then we all discussed our various approaches and shared lessons we have learned. Michael gave a few opening remarks and then asked four questions to prime the conversation:

  • How do you use patterns when you teach a topic?
  • What format do you use to teach with patterns?
  • Do you teach with your own or existing patterns?
  • What works best/worst?
He gave us a few minutes to think about our answers and to post anything we liked to the workshop's whiteboard.

My answers may sound familiar to anyone who has read my blog long enough to hear me talk about the courses I teach and, going way back, to my work at PLoP and ChiliPLoP.

I use patterns to teach design techniques and trade-offs. It's easy to lecture students on particular design solutions, but that's not very effective at helping students learn how to do design, to make choices based on the forces at play in any given problem. Patterns help me to think about the context in which a technique applies, the tradeoffs it helps us to make, and the resulting state of the solution.

I rarely have students read patterns themselves, at least in one of the stylized pattern forms. Instead, I integrate the patterns into the stories I tell, into the conversations I have with my students about problems and solutions, and finally into the session notes I write up for them and me.

These days, I mostly teach courses on programming languages and compilers. I have written my own patterns for the programming languages course and use them to structure two units, on structurally recursive programming and closures for managing state. I've long dreamed of writing more patterns for the compiler course, which seems primed for the approach: So many of the structures we build when writing a compiler are the result of decades of experience, with tradeoffs that are often left implicit in textbooks and in the heads of experts.

I have used my own patterns as well as patterns from the literature when teaching other courses as well, back in the days when I taught three courses a semester. When I taught knowledge-based systems every year, I used patterns that I wrote to document the "generic task" approach to KBS in which my work lived. In our introductory OOP courses, I tended to use patterns from the literature both as a way to teach design techniques and trade-offs and as way to prepare students to graduate into a world where OO design patterns were part of the working vocabulary.

I like how patterns help me discuss code with students and how they help me evaluate solutions -- and communicate how I evaluate solutions.

That is more information than I shared yesterday... This was only a one and a half-hour workshop, and all of the participants had valuable experience to share.

I didn't take notes on what others shared or on the conversations that followed, but I did note a couple of things in general. Several people use a set of patterns or even a small pattern language to structure their entire course. I have done this at the unit level of a course, never at the scale of the course. My compiler course would be a great candidate here, but doing so would require that I write a lot of new material to augment and replace some of what I have now.

Likewise, a couple of participants have experimented with asking students to write patterns themselves. My teaching context is different than theirs, so I'm not sure this is something that can work for me in class. I've tried it a time or two with master's-level students, though, and it was educational for both the student and me.

I did grab all the links posted in the chat room so that I can follow up on them to read more later.

I also learned one new tech thing at the workshop: Padlet, a real-time sharing platform that runs in the browser. Michael used a padlet as the workshop's whiteboard, in lieu of Zoom's built-in tool (which I've still never used). The padlet made it quite straightforward to make and post cards containing text and other media. I may look into it as a classroom tool in the future.

Padlet also made it pretty easy to export the whiteboard in several formats. I grabbed a PDF copy of the workshop whiteboard for future reference. I considered linking to that PDF here, but I didn't get permission from the group first. So you'll just have to trust me.

Attending this workshop reminded me that I do not get to teach enough, at least not formal classroom teaching, and I miss it! As I told Michael in a post-workshop thank-you e-mail:

That was great fun, Michael. Thanks for organizing! I learned a few things, but mostly it is energizing simply to hear others talk about what they are doing.

"Energizing" really is a good word for what I experienced. I'm hyped to start working in more detail on my fall course, which starts in less than four weeks.

I'm also energized to look into the remaining PLoPourri events between now and the conference. (I really wish now that I had signed up for The Future of Education workshop in May, which was on "patterns for reshaping learning and the campus".) PLoP itself is on a Monday this year, in late October, so there is a good chance I can make it as well.

I told Michael that Teaching with Patterns would be an excellent regular event at PLoP. He seemed to like that thought. There is a lot of teaching experience out there to be shared -- and a lot of energy to be shared as well.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

July 03, 2022 9:00 AM

The Difference Between Medicine and Being a Doctor Is Like ...

Morgan Housel's recent piece on experts trying too hard includes a sentence that made me think of what I teach:

A doctor once told me the biggest thing they don't teach in medical school is the difference between medicine and being a doctor — medicine is a biological science, while being a doctor is often a social skill of managing expectations, understanding the insurance system, communicating effectively, and so on.

Most of the grads from our CS program go on to be software developers or to work in software-adjacent jobs like database administrator or web developer. Most of the rest work in system administration or networks. Few go on to be academic computer scientists. As Housel's doctor knows about medicine, there is a difference between academic computer science and being a software developer.

The good news is, I think the CS profs at many schools are aware of this, and the schools have developed computer science programs that at least nod at the difference in their coursework. The CS program at my school has a course on software engineering that is more practical than theoretical, and another short course that teaches practical tools such as version control, automated testing, and build tools, and skills such as writing documentation. All of our CS majors complete a large project course in which students work in teams to create a piece of software or a secure system, and the practical task of working as a team to deliver a piece of working software is a focus of the course. On top of those courses, I think most of our profs try to keep their courses real for students they know will want to apply what they learn in Algorithms, say, or Artificial Intelligence in their jobs as developers.

Even so, there is always a tension in classes between building foundational knowledge and building practical skills. I encounter this tension in both Programming Languages and Translation of Programming Languages. There are a lot of cool things we could learn about type theory, some of which might turn out to be quite useful in a forty-year career as a developer. But any time we devote to going deeper on type theory is time we can't devote to the concrete language skills of a software developer, such as learning and creating APIs or the usability of programming languages.

So, we CS profs have to make design trade-offs in our courses as we try to balance the forces of students learning computer science and students becoming software developers. Fortunately, we learn a little bit about recognizing, assessing, and making trade-offs in our work both as computer scientists and as programmers. That doesn't make it easy, but at least we have some experience for thinking about the challenge.

The sentence quoted above reminds me that other disciplines face a similar challenge. Knowing computer science is different from being a software developer, or a sysadmin. Knowing medicine is different from being a doctor. And, as Housel explains so well in his own work, knowing finance is different from being an investor, which is usually more about psychology and self-awareness than it is about net present value or alpha ratios. (The current stock market is a rather harsh reminder of that.)

Thanks to Housel for the prompt. The main theme of his piece — that experts make mistakes that novices can't make, which leads to occasional unexpected outcomes — is the topic for another post.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

May 24, 2022 4:09 PM

Observing Students Learn to REPL in Dr. Racket

I recently ran across an old post by @joepolitz, Beginner REPL Stumbles, that records some of the ways he has observed students struggle as they learn to use IDEs with REPLs and files. As I mentioned on Twitter, it struck a chord with me even though I don't teach beginning programmers much these days. My tweets led to a short conversation that I'd like to record, and slightly expand on, here.

I've noticed the first of the struggles in my junior-level Programming Languages class: students not knowing, or not taking seriously enough, the value of ctrl-up to replay and edit a previous interaction. If students cannot work effectively in the REPL, they will resort to typing all of their code in the definitions pane and repeatedly re-running it. This programming style misses out on the immense value of the REPL as a place to evolve code rapidly, with continual feedback, on the way to becoming a program.

As recommended in the post, I now demonstrate ctrl-up early in the course and note whenever I use it in a demo. If a student finds that their keyboard maps ctrl-up to another behavior, I show them how to define a shortcut in Preferences. This simple affordance can have an inordinate effect on the student's programming experience.

The other observations that Politz describes may be true for my students, too, and I just don't see them. My students are juniors and seniors who already have a year of experience in Python and perhaps a semester using Java. We aren't in the lab together regularly. I usually hear about their struggles with content when they ask questions, and when they do, they don't usually ask about process or tools. Sometimes, they will demo some interaction for me and I'll get to see an unexpected behavior in usage and help them, but that's rare.

(I do recall a student coming into my office once a few years ago and opening up a source file -- in Word. They said they had never gotten comfortable with Dr. Racket and that Word helped them make progress typing and editing code faster. We talked about ways to learn and practice Dr. Racket, but I don't think they ever switched.)

Having read about some of the usage patterns that Politz reports, I think I need to find ways to detect misunderstandings and difficulties with tools sooner. The REPL, and the ability to evolve code from interactions in the REPL into programs in the definitions pane, are powerful tools -- if one groks them and learns to use them effectively. As Politz notes, direct instruction is a necessary antidote to these struggles. Direct instruction up front may also help my students get off to a better start with the tools.

There is so much room for improvement hidden inside assumptions that are baked into our current tools and languages. Observing learners can expose things we never think about, if we pay attention. I wonder what else I have been missing...

Fortunately, both Joe Politz and Shriram Krishnamurthi encountered my tweets. Krishnamurthi provided helpful context, noting that the PLT Scheme team noticed many of these issues in the early days of Dr. Scheme. They noticed others while running teacher training sessions for @Bootstrapworld. In both cases, instructors were in the lab with learners while they used the tools. In the crush to fix more pressing problems, the interaction issues went unaddressed. In my experience, they are also subtle and hard to appreciate fully without repeated exposure.

Politz provided a link to a workshop paper on Repartee, a tool that explicitly integrates interactive programming and whole-program editing. Very cool. As Krishnamurthi noted to close the conversation, Repartee demonstrates that we may be able to do better than simply teach students to use a REPL more effectively. Perhaps we can make better tools.

I've been learning a lot about CS education research the last few years. It is so much more than the sort of surface-level observations and uncontrolled experiments I saw, and made, at the beginning of my career. This kind of research demands a more serious commitment to science but offers the potential of real improvement in return for the effort. I'm glad to know CS ed researchers are making that commitment. I hope to help where I can.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

April 16, 2022 2:32 PM

More Fun, Better Code: A Bug Fix for my Pair-as-Set Implementation

In my previous post, I wrote joyously of a fun bit of programming: implementing ordered pairs using sets.

Alas, there was a bug in my solution. Thanks to Carl Friedrich Bolz-Tereick for finding it so quickly:

Heh, this is fun, great post! I wonder what happens though if a = b? Then the set is {{a}}. first should still work, but would second fail, because the set difference returns the empty set?

Carl Friedrich had found a hole in my small set of tests, which sufficed for my other implementations because the data structures I used separate cells for the first and second parts of the pair. A set will do that only if the first and second parts are different!

Obviously, enforcing a != b is unacceptable. My first idea was to guard second()'s behavior with code:

    if my formula finds a result
       then return that result
       else return (first p)

This feels like a programming hack. Furthermore, it results in an impure implementation: it uses a boolean value and an if expression. But it does seem to work. That would have to be good enough unless I could find a better solution.

Perhaps I could use a different representation of the pair. Helpfully, Carl Friedrich followed up with pointers to several blog posts by Mark Dominus from last November that looked at the set encoding of ordered pairs in some depth. One of those posts taught me about another possibility: Wiener pairs. The idea is this:

    (a,b) = { {{a},∅}, {{b}} }

Dominus shows how Wiener pairs solve the a == b edge case in Kuratowski pairs, which makes them a viable alternative.

Would I ever have stumbled upon this representation, as I did onto the Kuratowski pairs? I don't think so. The representation is more complex, with higher-order sets. Even worse for my implementation, the formulas for first() and second() are much more complex. That makes it a lot less attractive to me, even if I never want to show this code to my students. I myself like to have a solid feel for the code I write, and this is still at the fringe of my understanding.

Fortunately, as I read more of Dominus's posts, I found there might be a way to save my Kuratowski-style solution. It turns out that the if expression I wrote above parallels the set logic used to implement a second() accessor for Kuratowski pairs: a choice between the set that works for a != b pairs and a fallback to a one-set solution.

From this Dominus post, we see the correct set expression for second() is:

[image: the correct set expression for the second() function]

... which can be simplified to:

[image: an expression for the second() function simplified to a logical statement]

The latter expression is useful for reasoning about second(), but it doesn't help me implement the function using set operations. I finally figured out what the former equation was saying: if (∪ p) is the same as (∩ p), then the answer comes from (∩ p); otherwise, it comes from their difference.

I realized then that I could not write this function purely in terms of set operations. The computation requires the logic used to make this choice. I don't know where the boundary lies between pure set theory and the logic in the set comprehension, but a choice based on a set-empty? test is essential.

In any case, I think I can implement my understanding of the set expression for second() as follows. If we define union-minus-intersection as:

    (set-minus (apply set-union p)
               (apply set-intersect p))

then:

    (second p) = (if (set-empty? union-minus-intersection)
                     (set-elem (apply set-intersect p))
                     (set-elem union-minus-intersection))

The then clause is the same as the body of first(), which must be true: if the union of the sets is the same as their intersection, then the answer comes from the intersection, just as first()'s answer does.

It turns out that this solution essentially implements my first code idea above: if my formula from the previous blog entry finds a result, then return that result. Otherwise, return first(p). The circle closes.

Success! Or, I should say: Success(?). After having a bug in my original solution, I need to stay humble. But I think this does it. It passes all of my original tests as well as tests for a == b, which is the main edge case in all the discussions I have now read about set implementations of pairs. Here is a link to the final code file, if you'd like to check it out. I include the two simple test scenarios, for both a != b and a == b, as Rackunit tests.
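For readers without access to my course's set library, the repaired solution can be sketched in Python using frozensets. This is my own translation, not the code in the linked file, and the helper name set_elem is mine:

```python
# Kuratowski pair with the a == b fix: fall back to the
# intersection when union - intersection is empty.
def make_pair(a, b):
    return frozenset([frozenset([a]), frozenset([a, b])])

def set_elem(s):
    (x,) = s          # sole element of a singleton set
    return x

def first(p):
    # first is the lone member of the intersection of p's members
    return set_elem(frozenset.intersection(*p))

def second(p):
    # (union p) - (intersection p), falling back to first(p)
    # when the difference is empty, i.e., when a == b
    diff = frozenset.union(*p) - frozenset.intersection(*p)
    return set_elem(diff) if diff else first(p)

assert (first(make_pair(3, 4)), second(make_pair(3, 4))) == (3, 4)
assert (first(make_pair(5, 5)), second(make_pair(5, 5))) == (5, 5)
```

When a == b, make_pair collapses to the singleton {{a}}, so the union and the intersection coincide and the fallback fires -- exactly the choice the set expression above encodes.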

So, all in all, this was a very good week. I got to have some fun programming, twice. I learned some set theory, with help from a colleague on Twitter. I was also reacquainted with Mark Dominus's blog, the RSS feed for which I had somehow lost track of. I am glad to have it back in my newsreader.

This experience highlights one of the selfish reasons I like for students to ask questions in class. Sometimes, they lead to learning and enjoyment for me as well. (Thanks, Henry!) It also highlights one of the reasons I like Twitter. The friends we make there participate in our learning and enjoyment. (Thanks, Carl Friedrich!)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

April 13, 2022 2:27 PM

Programming Fun: Implementing Pairs Using Sets

Yesterday's session of my course was a quiz preceded by a fun little introduction to data abstraction. As part of that, I use a common exercise. First, I define a simple API for ordered pairs: (make-pair a b), (first p), and (second p). Then I ask students to brainstorm all the ways that they could implement the API in Racket.

They usually have no trouble thinking of the data structures they've been using all semester. Pairs, sure. Lists, yes. Does Racket have a hash type? Yes. I remind my students about vectors, which we have not used much this semester. Most of them haven't programmed in a language with records yet, so I tell them about structs in C and show them Racket's struct type. This example has the added benefit of seeing that Racket generates constructor and accessor functions that do the work for us.

The big payoff, of course, comes when I ask them about using a Racket function -- the data type we have used most this semester -- to implement a pair. I demonstrate three possibilities: a selector function (which also uses booleans), message passing (which also uses symbols), and pure functions. Most students look on these solutions, especially the one using pure functions, with amazement. I could see a couple of them actively puzzling through the code. That is one measure of success for the session.
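The pure-function idea translates to any language with first-class functions. Here is a sketch in Python rather than the course's Racket; the translation and names are mine:

```python
# A pair implemented as a pure function: the pair *is* a closure
# that applies whatever selector it receives to the two stored values.
def make_pair(a, b):
    return lambda selector: selector(a, b)

def first(p):
    return p(lambda a, b: a)

def second(p):
    return p(lambda a, b: b)

p = make_pair(3, 4)
assert first(p) == 3 and second(p) == 4
```

No data structure appears anywhere: the values live only in the closure's environment, which is the point that tends to amaze students.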

This year, one student asked if we could use sets to implement a pair. Yes, of course, but I had never tried that... Challenge accepted!

While the students took their quiz, I set myself a problem.

The first question is how to represent the pair. (make-pair a b) could return {a, b}, but with no order we can't know which is first and which is second. My students and I had just looked at a selector function, (if arg a b), which can distinguish between the first item and the second. Maybe my pair-as-set could know which item is first, and we could figure out the second is the other.

So my next idea was (make-pair a b) = {{a, b}, a}. Now the pair knows both items, as well as which comes first, but I have a type problem. I can't apply set operations to all members of the set and will have to test the type of every item I pull from the set. This led me to try (make-pair a b) = {{a}, {a, b}}. What now?

My first attempt at writing (first p) and (second p) started to blow up into a lot of code. Our set implementation provides a way to iterate over the members of a set using accessors named set-elem and set-rest. In fine imperative fashion, I used them to start whacking out a solution. But the growing complexity of the code made clear to me that I was programming around sets, but not with sets.

When teaching functional programming style this semester, I have been trying a new tactic. Whenever we face a problem, I ask the students, "What function can help us here?" I decided to practice what I was preaching.

Given p = {{a}, {a, b}}, what function can help me compute the first member of the pair, a? Intersection! I just need to retrieve the singleton member of ∩ p:

    (first p) = (set-elem (apply set-intersect p))

What function can help me compute the second member of the pair, b? This is a bit trickier... I can use set subtraction, {a, b} - {a}, but I don't know which element in my set is {a, b} and which is {a}. Serendipitously, I just solved the latter subproblem with intersection.

Which function can help me compute {a, b} from p? Union! Now I have a clear path forward: (∪ p) – (∩ p):

    (second p) = (set-elem
                   (set-minus (apply set-union p)
                              (apply set-intersect p)))

I implemented these three functions, ran the tests I'd been using for my other implementations... and they passed. I'm not a set theorist, so I was not prepared to prove my solution correct, but the tests going all green gave me confidence in my new implementation.
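For readers who want to play along without my course's set library, here is the same idea sketched in Python with frozensets; set_elem and the other names are my own:

```python
# Kuratowski encoding: (a, b) = {{a}, {a, b}}.
def make_pair(a, b):
    return frozenset([frozenset([a]), frozenset([a, b])])

def set_elem(s):
    (x,) = s          # sole element of a singleton set
    return x

def first(p):
    # a is the lone member of the intersection of p's members
    return set_elem(frozenset.intersection(*p))

def second(p):
    # b is the lone member of (union p) - (intersection p)
    return set_elem(frozenset.union(*p) - frozenset.intersection(*p))

p = make_pair(3, 4)
assert first(p) == 3 and second(p) == 4
# Note: second(make_pair(5, 5)) raises here, since the difference
# is empty -- the a == b bug discussed in the follow-up post.
```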

Last night, I glanced at the web to see what other programmers had done for this problem. I didn't run across any code, but I did find a question and answer on the mathematics StackExchange that discusses the set theory behind the problem. The answer refers to something called "the Kuratowski definition", which resembles my solution. Actually, I should say that my solution resembles this definition, which is an established part of set theory. From the StackExchange page, I learned that there are other ways to express a pair as a set, though the alternatives look much more complex. I didn't know the set theory but stumbled onto something that works.

My solution is short and elegant. Admittedly, I stumbled into it, but at least I was using the tools and thinking patterns that I've been encouraging my students to use.

I'll admit that I am a little proud of myself. Please indulge me! Department heads don't get to solve interesting problems like this during most of the workday. Besides, in administration, "elegant" and "clever" solutions usually backfire.

I'm guessing that most of my students will be unimpressed, though they can enjoy my pleasure vicariously. Perhaps the ones who were actively puzzling through the pure-function code will appreciate this solution, too. And I hope that all of my students can at least see that the tools and skills they are learning will serve them well when they run into a problem that looks daunting.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

March 16, 2022 2:52 PM

A Cool Project For Manipulating Images, Modulo Wordle

Wordle has been the popular game du jour for a while now. Whenever this sort of thing happens, CS profs think, "How can I turn this into an assignment?" I've seen a lot of discussion of ways to create a Wordle knock-off assignment for CS 1. None of this conversation has interested me much. First, I rarely teach CS 1 these days, and a game such as Wordle doesn't fit into the courses I do teach right now. Second, there's a certain element of Yogi Berra wisdom driving me away from Wordle: "It's so popular; no one goes there any more."

Most important, I had my students implementing Mastermind in my OO course back in the 2000-2005 era. I've already had most of the fun I can have with this game as an assignment. And it was a popular one with students, though challenging. The model of the game itself presents challenges for representation and algorithm, and the UI has the sort of graphic panache that tends to engage students. I remember receiving email from a student who had transferred to another university, asking me if I would still help her debug and improve her code for the assignment; she wanted to show it to her family. (Of course I helped!)

However I did see something recently that made me think of a cool assignment: 5x6 Art, a Twitter account that converts paintings and other images into minimalist 5 block-by-6 block abstract grids. The connection to Wordle is in the grid, but the color palette is much richer. Like any good CS prof, I immediately asked myself, "How can I turn this into an assignment?"

a screen capture of 5x6art from Twitter

I taught our intro course using the media computation approach popularized by Mark Guzdial back in 2006. In that course, my students processed images such as the artwork displayed above. They would have enjoyed this challenge! There are so many cool ways to think about creating a 5x6 abstraction of an input image. We could define a fixed palette of n colors, then map the corresponding region of the image onto a single block. But how to choose the color?

We could compute the average pixel value of the range and then choose the color in the palette closest to that value. Or we could create neighborhoods of different sizes around all of the palette colors so that we favor some colors for the grid over others. What if we simply compute the average pixel for the region and use that as the grid color? That would give us a much larger but much less distinct set of possible colors. I suspect that this would produce less striking outputs, but I'd really like to try the experiment and see the grids it produces.
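Those two strategies are easy to sketch in Python over a region given as a list of RGB pixels; all of the names and the sample palette below are made up for illustration:

```python
# Two ways to pick a block's color for the grid.
def average_pixel(region):
    # componentwise mean of a list of (r, g, b) tuples
    n = len(region)
    return tuple(sum(p[i] for p in region) // n for i in range(3))

def nearest_in_palette(color, palette):
    # snap a color to the closest palette entry,
    # by squared Euclidean distance in RGB space
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(color, c))
    return min(palette, key=dist)

palette = [(0, 0, 0), (255, 255, 255), (200, 30, 30), (30, 30, 200)]
region = [(210, 40, 20), (190, 25, 35), (205, 35, 30)]

avg = average_pixel(region)               # use the average directly...
block = nearest_in_palette(avg, palette)  # ...or snap to a fixed palette
assert block == (200, 30, 30)
```

Swapping the distance function, or weighting the palette entries with different-sized neighborhoods, gives the other variations described above.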

What if we allowed ourselves a bigger grid, for more granularity in our output images? There are probably many other dimensions we could play with. The more artistically inclined among you can surely think of interesting twists I haven't found yet.

That's some media computation goodness. I may assign myself to teach intro again sometime soon just so that I can develop and use this assignment. Or stop doing other work for a day or two and try it out on my own right now.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 03, 2022 2:22 PM

Knowing Context Gives You Power, Both To Choose And To Make

We are at the point in my programming languages course where my students have learned a little Racket, a little functional programming, and a little recursive programming over inductive datatypes. Even though I've been able to connect many of the ideas we've studied to programming tasks out in the world that they care about themselves, a couple of students have asked, "Why are we doing this again?"

This is a natural question, and one I'm asked every time I teach this course. My students think that they will be heading out into the world to build software in Java or Python or C, and the ideas we've seen thus far seem pretty far removed from the world they think they will live in.

These paragraphs from near the end of Chelsea Troy's 3-part essay on API design do a nice job of capturing one part of the answer I give my students:

This is just one example to make a broader point: it is worthwhile for us as technologists to cultivate knowledge of the context surrounding our tools so we can make informed decisions about when and how to use them. In this case, we've managed to break down some web request protocols and build their pieces back up into a hybrid version that suits our needs.

When we understand where technology comes from, we can more effectively engage with its strengths, limitations, and use cases. We can also pick and choose the elements from that technology that we would like to carry into the solutions we build today.

The languages we use were designed and developed in a particular context. Knowing that context gives us multiple powers. One power is the ability to make informed decisions about the languages -- and language features -- we choose in any given situation. Another is the ability to make new languages, new features, and new combinations of features that solve the problem we face today in the way that works best in our current context.

Not knowing context limits us to our own experience. Troy does a wonderful job of demonstrating this using the history of web API practice. I hope my course can help students choose tools and write code more effectively when they encounter new languages and programming styles.

Computing changes. My students don't really know what world they will be working in in five years, or ten, or thirty. Context is a meta-tool that will serve them well.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

February 13, 2022 12:32 PM

A Morning with Billy Collins

It's been a while since I read a non-technical article and made as many notes as I did this morning on this Paris Review interview with Billy Collins. Collins was poet laureate of the U.S. in the early 2000s. I recall reading his collection, Sailing Alone Around the Room, at PLoP in 2002 or 2003. Walking the grounds at Allerton with a poem in your mind changes your eyes and ears. Had I been blogging by then, I probably would have commented on the experience, and maybe one or two of the poems, in a post.

As I read this interview, I encountered a dozen or so passages that made me think about things I do, things I've thought, and even things I've never thought. Here are a few.

I'd like to get something straightened out at the beginning: I write with a Uni-Ball Onyx Micropoint on nine-by-seven bound notebooks made by a Canadian company called Blueline. After I do a few drafts, I type up the poem on a Macintosh G3 and then send it out the door.

Uni-Ball Micropoint pens are my preferred writing implement as well, though I don't write enough on paper any more to make buying a particular pen much worth the effort. Unfortunately, just yesterday my last Uni-Ball Micro wrote its last line. Will I order more? It's a race between preference and sloth.

I type up most of the things I write these days on a 2015-era MacBook Pro, often connected to a Magic Keyboard. With the advent of the M1 MacBook Pros, I'm tempted to buy a new laptop, but this one serves me so well... I am nothing if not loyal.

The pen is an instrument of discovery rather than just a recording implement. If you write a letter of resignation or something with an agenda, you're simply using a pen to record what you have thought out. In a poem, the pen is more like a flashlight, a Geiger counter, or one of those metal detectors that people walk around beaches with. You're trying to discover something that you don't know exists, maybe something of value.

Programming may be like writing in many ways, but the search for something to say isn't usually one of them. Most of us sit down to write a program to do something, not to discover some unexpected outcome. However, while I may know what my program will do when I get done, I don't always know what that program will look like, or how it will accomplish its task. This state of uncertainty probably accounts for my preference in programming languages over the years. Smalltalk, Ruby, and Racket have always felt more like flashlights or Geiger counters than tape recorders. They help me find the program I need more readily than Java or C or Python.

I love William Matthews's idea--he says that revision is not cleaning up after the party; revision is the party!

Refactoring is not cleaning up after the party; refactoring is the party! Yes.

... nothing precedes a poem but silence, and nothing follows a poem but silence. A poem is an interruption of silence, whereas prose is a continuation of noise.

I don't know why this passage grabbed me. Perhaps it's just the imagery of the phrases "interruption of silence" and "continuation of noise". I won't be surprised if my subconscious connects this to programming somehow, but I ought to be suspicious of the imposition. Our brains love to make connections.

She's this girl in high school who broke my heart, and I'm hoping that she'll read my poems one day and feel bad about what she did.

This is the sort of sentence I'm a sucker for, but it has no real connection to my life. Though high school was a weird and wonderful time for me, as it was for so many, I don't think anything I've ever done since has been motivated in this way. Collins actually goes on to say the same thing about his own work. Readers are people with no vested interest. We have to engage them.

Another example of that is my interest in bridge columns. I don't play bridge. I have no idea how to play bridge, but I always read Alan Truscott's bridge column in the Times. I advise students to do the same unless, of course, they play bridge. You find language like, South won with dummy's ace, cashed the club ace and ruffed a diamond. There's always drama to it: Her thirteen imps failed by a trick. There's obviously lots at stake, but I have no idea what he's talking about. It's pure language. It's a jargon I'm exterior to, and I love reading it because I don't know what the context is, and I'm just enjoying the language and the drama, almost like when you hear two people arguing through a wall, and the wall is thick enough so you can't make out what they're saying, though you can follow the tone.

I feel seen. Back when we took the local daily paper, I always read the bridge column by Charles Goren, which ran on the page with the crossword, crypto cipher, and other puzzles. I've never played bridge; most of what I know about the game comes from reading Matthew Ginsberg's papers about building AI programs to bid and play. Like Collins, I think I was merely enjoying the sound of the language, a jargon that sounds serious and silly at the same time.

Yeats summarizes this whole thing in "Adam's Curse" when he writes: "A line will take us hours maybe, / Yet if it does not seem a moment's thought / Our stitching and unstitching has been naught."

I'm not a poet, and my unit of writing is rarely the line, but I know a feeling something like this in writing lecture notes for my students. Most of the worst writing consists of paragraphs and sections I have not spent enough time on. Most of the best sounds natural, a clean distillation of deep understanding. But those paragraphs and sections are the result of years of evolution. That's the time scale on which some of my courses grow, because no course ever gets my full attention in any semester.

When I finish a set of notes, I usually feel like the stitching and unstitching have not yet reached their desired end. Some of the text "seems a moment's thought", but much is still uneven or awkward. Whatever the state of the notes, though, I have to move on to the next task: grading a homework assignment, preparing the next class session, or -- worst of all -- performing the administrivia that props up the modern university. More evolution awaits.

~~~~

This was a good read for a Sunday morning on the exercise bike, well recommended. The line on revision alone was worth the time; I expect it will be a stock tool in my arsenal for years to come.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Software Development, Teaching and Learning

January 13, 2022 2:02 PM

A Quick Follow-Up on What Next

My recent post on what language or tool I should dive into next got some engagement on Twitter, with many helpful suggestions. Thank you all! So I figure I should post a quick update to report what I'm thinking at this point.

In that post, I mentioned JavaScript and Pharo by name, though I was open to other ideas. Many folks pointed out the practical value of JavaScript, especially in a context where many of my students know and use it. Others offered lots of good ideas in the Smalltalk vein, both Pharo and several lighter-weight Squeaks. A couple of folks recommended Glamorous Toolkit (GToolkit), from @feenkcom, which I had not heard of before.

I took to mind several of the suggestions that commenters made about how to think about making the decision. For example, there is more overhead to studying Pharo and GToolkit than JavaScript or one of the lighter-weight Squeaks. Choosing one of the latter would make it easier to tinker. I think some of these comments had students in mind, but they are true even for my own study during the academic semester. Once I get into a term (my course begins one week from today), my attention gets pulled in many directions for fifteen or sixteen weeks. Being able to quickly switch contexts when jumping into a coding session means that I can jump more often and more productively.

Also, as Glenn Vanderburg pointed out, JavaScript and Pharo aren't likely to teach me much new. I have a lot of background with Smalltalk and, in many ways, JavaScript is just another language. The main benefit of working with either would be practical, not educational.

GToolkit might teach me something, though. As I looked into GToolkit, it became more tempting. The code is Smalltalk, because it is implemented in Pharo. But the project has a more ambitious vision of software that is "moldable": easier to understand, easier to figure out. GToolkit builds on Smalltalk's image in the direction of a computational notebook, which is an idea I've long wanted to explore. (I feel a little guilty that I haven't looked more into the work that David Schmüdde has done developing a notebook in Clojure.) GToolkit sounds like a great way for me to open several doors at once and learn something new. To do it justice, though, I need more time and focus to get started.

So I have decided on a two-pronged approach. I will explore JavaScript during the spring semester. This will teach me more about a language and ecosystem that are central to many of my students' lives. There is little overhead to picking it up and playing with it, even during the busiest weeks of the term. I can have a little fun and maybe make some connections to my programming languages course along the way. Then for summer, I will turn my attention to GToolkit, and perhaps a bigger research agenda.

I started playing with JavaScript on Tuesday. Having just read a blog post on scripting to compute letter frequencies in Perl, I implemented some of the same ideas in JavaScript. For the most part, I worked just as my students do: relying on vague memories of syntax and semantics and, when that failed, searching about for examples online.

A couple of hours working like this refreshed my memory on the syntax I knew from before and introduced me to some features that were new to me. It took a few minutes to re-internalize the need for those pesky semicolons at the end of every line... The resulting code is not much more verbose than Perl. I drifted pretty naturally to using functional programming style, as you might imagine, and it felt reasonably comfortable. Pretty soon I was thinking more about the tradeoff between clarity and efficiency in my code than about syntax, which is a good sign. I did run into one of JavaScript's gotchas: I used for...in twice instead of for...of and was surprised by the resulting behavior. Like any programmer, I banged my head on the wall for a few minutes and then recovered. But I have to admit that I had fun. I like to program.

I'm not sure what I will write next, or when I will move into the browser and play with interface elements. Suggestions are welcome!

I am pretty sure, though, that I'll start writing unit tests soon. I used SUnit briefly and have a lot of experience with JUnit. Is JSUnit a thing?


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

January 06, 2022 2:47 PM

A Fresh Encounter with Hexapawn

When I was in high school, the sponsor of our Math Club, Mr. Harpring, liked to give books as prizes and honors for various achievements. One time, he gave me Women in Mathematics, by Lynn Osen. It introduced me to Émilie du Châtelet, Sophie Germain, Emmy Noether, and a number of other accomplished women in the field. I also learned about some cool math ideas.

the initial position of a game of Hexapawn

Another time, I received The Unexpected Hanging and Other Mathematical Diversions, a collection of Martin Gardner's columns from Scientific American. One of the chapters was about Hexapawn, a simple game played with chess pawns on a 3x3 board. The chapter described an analog computer that learned how to play a perfect game of Hexapawn. I was amazed.

I played a lot of chess in high school and was already interested in computer chess programs. Now I began to wonder what it would be like to write a program that could learn to play chess... I suspect that Gardner's chapter planted one of the seeds that grew into my study of computer science in college. (It took a couple of years, though. From the time I was eight years old, I had wanted to be an architect, and that's where my mind was focused.)

As I wrote those words, it occurred to me that I may have written about the Gardner book before. Indeed I have, in a 2013 post on building the Hexapawn machine. Some experiences stay with you.

They also intersect with the rest of the world. This week, I read Jeff Atwood's recent post about his project to bring the 1973 book BASIC Computer Games into the 21st century. This book contains the source code of BASIC programs for 101 simple games. The earliest editions of this book used a version of BASIC before it included the GOSUB command, so there are no subroutines in any of the programs! Atwood started the project as a way to bring the programs in this book to a new audience, using modern languages and idioms.

You may wonder why he and other programmers feel fondly enough about BASIC Computer Games to reimplement its programs in Java or Ruby. They feel about these books the way I felt about The Unexpected Hanging. Books were the Github of the day, only in analog form. Many people in the 1970s and 1980s got their start in computing by typing these programs, character for character, into their computers.

I was not one of those people. My only access to a computer was at the high school, where I took a BASIC programming class my junior year. I had never seen a book like BASIC Computer Games, so I wrote all my programs from scratch. As mentioned in an old OOPSLA post from 2005, the first program I wrote out of passion was a program to implement a ratings system for our chess club. Elo ratings were a great application for a math student and beginning programmer.
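The Elo update rule itself is simple enough to sketch in a few lines. Here is a generic Python version, with an illustrative K-factor of 32; this is a textbook formulation, not a reconstruction of the program I wrote for the chess club.

```python
def expected_score(rating_a, rating_b):
    """Probability that player A beats player B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a, rating_b, score_a, k=32):
    """Return new ratings after one game; score_a is 1, 0.5, or 0.

    The K-factor here is an illustrative choice; clubs tune it.
    """
    ea = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - ea)
    new_b = rating_b + k * ((1 - score_a) - (1 - ea))
    return new_a, new_b
```

A pleasing property for a beginning programmer to notice: rating points are conserved, since whatever one player gains, the other loses.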

Anyway, I went to the project's Github site to check out what was available and maybe play a game or two. And there it was: Hexapawn! Someone has already completed the port to Python, so I grabbed it and played a few games. The text interface is right out of 1973, as promised. But the program learns, also as promised, and eventually plays a perfect game. Playing it brings back memories of playing my matchbox computer from high school. I wonder now if I should write my own program that learns Hexapawn faster, hook it up with the program from the book, and let them duke it out.
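The learning scheme behind the matchbox machine is also easy to sketch. The toy Python below captures only the mechanism Gardner described -- a pool of candidate moves for each position, with the last move of a lost game discarded -- not the rules of Hexapawn itself. The state and move representations are invented for illustration; this is not the code from the BASIC Computer Games port.

```python
import random

class MatchboxPlayer:
    """A minimal matchbox-style learner: punish the move that lost."""

    def __init__(self):
        self.boxes = {}    # position -> list of still-trusted moves
        self.history = []  # (position, move) pairs from the current game

    def choose(self, position, legal_moves):
        # Each position gets a "matchbox" seeded with all legal moves.
        pool = self.boxes.setdefault(position, list(legal_moves))
        if not pool:
            # Every move here has been punished; reseed so play continues.
            pool.extend(legal_moves)
        move = random.choice(pool)
        self.history.append((position, move))
        return move

    def end_game(self, won):
        if not won and self.history:
            # Discard the last move made: it led directly to the loss.
            position, move = self.history[-1]
            if move in self.boxes[position]:
                self.boxes[position].remove(move)
        self.history = []
```

After enough losses, every losing move has been discarded and only winning lines remain, which is how the original machine came to play a perfect game.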

Atwood's post brought to mind pleasant memories at a time when pleasant memories are especially welcome. So many experiences create who we are today, yet some seem to have made an outsized contribution. Learning BASIC and reading Martin Gardner's articles are two of those for me.

Reading that blog post and thinking about Hexapawn also reminded me of Mr. Harpring and the effect he had on me as a student of math and as a person. The effects of a teacher in high school or grade school can be subtle and easy to lose track of over time. But they can also be real and deep, and easy not to appreciate fully when we are living them. I wish I could thank Mr. Harpring again for the books he gave me, and for the gift of seeing a teacher love math.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Teaching and Learning

January 04, 2022 2:54 PM

Which Language Next?

I've never been one to write year-end retrospectives on my blog, or prospective posts about my plans for the new year. That won't change this year, this post notwithstanding.

I will say that 2021 was a weird year for me, as it was for many people. One positive was purchasing a 29" ultra-wide monitor for work at home, seen in this post from my Strange Loop series. Programming at home has been more fun since the purchase, as have been lecture prep, data-focused online meetings, and just about everything. The only downside is that it's in my basement office, which hides me away. When I want to work upstairs to be with family, it's back to the 15" laptop screen. First-world problems.

Looking forward, I'm feeling a little itchy. I'll be teaching programming languages again this spring and plan to inject some new ideas, but the real itch is: I am looking for a new project to work on, and a new language to study. This doesn't have to be a new language, just one that I haven't gone deep on before. I have considered a few, including Swift, but right now I am thinking of Pharo and JavaScript.

Thinking about mastering JavaScript in 2022 feels like going backward. It's old, as programming languages go, and has been a dominant force in the computing world for well over a decade. But it's also the most common language many of my students know that I have never gone deep on. There is great value in studying languages for their novel ideas and academic interest, but there is also value in having expertise with a language and toolchain that my students already care about. Besides, I've really enjoyed reading about work on JIT compilation of JavaScript over the years, and it's been a long time since I wrote code in a prototype-based OO language. Maybe it's time to build something useful in JavaScript.

Studying Pharo would be going backward for me in a different way. Smalltalk always beckons. Long-time followers of this blog have read many posts about my formative experiences with Smalltalk. But it has been twenty years since I lived in an image every day. Pharo is a modern Smalltalk with a big class library and a goal of being suitable for mission-critical systems. I don't need much of a tug; Smalltalk always beckons.

My current quandary brings to mind a dinner at a Dagstuhl seminar in the summer of 2019 (*). It's been a while now, so I hope my memory doesn't fail me too badly. Mark Guzdial was talking about a Pharo MOOC he had recently completed and how he was thinking of using the language to implement a piece of software for his new research group at Michigan, or perhaps a class he was teaching in the fall. If I recall correctly, he was torn between using Pharo and... JavaScript. He laid out some of the pros and cons of each, with JavaScript winning out on several pragmatic criteria, but his heart was clearly with Pharo. Shriram Krishnamurthi gently encouraged Mark to follow his heart: programming should be joyful, and we are free to build in languages that give us enjoyment. I seconded the (e)motion.

And here I sit mulling a similar choice.

Maybe I can make this a two-language year.

~~~~~

(*) Argh! I never properly blogged about this seminar, on the interplay between notional machines and programming language semantics, or the experience of visiting Europe for the first time. I did write one post that mentioned Dagstuhl, Paris, and Montenegro, with an expressed hope to write more. Anything I write now will be filtered through two and a half years of fuzzy memory, but it may be worth the time to get it down in writing before it's too late to remember anything useful. In the meantime: both the seminar and the vacation were wonderful! If you are ever invited to participate in a Dagstuhl seminar, consider accepting.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

December 31, 2021 12:13 PM

An Experience Rewriting a Piece of Code as a Teaching Tool

Over the last four or five offerings of my compiler course, I have been making progress in how I teach code generation, with teams becoming increasingly successful at producing a working code generator. In the 2019 offering, students asked a few new questions about some low-level mechanical issues in the run-time system. So I whipped up a simple one the night before class, both to refamiliarize myself with the issues and to serve as a potential example. It was not a great piece of software, but it was good enough for a quick in-class demo and as a seed for discussion.

Jump ahead to 2021. As I mentioned in my previous post, this fall's group had a lot more questions about assembly language, the run-time stack, activation records, and the like. When I pulled out my demo run-time system from last time, I found that it didn't help them as much as it had the previous group. The messiness of the code got in the way. Students couldn't see the bigger picture from the explanatory comments, and the code itself seemed opaque to them.

Working with a couple of students in particular, I began to refine the code. First, I commented the higher-level structure of the generator more clearly. I then used those comments to reorganize the code a bit, with the goal of improving the instructional presentation rather than the code's efficiency or compactness. I chose to leave some comments in rather than to factor out functions, because the students found the linear presentation easier to follow.

Finally, I refined some sections of the code and rewrote others entirely, to make them clearer. At this point, I did extract a helper function or two in an attempt not to obscure the story the program was telling with low-level details.

I worked through two iterations of this process: comment, reorganize, rewrite. At the end, I had a piece of code that is pretty good, and one that is on the students' path to a full code generator.

Of course, I could have designed my software up front and proceeded more carefully as I wrote this code. I mean, professionals and academics understand compiler construction pretty well. The result might well have been a better example of what the run-time generator should look like when students are done.

But I don't think that would have been as helpful to most members of my class. This process was much more like how my students program, and how many of us program, frankly, when we are first learning a new domain. Following this process, and working in direct response to questions students had as we discussed the code, gave me insights into some of the challenges they encounter in my course, including tracking register usage and seeing how the calling and return sequences interact. I teach these ideas in class, but my students were seeing that material as too abstract. They couldn't make the leap to code quite as easily as I had hoped; they were working concretely from the start.

In the end, I ended up with both a piece of code I like and a better handle on how my students approach the compiler project. In terms of outcomes assessment, this experience gives me some concrete ideas for improving the prerequisite courses students take before my course, such as computer organization. More immediately, it helps me improve the instruction in my own course. I have some ideas about how I can reorganize my code generator units and how I might simplify some of my pedagogical material. This may lead to a bigger redesign of the course in a coming semester.

I must admit: I had a lot of fun writing this code -- and revising and improving it! One bit of good news from the experience is that the advice I give in class is pretty good. If they follow my practical suggestions for writing their code, they can be successful. What needs improvement now is finding ways for students to have the relevant bits of advice at hand when they need them. Concrete advice that gets separated from concrete practice tends to get lost in the wind.

Finally, this experience reminded me first hand that the compiler project is indeed a challenge. It's fun, but it's a challenge, especially for undergrads attempting to write their first large piece of software as part of a team. There may be ways I can better help them to succeed.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

December 28, 2021 5:31 PM

Thinking Back on My Compiler Course This Fall

Well, fall semester really got away from me quickly. It seems not long ago that I wrote of launching the course with a renewed mindset of "compilers for the masses, not compilers for compiler people". I'm not sure how well that went this time, as many students came into the course with less understanding of the underlying machine model and assembly language than ever before. As a result, many of them ended up stressing over low-level implementation details while shoring up that knowledge rather than thinking about some of the higher-level software engineering ideas. I spent more time this semester working with more teams to help them understand parsing rules, semantic actions, and activation records than at any time I can remember.

I suspect that the students' programming maturity and state of knowledge at the start of the course are in large part a result of experiencing the previous two and a half semesters under the damper of the pandemic. Some classes were online, others were hybrid, and all were affected by mitigation efforts, doubt, and stress. Students and professors alike faced these effects, me included, and while everyone has been doing the best they could under the circumstances, sometimes the best we can do comes up a little short.

At the beginning of the course, I wrote about a particular uncertainty raised by the preceding pandemic semesters: how isolation and the interruption of regular life had reduced the chances for students to make friends in the major and to build up personal and professional connections with their classmates. I underestimated, I think, the effect that the previous year and a half would have on learning outcomes in our courses.

The effect on project teams themselves turned out to be a mixed bag. Three of the five teams worked pretty well together, even if one of the teammates was unable to contribute equally to the project. That's pretty typical. Two other teams encountered more serious difficulties working together effectively. Difficulties derailed one project that got off to an outstanding start, and the second ended up being a one-person show (a very impressive one-person show, in fact). In retrospect, many of these challenges can be traced back to problems some students had with content: they found themselves falling farther behind their teammates and responded by withdrawing from group work. The result was a bad experience for those still plugging along.

That's perhaps too many words about the difficulties. Several teams seemed to have pretty typical experiences working with one another, even though they didn't really know each other before working together.

The combination of some students struggling with course content and some struggling with collaboration led to mixed bag of results. Two teams produced working compilers that handled essentially all language features correctly, or nearly so. That's pretty typical for a five-team semester. One team produced an incomplete system, but one they could be proud of after working pretty hard the entire semester. That's typical, too.

Two teams produced systems without code generators beyond a rudimentary run-time system. That's a bit unusual. These teams were disappointed because they had set much higher goals for themselves. Many of these students were taking heavy course and research loads and, unfortunately, all that work eventually overwhelmed them. I think I felt as bad for them as they did, knowing what they might have accomplished with a more forgiving schedule. I do hope they found some value in the course and will be able to look back on the experience as worthwhile. They learned a lot about working on a big project, and perhaps about themselves.

What about me? A few weeks into the course, I declared that I was programming like a student again, trying to implement the full compiler project I set before my students. Like many of my students, I accomplished some of my goals and fell short when outside obstacles got in the way. On the front end, my scanner is in great shape, while my parser is correct but in need of some refactoring. At that point in the semester, I got busy both with department duties and with working one on one with the teams, and my productivity dropped off.

I did implement a solid run-time system, one I am rather happy with. My work on it came directly out of answering students' questions about code generation and working with them to investigate and debug their programs. I'll have more to say about my run-time system in the next post.

So, my latest compiler course is in the books. All in all, my students and I did about as well as we could under the circumstances. There is still great magic in watching a team's compiler generate an executable, then running that executable on an input that produces tens of thousands of activation records and executes several million lines of assembly. The best we can do is often quite good enough.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

November 28, 2021 10:35 AM

Students Need to Develop Microskills, Too

Yesterday someone retweeted this message from Hillel Wayne into my timeline:

A surprising and undervalued aspect of mastery comes from narrow-scope "microskills". In software, these would be things like

- Python list comprehensions
- Grep flags
- Debugging specific error messages
- Using a window manager

We can, and should, do targeted training.

Later in the short thread, Hillel says something that brought my students to mind:

In other words, the high level thinking is the difference between a task being possible and impossible, but the microskills are the difference between a task being 20 minutes and six hours. That's why it can take us days to find the right one-liner.

Some students love to program and look for every opportunity to write code. These students develop a satchelful of microskills, usually far beyond what we could ever teach them or expose them to in class. They tend to do fine on the programming assignments we give in class.

Other students complain that a programming assignment I gave them, intended as a one-hour practice session, took them eight or ten hours. My first reaction is usually to encourage them never to spin their wheels that long without asking me a question. I want them to develop grit, but usually that kind of wheel spinning is a sign that they may be missing a key idea or piece of information. I'd like to help them get moving sooner.

There is another reason, though, that many of these students spin their wheels. For a variety of reasons, they program only when they are required by a class. They are ambivalent about programming, either because they don't find it as interesting as their peers or because they don't enjoy the grind of working in the trenches yet. I say "yet" because one of the ways we all come to enjoy the grind is... to grind away! That's how we develop the microskills Hillel is talking about, which turn a six-hour task into a 20-minute task, or a disappointing one-hour experience with a homework problem into a five-minute joy.
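To make Wayne's first bullet concrete, here is a tiny Python illustration of one such microskill, with invented data: the same filter written the long way and then as a list comprehension. Neither version is hard, but knowing the second by reflex is exactly the kind of thing that shaves minutes off a task hundreds of times over.

```python
grades = [92, 67, 85, 74, 58]

# the long way: a loop and an accumulator
passing = []
for g in grades:
    if g >= 70:
        passing.append(g)

# the microskill: the same result in one line
passing_comp = [g for g in grades if g >= 70]
```

A student who has internalized the comprehension reads and writes it as a single thought, which frees attention for the part of the problem that is actually hard.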

Students who have yet to experience the power of microskills are prone to underestimate their value. That makes it less enjoyable to practice programming, which makes it hard to develop new skills. It's a self-reinforcing loop, and not the kind we want to encourage in our students.

Even after all these years in the classroom, I still struggle to find ways to help my students practice programming skills in a steady, reliable way. Designing engaging problems to work on helps. So does letting students choose the problems they work on, which works better in some courses than others. Ultimately, though, I think what works best is to develop as much of a personal relationship with each student as possible. This creates a condition in which they are more inclined to ask for help when they need it and to trust suggestions from me that sound a little crazy to the beginner's mind.

I am teaching my compiler development course this semester, which means that I am working with students near the end of their time with us, whose habits are deeply ingrained from previous courses. The ones who don't already possess a few of the microskills they need are struggling as the task of writing a parser or semantic checker or code generator stretches out like an endless desert before them.

Next semester, I teach my programming languages course and have the joy of introducing my students to Racket and functional programming. This essay has me already thinking of how I can help my students develop some of the microskills they will want and need in the course and beyond. Perhaps a more explicit focus early on the use of Dr. Racket to create, run, test, and debug code can set them up for a more enjoyable experience later in the course -- and help put them in a virtuous self-reinforcing loop, developing skills and using them to enjoy the next bit of learning they do.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

October 28, 2021 3:52 PM

A Few Quotes from "The Triggering Town"

A couple of weeks back, I saw an article in which Malcolm Gladwell noted that he did not know The Triggering Town, a slim book of essays by poet Richard Hugo. I was fortunate to hear about Hugo many years ago from software guru Richard Gabriel, who is also a working poet. It had been fifteen years or more since I'd read The Triggering Town, so I stopped into the library on my way home one day and picked it up. I enjoyed it the second time around as much as the first.

I frequently make notes of passages to save. Here are five from this reading.

Actually, the hard work you do on one poem is put in on all poems. The hard work on the first poem is responsible for the sudden ease of the second. If you just sit around waiting for the easy ones, nothing will come. Get to work.

That advice works for budding software developers, too.

Emotional honesty is a rare thing in the academic world or anywhere else for that matter, and nothing is more prized by good students.

Emotion plays a much smaller role in programming than in writing poetry. Teaching, though, is deeply personal, even in a technical discipline. All students value emotional honesty, and profs who struggle to be open usually struggle to make connections with their students.

Side note: Teachers, like policemen, firemen, and service personnel, should be able to retire after twenty years with full pension. Our risks may be different, but they are real. In twenty years most teachers have given their best.

This is a teacher speaking, so take the recommendation with caution. But more than twenty years into this game, I know exactly what Hugo means.

Whatever, by now, I was old enough to know explanations are usually wrong. We never quite understand and we can't quite explain.

Yet we keep trying. Humans are an optimistic animal, which is one of the reasons we find them so endearing.

... at least for me, what does turn me on lies in a region of myself that could not be changed by the nature of my employment. But it seems important (to me even gratifying) that the same region lies untouched and unchanged in a lot of people, and in my innocent way I wonder if it is reason for hope. Hope for what? I don't know. Maybe hope that humanity will always survive civilization.

This paragraph comes on the last page of the book and expresses one of the core tenets of Hugo's view of poetry and poets. He fought in World War 2 as a young man, then worked in a Boeing factory for 15-20 years, and then became an English professor at a university. No matter the day job, he was always a poet. I have never been a poet, but I know quite well the region of which he speaks.

Also: I love the sentence, "Maybe hope that humanity will always survive civilization."


Posted by Eugene Wallingford | Permalink | Categories: Personal, Software Development, Teaching and Learning

September 27, 2021 2:04 PM

Programming Like a Student Again

For the first time in many years, I got the urge this fall to implement the compiler project I set before my students.

I've written here about this course many times over the years. It serves students interested in programming languages and compilers as well as students looking for a big project course and students looking for a major elective. Students implement a compiler for a small language by hand in teams of 2-5, depending on the source language and the particular group of people in the course.

Having written small compilers like this many times, I don't always implement the entire project each offering. That would not be a wise use of my time most semesters. Instead, when something comes up in class, I will whip up a quick scanner or type checker or whatever so that we can explore an idea. In recent years, the bits I've written have tended to be on the backend, where I have more room to learn.

But this fall, I felt the tug to go all in.

I created a new source language for the class this summer, which I call Twine. Much of its concrete syntax was inspired by SISAL, a general-purpose, single-assignment functional language with implicit parallelism and efficient array handling. SISAL was designed in the mid-1980s to be a high-level language for large numerical programs, to be run on a variety of multiprocessors. With advances in multiprocessors and parallel processors, SISAL is well suited for modern computation. Of course, it contains many features beyond what we can implement in a one-semester compiler course where students implement all of their own machinery. Twine is essentially a subset of SISAL, with a few additions and modifications aimed at making the language more suitable for our undergraduate course.

the logo of the Twine programming language

(Whence the name "Twine"? The name of SISAL comes from the standard Unix word list. It was chosen because it contains the phrase "sal", which is an acronym for "single assignment language". The word "sisal" itself is the name of a flowering plant native to southern Mexico but widely cultivated around the world. Its leaves are crushed to extract a fiber that is used to create rope and twine. So, just as the sisal plant is used to create twine, the SISAL programming language was used to create the Twine programming language.)

With new surface elements, the idea of implementing a new front end appealed to me. Besides, the experience of implementing a complete system feels different than implementing a one-off component... That's one of the things we want our students to experience in our project courses! After eighteen months of weirdness and upheaval at school and in the world, I craved that sort of connection to some code. So here I am.

Knocking out a scanner in my free time over the last week and getting started on my parser has been fun. It has also reminded me how the choice of programming language affects how I think about the code I am writing.

I decided to implement in Python, the language most of my student teams are using this fall, so that I might have recent experience with specific issues they encounter. I'd forgotten just how list-y Python is. Whenever I look at Python code on the web, it seems that everything is a list or a dictionary. The path of least resistance flows that way... If I walk that path, I soon find myself with a list of lists of lists, and my brain is swimming in indices. Using dictionaries replaces integer indices with keys of other types, but the conceptual jumble remains.

It did not take me long to appreciate anew why I like to work with objects. They give me the linguistic layers I need to think about my code independent of language primitives. I know, I know, I can achieve the same thing with algebraic types and layers of function definitions. However, my mind seems to work on a wavelength where data encapsulation and abstract messages go together. Blame Smalltalk for ruining me, or enlightening me, depending on your stance.

Python adds a little extra friction around classes and objects that seems to interrupt my flow occasionally. For a few minutes this week, I felt myself missing Java and wondering if I ought to have chosen it for the project instead of Python. I used to program in Java every day, and this was the first time in a long while that I felt the pull back. After programming so much in Racket the last decade, though, the wordiness of Java keeps me away. Alas, Python is not the answer. Maybe I'm ready to go deep on a new language, but which one? OOP doesn't seem to be in vogue these days. Maybe I need to return to Ruby or Smalltalk.

For now I will live with OOP in Python and see whether its other charms can compensate. Living with Python's constraints shows up as a feature of another choice I made for this project: to let pycodestyle tell me how to format my code. This is an obstacle for any programmer who is as idiosyncratic as I am. After a few rounds of reformatting my code, though, I am finding surrender easier to accept. This has freed me to pay attention to more important matters, which is one of the key ideas behind coding and style standards in the first place. But I am a slow learner.

It's been fun so far. I look forward to running Twine programs translated by my compiler in a few weeks! As long as I've been programming, I have never gotten over the thrill of watching a compiler I've written -- or any big program I've written -- do its thing. Great joy.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 29, 2021 10:19 AM

Launching the Compiler Project with New Uncertainties

We will be forming project teams in my course this week, and students will begin work in earnest on Friday. Or so thinks the prof, who releases the first assignment on Thursday... I can dream.

I noticed one change this year when I surveyed students about their preferences for forming teams. In an ordinary year, most students submit at least one or two names of others in the class with whom they'd like to work; some already have formed the teams they want to work in. A few indicate someone they'd rather not work with, usually based on experiences in previous courses. This helps me help them form teams with a mix of new and familiar, with some hedge against expected difficulties. It's never perfect, but most years we end up with a decent set of teams and project experiences.

This year, though, students barely offered any suggestions for forming teams. Most students expressed no preference for whom they want to work with, and no one indicated someone they don't want to work with.

At first, this seemed strange to me, but then I realized that it is likely an effect of three semesters distorted by COVID-19. With one semester forced online and into isolation, a second semester with universal masking, no extracurricular activities, and no social life, and a third semester with continued masking and continued encouragement not to gather, these students have had almost no opportunity to get to know one another!

This isolation eliminates one of the great advantages of a residential university, both personally and professionally. I made so many friends in college, some of whom I'm still close to, and spent time with them whenever I wasn't studying (which, admittedly, was a lot). But it also affects the classroom, where students build bonds over semesters of taking courses together in various configurations. Those bonds carry over into a project course such as mine, where they lubricate the wheels of teams who have to work together more closely than before. They at least begin the project knowing each other a bit and sharing a few academic experiences.

Several students in my class this semester said, "I have no friends in this class" or even "I don't know any other CS majors". That is sad. It also raises the stakes for the compiler project, which may be their only chance to make acquaintances in their major before they graduate. I feel a lot more responsibility as I begin to group students into teams this semester, even as I know that I have less information available than ever before for doing a credible job.

I'm going to keep all this in mind as the semester unfolds and pay closer attention to how students and teams seem to be doing. Perhaps this course can not only help them have a satisfying and educational experience building a big piece of software, but also help them form some of the personal bonds that add grace notes to their undergrad years.

~~~~~

On an unrelated note, I received word a couple of weeks ago that this blog had been selected by Feedspot as one of the Top 20 Computer Science Blogs on the web. It's always nice to be recognized in this way. Given how little I've blogged over the last couple of years, it is rather generous to include me on this list! The list includes a number of top-quality blogs, several of which I read religiously, and most of which post entries with admirable regularity. It remains a goal of mine to return to writing here more regularly. Perhaps two entries within a week, light as they are, offer hope.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

August 27, 2021 3:30 PM

Back To My Compilers Course

Well, a month has passed. Already, the first week of classes is in the books. My compiler course is off to as good a start as one might hope.

Week 1 of the course is an orientation to the course content and project. Content-wise, Day 1 offers a bird's-eye view of what a compiler does, then Day 2 tries to give a bird's-eye view of how a compiler works. Beginning next week, we go deep on the stages of a compiler, looking at techniques students can use to implement their compiler for a small language. That compiler project is the centerpiece and focus of the course.

Every year, I think about ways to shake up this course. (Well, not last year, because we weren't able to offer it due to COVID.) As I prepared for the course, I revisited this summary of responses to a Twitter request from John Regehr: What should be taught in a modern undergrad compiler class? It was a lot of fun to look back through the many recommendations and papers linked there. In the end, though, the response that stuck with me came from Celeste Hollenbeck, who "noted the appeal of focusing on the basics over esoterica": compilers for the masses, not compilers for compiler people.

Our class is compilers for everyone in our major, or potentially so. Its main role in our curriculum is to be one of four so-called project courses, which serve as capstones for a broad set of electives. Many of the students in the course take it to satisfy their project requirement, others take it to satisfy a distribution requirement, and a few take it just because it sounds like fun.

The course is basic, and a little old-fashioned, but that works for us. The vast majority of our students will never write a compiler again. They are in the course to learn something about how compilers work conceptually and to learn what it is like to build a large piece of software with a team. We talk about modern compiler technology such as LLVM, but working with such complicated systems would detract from the more general goals of the course for our students. Some specific skills for writing scanners and parsers, a little insight into how compilers work, and experience writing a big program with others (and living with design decisions for a couple of months!) are solid outcomes for an undergrad capstone project.

That's not to say that some students don't go on to do more with compilers... Some do. A few years ago, one of our undergrads interviewed his way into an internship with Sony PlayStation's compiler team, where he now works full time. Other students have written compilers for their own languages, including one that was integrated as a scripting language into a gaming engine he had built. In that sense, the course seems to serve the more focused students well, too.

Once more unto the breach, dear friends, once more...
-- Henry V

So, we are off. I still haven't described the source language my students will be processing this semester, as promised in my last post. Soon. Since then, though, I wrote a bunch of small programs in the language just to get a feel for it. That's as much fun as a department head gets to have most days these days.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

May 19, 2021 3:07 PM

Today's Thinking Prompts, in Tweets

On teaching, via Robert Talbert:

Look at the course you teach most often. If you had the power to remove one significant topic from that course, what would it be, and why?

I have a high degree of autonomy in most of the courses I teach, so power isn't the limiting factor for me. Time is a challenge to making big changes, of course. Gumption is probably what I need most right now. Summer is a great time for me to think about this, both for my compiler course this fall and programming languages next spring.

On research, via Kris Micinski:

i remember back to Dana Scott's lecture on the history of the lambda calculus where he says, "If Turing were alive today, I don't know what he'd be doing, but it wouldn't be recursive function theory." I think about that a lot.

Now I am, too. Seriously. I'm no Turing, but I have a few years left and some energy to put into something that matters. Doing so will require some gumption to make other changes in my work life first. I am reaching a tipping point.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Teaching and Learning

May 06, 2021 3:19 PM

Sometimes You Have To Just Start Talking

I have been enjoying a few of James Propp's essays recently. Last month he wrote about the creation of zero. In Who Needs Zero, he writes:

But in mathematics, premature attempts to reach philosophical clarity can get in the way of progress both at the individual level and at the cultural level. Sometimes you have to just start talking before you understand what you're talking about.

This reminded me of a passage by Iris Murdoch in Metaphysics as a Guide to Morals, which I encountered in one of Robin Sloan's newsletters:

The achievement of coherence is itself ambiguous. Coherence is not necessarily good, and one must question its cost. Better sometimes to remain confused.

My brain seems hardwired to seek out and create abstractions. Perhaps it's just a deeply ingrained habit. Even so I am a pragmatist at heart. As Propp says, "Zero is as zero does."

Allowing oneself to remain confused, to forge ahead without having reached clarity yet, is essential to doing research, or to learning anything at all, really.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

April 30, 2021 1:56 PM

Good News at the End of a Long Year, v2.0

A couple of weeks ago, a former student emailed me after many years. Felix immigrated to the US from the Sudan back in the 1990s and wound up at my university, where he studied computer science. While in our program, he took a course or two with me, and I supervised his undergrad research project. He graduated and got busy with life, and we lost touch.

He emailed to let me know that he was about to defend his Ph.D. dissertation, titled "Efficient Reconstruction and Proofreading of Neural Circuits", at Harvard. After graduating from UNI, he programmed at DreamWorks Interactive and EA Sports, before going to grad school and working to "unpack neuroscience datasets that are almost too massive to wrap one's mind around". He defended his dissertation successfully this week.

Congratulations, Dr. Gonda!

Felix wrote initially to ask permission to acknowledge me in his dissertation and defense. As I told him, it is an honor to be remembered so fondly after so many years. People often talk about how teachers affect their students' futures in ways that are often hard to see. This is one of those moments for me. Arriving at the end of what has been a challenging semester in the classroom for me, Felix's note boosted my spirit and energizes me a bit going into the summer.

If you'd like to learn more about Felix and his research, here is his personal webpage. The Harvard School of Engineering also has a nice profile of Felix that shows you what a neat person he is.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Teaching and Learning

April 23, 2021 3:59 PM

A Positive Story at the End of a Long Year

This is short story about a student finding something helpful in class and making my day, preceded by a long-ish back story.

In my programming languages course yesterday, I did a session on optimization. It's a topic of some importance, and students are usually interested in what it means for an interpreter or compiler to "optimize" code. I like to show students a concrete example that demonstrates the value of an optimization. Given where we are in the course and the curriculum, though, it would be difficult to do that with a full-featured language such as Python or Java, or even Racket. On the other end of the spectrum, the little languages they have been implementing and using all semester are too simple to benefit from meaningful optimization.

I found a sweet spot in between these extremes with BF. (Language alert!) I suppose it is more accurate to say that Eli Bendersky found the sweet spot, and I found Bendersky's work. Back in 2017, he wrote a series of blog posts on how to write just-in-time compilers, using BF as his playground. The first article in that series inspired me to implement something similar in Python and to adapt it for use with my students.

BF is well-suited for my purposes. It is a very simple language, consisting of only eight low-level operators. It is possible to write a small interpreter for BF that students with only a background in data structures can understand. Even so, the language is Turing complete, which means that we can write interesting and arbitrarily complex programs.
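For readers who have never seen BF, here is a minimal naive interpreter in Python. This is my own sketch, not the interpreter used in class, but it shows how little machinery the eight operators require -- and how wasteful the bracket handling is, since every loop jump rescans the program for its matching bracket.

```python
# A deliberately naive BF interpreter (an illustrative sketch, not the
# classroom code). The eight operators are:  > < + - . , [ ]
def run_bf(program, input_chars=""):
    tape = [0] * 30000          # the conventional BF memory tape
    ptr = 0                     # data pointer into the tape
    pc = 0                      # program counter
    inp = iter(input_chars)
    out = []
    while pc < len(program):
        op = program[pc]
        if op == ">":
            ptr += 1
        elif op == "<":
            ptr -= 1
        elif op == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif op == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif op == ".":
            out.append(chr(tape[ptr]))
        elif op == ",":
            tape[ptr] = ord(next(inp, "\0"))
        elif op == "[" and tape[ptr] == 0:
            # naive: scan forward for the matching ']' on every jump
            depth = 1
            while depth:
                pc += 1
                depth += {"[": 1, "]": -1}.get(program[pc], 0)
        elif op == "]" and tape[ptr] != 0:
            # naive: scan backward for the matching '[' on every jump
            depth = 1
            while depth:
                pc -= 1
                depth -= {"[": 1, "]": -1}.get(program[pc], 0)
        pc += 1
    return "".join(out)

print(run_bf("++++++++[>++++++++<-]>+."))   # prints "A" (8 x 8 + 1 = 65)
```

On loop-heavy programs, those linear bracket scans inside every iteration are exactly the cost the optimizations below attack.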

The low-level simplicity of BF combines with its Turing completeness to create programs that are horribly inefficient if they are interpreted in a naive manner. There are many simple ways to optimize BF programs, including creating a jump table to speed up loops and parsing runs of identical opcodes (moves, increments, and decrements) as more efficient higher-level operators. Even better, the code to implement these optimizations is also understandable to a student with only data structures and a little background in programming languages.

My session is built around a pair of interpreters, one written in a naive fashion and the other implementing an optimization. This semester, we preprocessed BF programs to compute a table that makes jumping to the beginning or end of a loop an O(1) operation just like BF's other six primitives. The speed-up on big BF programs, such as factoring large numbers or computing a Mandelbrot set, is impressive.
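The preprocessing pass that builds such a jump table fits in a dozen lines. This is my own reconstruction of the idea, not the code from the session: one scan over the program pairs every '[' with its ']' using a stack, so each loop jump becomes a single table lookup.

```python
# Sketch of the jump-table preprocessing pass (a reconstruction, not
# the classroom code): pair each '[' with its matching ']' so that
# loop jumps in the interpreter become O(1) lookups.
def build_jump_table(program):
    jumps = {}
    stack = []                   # positions of currently open '['s
    for pc, op in enumerate(program):
        if op == "[":
            stack.append(pc)
        elif op == "]":
            start = stack.pop()
            jumps[start] = pc    # '[' jumps forward to its ']'
            jumps[pc] = start    # ']' jumps back to its '['
    if stack:
        raise SyntaxError("unmatched '[' in BF program")
    return jumps
```

In the interpreter's main loop, the two bracket cases then collapse to `pc = jumps[pc]` instead of a scan, which is where the dramatic speed-up on programs like the Mandelbrot generator comes from.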

Now to the story.

At the end of class, I talk a bit about esoteric languages more broadly as a way for programmers to test the boundaries of programming language design, or simply to have fun. I get to tell students a story about a four-hour flight back from OOPSLA one year during which I decided to roll a quick interpreter for Ook in Scheme. (What can I say; programming is fun.)

To illustrate some of the fun and show that programmers can be artists, too, I demo programs in the language Piet, which is named for the Dutch abstract painter Piet Mondrian. He created paintings that look like this:

a Piet program that prints 'Piet'

That is not a Mondrian, but it is a legal program in the Piet language. It prints 'Piet'. Here is another legal Piet program:

a Piet program that prints 'Hello, World'

It prints "Hello, World". Here's another:

a Piet program that determines if a number is prime

That program reads an integer from standard input, determines whether it is prime or not, and prints 'Y' or 'N'. Finally, how about this:

a Piet program that prints 'tetris'

If you are a certain age, you may notice something special about this image: It is made up exclusively of Tetris pieces. The program prints... "Tetris". Programming truly is an art!

One of my students was inspired. While reviewing the session notes, he searched for more information about Piet online and found this interactive editor. He then used it to create a Piet program in honor of a friend of his who passed away earlier this semester. It prints the Xbox gamertag of his late friend. In his email to me, he said that writing this program was therapeutic.

I'm not sure one of my class sessions has ever had a more important outcome. I'm also not sure that I have ever been happier to receive email from a student.

This has been a tough year for most everyone, and especially for students who are struggling with isolation and countermeasures against a nasty virus. I'm so glad that programming gave one student a little solace, at least for an evening. I'm also glad he shared his story with me.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

March 25, 2021 4:18 PM

Teaching Yourself the Material

A common complaint from students is that the professor makes them teach themselves the material.

From In Defense of Teaching Yourself the Material:

Higher education institutions must orient themselves toward teaching students how to teach themselves, or risk becoming irrelevant. I'll add to the above that self-teaching (and self-regulation) are also valuable job skills. During my time at Steelcase, I learned that what the company wanted was not so much a recent college graduate with straight A's, but someone who could learn quickly and get up to speed without having to pull someone else off their job to teach them. So self-teaching is not only the ultimate goal of higher education and the main instantiation of lifelong learning, it's also what gives graduates a competitive advantage on the job market and sets them up not to be stuck in a loop for their careers. I want my students to be able to say truthfully in an interview, when asked why they should be hired: I know how to learn things, and learn fast, without a lot of hand-holding. That is music to the employer's ears. The word will get out about which colleges equip students well in this area. Similarly for those that don't.

Talbert has more to say about the value of students' being able to teach themselves. One of our jobs as instructors is to provide the scaffolding that students need to learn how to do this effectively in our discipline, and then slowly dismantle the scaffolding and let students take over.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 28, 2021 9:47 AM

Find Your Passion? Master Something.

A few weeks ago, a Scott Galloway video clip made the rounds. In it, Galloway was saying something about "finding your passion" that many people have been saying for a long time, only in that style that makes Galloway so entertaining. Here's a great bit of practical advice on the same topic from tech guru Kevin Kelly:

Following your bliss is a recipe for paralysis if you don't know what you are passionate about. A better motto for most youth is "master something, anything". Through mastery of one thing, you can drift towards extensions of that mastery that bring you more joy, and eventually discover where your bliss is.

My first joking thought when I read this was, "Well, maybe not anything..." I mean, I can think of lots of things that don't seem worth mastering, like playing video games. But then I read about professional gamers making hundreds of thousands of dollars a year, so who am I to say? Find something you are good at, and get really good at it. As Galloway says, like Chris Rock before him, it's best to become good at something that other people will pay you for. But mastery of anything opens doors that passion can only bang on.

The key to the "master something, anything" mantra is the next sentence of Kelly's advice. When we master something, our expertise creates opportunities. We can move up or down the hierarchy of activities built from that mastery, or to related domains. That is where we are most likely to find the life that brings us joy. Even better, we will find it in a place where our mastery helps us get through the inevitable drudge work and over the inevitable obstacles that will pop in our way. I love to program, but some days debugging is a slog, and other days I butt up against thorny problems beyond my control. The good news is that I have skills to get through those days, and I like what I'm doing enough to push on through to the more frequent moments and days of bliss.

Passion is wonderful if you have it, but it's hard to conjure up on its own. Mastering a skill, or a set of skills, is something every one of us can do, and by doing it we can find our way to something that makes us happy.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Teaching and Learning

January 08, 2021 2:12 PM

My Experience Storing an Entire Course Directory in Git

Last summer, I tried something new: I stored the entire directory of materials for my database course in Git. This included all of my code, the course website, and everything else. It worked well.

The idea came from a post or tweet many years ago by Martin Fowler who, if I recall correctly, had put his entire home directory under version control. It sounded like the potential advantages might be worth the cost, so I made a note to try it myself sometime. I wasn't quite ready last summer to go all the way, so I took a baby step by creating my new course directory as a git repo and growing it file by file.

My context is pretty simple. I do almost all of my work on a personal MacBook Pro or a university iMac in my office. My main challenge is to keep my files in sync. When I make changes to a small number of files, or when the stakes of a missing file are low, copying files by hand works fine, with low overhead and no tooling necessary.

When I make a lot of changes in a short period of time, however, as I sometimes do when writing code or building my website, doing things by hand becomes more work. And the stakes of losing code or web pages are a lot higher than losing track of some planning notes or code I've been noodling with. To solve this problem, for many years I have been using rsync and a couple of simple shell scripts to manage code directories and my course web sites.

So, the primary goal for using Git in a new workflow was to replace rsync. Not being a Git guru, as many of you are, I figured that this would also force me to live with git more often and perhaps expand my tool set of handy commands.

My workflow for the semester was quite simple. When I worked in the office, there were four steps:

  1. git merge laptop
  2. [ do some work ]
  3. git commit
  4. git push

On my laptop, the opening and closing git commands changed:

  1. git pull origin main
  2. [ do some work ]
  3. git commit
  4. git push origin laptop
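For concreteness, here is a hypothetical reconstruction of the setup behind those two command sequences: a bare repository standing in for origin, one clone per machine, and a dedicated laptop branch. The repo names and branch layout are my assumptions; the post doesn't show its setup.

```shell
# Hypothetical two-machine setup for the workflow above (names and
# layout assumed, not taken from the post).
set -e
work=$(mktemp -d) && cd "$work"

# A bare repo plays the role of "origin" (in practice, a hosted remote).
git init -q --bare origin.git

# The office machine clones it and commits on main.
git clone -q origin.git office && cd office
git config user.name demo && git config user.email demo@example.com
git checkout -q -b main
echo "syllabus" > notes.txt
git add . && git commit -qm "initial course notes"
git push -q origin main
cd ..

# The laptop clones main but works on its own branch.
git clone -q -b main origin.git laptop && cd laptop
git config user.name demo && git config user.email demo@example.com
git checkout -q -b laptop
echo "lecture 1" > lecture1.txt
git add . && git commit -qm "draft lecture 1"
git push -q origin laptop        # step 4 of a laptop session
cd ..

# Back at the office: fetch and merge the laptop's work (step 1).
cd office
git fetch -q origin
git merge -q origin/laptop
```

After the merge, the office clone holds both machines' work; committing and pushing main completes the cycle described above.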

My work on a course is usually pretty straightforward. The most common task is to create files and record information with commit. Every once in a while, I had to back up a step with checkout.

You may say, "But you are not using git for version control!" You would be correct. The few times I checked out an older version of a file, it was usually to eliminate a spurious conflict, say, a .DS_Store file that was out of sync. Locally, I don't need a lot of version control, but using Git this way was a form of distributed version control, making sure that, wherever I was working, I had the latest version of every file.

I think this is a perfectly valid way to use Git. In some ways, Git is the new Unix. It provided me with a distributed filesystem and a file backup system all in one. The git commands ran effectively as fast as their Unix counterparts. My repo was not very much bigger than the directory would have been on its own, and I always had a personal copy of the entire repo with me wherever I went, even if I had to use another computer.

Before I started, several people reminded me that Git doesn't always work well with large images and binaries. That didn't turn out to be much of a problem for me. I had a couple of each in the repo, but they were not large and never changed. I never noticed a performance hit.

The most annoying hiccup all semester was working with OS X's .DS_Store files, which record screen layout information for OS X. I like to keep my windows looking neat and occasionally reorganize a directory layout to reflect what I'm doing. Unfortunately, OS X seems to update these files at odd times, after I've closed a window and pushed changes. Suddenly the two repos would be out of sync only because one or more .DS_Store files had changed after the fact. The momentary obstacle was quickly eliminated with a checkout or two before merging. Perhaps I should have left the .DS_Stores untracked...
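For anyone hitting the same annoyance, the standard remedy is a one-line ignore rule plus removing the files from the index. This is generic Git practice, not something from the post:

```shell
# Generic fix for the .DS_Store annoyance: ignore the files and drop
# any copies already in the index, without deleting them on disk.
# (A demo repo is created here so the commands are self-contained.)
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q .
git config user.name demo && git config user.email demo@example.com

touch .DS_Store && echo "notes" > notes.txt
git add -A && git commit -qm "accidentally tracked .DS_Store"

echo ".DS_Store" >> .gitignore        # ignore future copies
git rm -q --cached .DS_Store          # untrack, but keep on disk
git add .gitignore && git commit -qm "stop tracking .DS_Store"
```

Once the file is both untracked and ignored, OS X can rewrite it at odd times without ever putting the two repos out of sync.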

All in all, I was pretty happy with the experience. I used more git, more often, than ever before and thus am now a bit more fluent than I was. (I still avoid the hairier corners of the tool, as all right-thinking people do whenever possible.) Even more, the repository contains a complete record of my work for the semester, false starts included, with occasional ruminations about troubles with code or lecture notes in my commit messages. I had a little fun after the semester ended looking back over some of those messages and making note of particular pain points.

The experiment went well enough that I plan to track my spring course in Git, too. This will be a bigger test. I've been teaching programming languages for many years and have a large directory of files, both current and archival. Not only are there more files, there are several binaries and a few larger images. I'm trying to decide if I should put the entire folder into git all at once upfront or start with an empty folder, à la last semester, and add files as I want or need them. The latter would be more work at early stages of development but might be a good way to clear out the clutter that has built up over twenty years.

If you have any advice on that choice, or any other, please let me know by email or on Twitter. You all have taught me a lot over the years. I appreciate it.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 31, 2020 12:33 PM

Go Home and Study

In a conversation with Tyler Cowen, economist Garett Jones said:

... my job in the classroom is not to teach the details of any theory. My job is to give students a reason to feel passionate enough about the topic so that they'll go home for two or three hours and study it on their own.

Perhaps this is a matter of context, but I don't think this assertion is entirely accurate. It might give the wrong impression to the uninitiated by leaving an essential complementary task implicit.

One could read this as saying that the instructor's job is purely one of motivation. Closures! Rah-rah! Get students excited enough to go learn everything about them on their own, and the instructor has succeeded.

If you think that's true, then I can introduce you to many students who have struggled or failed to learn something new despite being excited to learn and putting in a lot of time. They were missing some prerequisite knowledge or didn't have the experience they needed to navigate the complexities of a new area of study. In principle, if they plugged away at it long enough, they would eventually get there, but then why bother having an instructor at all?

So I think that, as instructor, I have two jobs. I do need to motivate students to put in the time and effort they need to study. Learning happens inside the student, and that requires personal study. I also, though, have to help create the conditions under which they can succeed. This involves all sorts of things: giving them essential background, pointing them toward useful resources, helping them practice the skills they'll need to learn effectively, and so on.

Motivation is in some ways a necessary part of the instructor's job. If students don't want to invest time in study and practice, then they will not learn much. But motivation is not sufficient. The instructor must also put the student in a position to learn effectively.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 30, 2020 3:38 PM

That's a Shame

In the middle of an old post about computing an "impossible" integral, John Cook says:

In the artificial world of the calculus classroom, everything works out nicely. And this is a shame.

When I was a student, I probably took comfort in the fact that everything was supposed to work out nicely on the homework we did. There *was* a solution; I just had to find the pattern, or the key that turned the lock. I suspect that I was a good student largely because I was good at finding the patterns, the keys.

It wasn't until I got to grad school that things really changed, and even then course work was typically organized pretty neatly. Research in the lab was very different, of course, and that's where my old skills no longer served me so well.

In university programs in computer science, where many people first learn how to develop software, things tend to work out nicely. That is a shame, too. But it's a tough problem to solve.

In most courses, in particular introductory courses, we create assignments that have "closed form" solutions, because we want students to practice a specific skill or learn a particular concept. Having a fixed target can be useful in achieving the desired outcomes, especially if we want to help students build confidence in their abilities.

It's important, though, that we eventually take off the training wheels and expose students to messier problems. That's where they have an opportunity to build other important skills they need for solving problems outside the classroom, which aren't designed by a benevolent instructor to follow a pattern. As Cook says, neat problems can create a false impression that every problem has a simple solution.

Students who go on to use calculus for anything more than artificial homework problems may incorrectly assume they've done something wrong when they encounter an integral in the wild.

CS students need experience writing programs that solve messy problems. In more advanced courses, my colleagues and I all try to extend students' ability to solve less neatly-designed problems, with mixed results.

It's possible to design a coherent curriculum that exposes students to an increasingly messy set of problems, but I don't think many universities do this. One big problem is that doing so requires coordination across many courses, each of which has its own specific content outcomes. There's never enough time, it seems, to teach everything about, say, AI or databases, in the fifteen weeks available. It's easier to be sure that we cover another concept than it is to be sure students take a reliable step along the path from being able to solve elementary problems to being able to solve the problems they'll find in the wild.

I face this set of competing forces every semester and do my best to strike a balance. It's never easy.

Courses that involve large systems projects are one place where students in my program have a chance to work on a real problem: writing a compiler, an embedded real-time system, or an AI-based system. These courses have closed form solutions of sorts, but the scale and complexity of the problems require students to do more than just apply formulas or find simple patterns.

Many students thrive in these settings. "Finally," they say, "this is a problem worth working on." These students will be fine when they graduate. Other students struggle when they have to do battle for the first time with an unruly language grammar or a set of fussy physical sensors. One of my challenges in my project course is to help this group of students move further along the path from "student doing homework" to "professional solving problems".

That would be a lot easier to do if we more reliably helped students take small steps along that path in their preceding courses. But that, as I've said, is difficult.

This post describes a problem in curriculum design without offering any solutions. I will think more about how I try to balance the forces between neat and messy in my courses, and then share some concrete ideas. If you have any approaches that have worked for you, or suggestions based on your experiences as a student, please email me or send me a message on Twitter. I'd love to learn how to do this better.

I've written a number of posts over the years that circle around this problem in curriculum and instruction. Here are three:

I'm re-reading these to see if past me has any ideas for present-day me. Perhaps you will find them interesting, too.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

December 28, 2020 9:23 AM

Fall Semester Hit Me Harder Than I Realized

(This is an almost entirely personal entry. If that's not your thing, feel free to move on.)

Last Monday morning, I sat down and wrote a blog entry. It was no big deal, just an observation from some reading I've been doing. A personal note, a throwaway. I'm doing the same this morning.

I'd forgotten how good that can feel, which tells you something about my summer and fall.

Last spring, I wrote that I would be teaching a new course in the fall. I was pretty excited for the change of pace, even if it meant not teaching my compiler course this year, and was already thinking ahead to course content, possible modes of delivery in the face of Covid-19, and textbooks.

Then came summer.

Some of my usual department head tasks, like handling orientation sessions for incoming first-year students, were on the calendar, but the need to conduct them remotely expanded what used to be a few hours of work each week to a few hours each day. (It must have been even worse for the university staff who organize and run orientation!) Uncertainty due to the pandemic and the indecisiveness of upper administration created new work, such as a seemingly endless discussion of fall schedule, class sizes, and room re-allocation.

One of the effects of all this work was that, when August rolled around, I was not much better prepared to teach my class than I had been in May when I solicited everyone's advice.

Once fall semester started, my common refrain in conversations with friends was, "It feels like I'm on a treadmill." As soon as I finished preparing a week's in-class session, I had to prepare the week's online activity. Then there were homework assignments to write and grade. Or I had to write an exam, or meet with students to discuss questions or difficulties, made all the more difficult by the stress the pandemic placed on them. I never felt like I could take a day or an hour off, and when I did, class was still there in my mind, reminding me of all I had to do before another week began and the cycle started again.

That went on for fourteen weeks. I didn't feel out of sorts so much as simply always busy. It would be over soon enough, my rational mind told me.

When the semester ended at Thanksgiving, the treadmill of new work disappeared and all that was left was the grading. I did not rush that work, letting it spread over most of the week and a half I had before grades were due. I figured that it was time to decompress a bit.

After grades were in and I had time to get back to normal, I noticed some odd things happening. First of all I was sleeping a lot: generous naps most days, and a couple of Fridays where I was in bed for ten hours of rest (followed, predictably, by a nap later in the day). I'm no insomniac by nature, but this was much more sleep than I usually take, or need.

My workout data told a tale of change, too. My elliptical and bike performances had been steadily showing the small improvements of increased capability through May or so. They leveled off into the summer months, when I was able to ride outside more with my wife. Then fall started, and my performance levels declined steadily into November. The numbers started to bounce back in December, and I feel as strong as I've felt in a long while.

I guess fall semester hit me harder than I realized.

In most ways, I feel like I'm back to normal now. I guess we will find out next week, when my attention turns to spring semester, both as department head and instructor. At least I get to teach programming languages, where I have a deep collection of session materials in hand and years of thinking and practice to buoy me up. Even with continued uncertainty due to the pandemic, I'm in pretty good shape.

Another effect of the summer and fall was that my already too-infrequent blogging slowed to a trickle. The fate of most blogs is a lack of drama. For me, blogging tends to flag when I am not reading or doing interesting work. The gradual expansion of administrative duties over the last few years has certainly taken its toll. But teaching a new course usually energizes me and leads to more regular writing here. That didn't happen this fall.

With the semester now over, I have a better sense of the stress I must have been feeling. It affected my sleep, my workouts, and my teaching. It's no surprise that it affected my writing, too.

One of my goals for the coming year is to seek the sort of conscious, intentional awakening of the senses that Gide alludes to in the passage quoted in that blog post. I'm also going to pay better attention to the signs that the treadmill is moving too fast. Running faster isn't always the solution.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

November 29, 2020 2:26 PM

Speaking in a Second Language

From this enlightening article that was being passed around a while back:

Talking posed a challenge for me. While my Mandarin was strong for someone who had grown up in the US, I wasn't fluent enough to express myself in the way I wanted. This had some benefits: I had to think before I spoke. I was more measured. I was a better listener. But it was also frustrating, as though I'd turned into a person who was meek and slow on the uptake. It made me think twice about the Chinese speakers at work or school in the US whom I'd judged as passive or retiring. Perhaps they were also funny, assertive, flirtatious, and profane in their native tongue, as I am in mine.

When people in the US talk about the benefits of learning a second language, they rarely, if ever, mention the empathy one can develop for others who speak and work in a second language. Maybe that's because so few of us Americans learn a foreign language well enough to reach this level of enlightenment.

I myself learned just enough German in school to marvel at the accomplishment of exchange students studying here in their second language, knowing that I was nowhere near ready to live and study in a German-speaking land. Marvel, though, is not quite as valuable in this context as empathy.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

November 28, 2020 11:04 AM

How Might A Program Help Me Solve This Problem?

Note: The following comes from the bottom of my previous post. It gets buried there beneath a lot of code and thinking out loud, but it's a message that stands on its own.

~~~~~

I demo'ed a variation of my database-driven passphrase generator to my students as we closed the course last week. It let me wrap up my time with them by reminding them that they are developing skills that can change how they see every problem they encounter in the future.

Knowing how to write programs gives you a new power. Whenever you encounter a problem, you can ask yourself, "How might a program help me solve this?"

The same is true for many more specialized CS skills. People who know how to create a language and implement an interpreter can ask themselves, "How might a language help me solve this problem?" That's one of the outcomes, I hope, of our course in programming languages.

The same is true for databases, too. Whenever you encounter a problem, you can ask yourself, "Can a database help me solve this?"

Computer science students can use the tools they learn each semester to represent and interpret information. That's a power they can use to solve many problems. It's easy to lose sight of that fact during a busy semester and worth reflecting on in calmer moments.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 25, 2020 12:15 PM

Three Bears-ing a Password Generator in SQLite

A long semester -- shorter in time than usual by more than a week, but longer psychologically than any in a long time -- is coming to an end. Teaching databases for the first time was a lot of fun, though the daily grind of preparing so much new material in real time wore me down. Fortunately, I like to program enough that there were moments of fun scattered throughout the semester as I played with SQLite for the first time.

In the spirit of the Three Bears pattern, I looked for opportunities all semester to use SQLite to solve a problem. When I read about the Diceware technique for generating passphrases, I found one.

Diceware is a technique for generating passphrases using dice to select words from the "Diceware Word List", in which each word is paired with a five digit number. All of the digits are between one and six, so five dice rolls are all you need to select a word from the list. Choose a number of words for the passphrase, roll your dice, and select your words.

The Diceware Word List is a tab-separated file of dice rolls and words. SQLite can import TSV data directly into a table, so I almost have a database. I had to preprocess the file twice to make it importable. First, the file is wrapped as a PGP signed message, so I stripped the header and footer by hand, to create diceware-wordlist.txt.

Second, some of the words in this list contain quote characters. Like many applications, SQLite struggles with CSV and TSV files that contain embedded quote characters. There may be some way to configure it to handle these files gracefully, but I didn't bother looking for one. I just replaced the ' and " characters with _ and __, respectively:

    cat diceware-wordlist.txt \
      | sed "s/\'/_/g"        \
      | sed 's/\"/__/g'       \
      > wordlist.txt

Now the file is ready to import:

    sqlite> CREATE TABLE WordList(
       ...>    diceroll CHAR(5),
       ...>    word VARCHAR(30),
       ...>    PRIMARY KEY(diceroll)
       ...>    );

    sqlite> .mode tabs
    sqlite> .import 'wordlist.txt' WordList

    sqlite> SELECT * FROM WordList
       ...> WHERE diceroll = '11113';
    11113   a_s

That's one of the words that used to contain an apostrophe.

So, I have a dice roll/word table keyed on the dice roll. Now I want to choose words at random from the table. To do that, I needed a couple of SQL features we had not used in class: random numbers and string concatenation. The random() function returns a big integer. A quick web search showed me this code to generate a random base-10 digit:

    SELECT abs(random())%10
    FROM (SELECT 1);

which is easy to turn into a random die roll:

    SELECT 1+abs(random())%6
    FROM (SELECT 1);

I need to evaluate this query repeatedly, so I created a view that wraps the code in what acts, effectively, as a function:

    sqlite> CREATE VIEW RandomDie AS
       ...>   SELECT 1+abs(random())%6 AS n
       ...>   FROM (SELECT 1);

Aliasing the random value as 'n' is important because I need to string together a five-roll sequence. SQL's concatenation operator helps there:

    SELECT 'Eugene' || ' ' || 'Wallingford';

I can use the operator to generate a five-character dice roll by selecting from the view five times...

    sqlite> SELECT n||n||n||n||n FROM RandomDie;
    21311
    sqlite> SELECT n||n||n||n||n FROM RandomDie;
    63535

... and then use that phrase to select random words from the list:

    sqlite> SELECT word FROM WordList
       ...> WHERE diceroll =
       ...>         (SELECT n||n||n||n||n FROM RandomDie);
    fan

Hurray! Diceware defaults to three-word passphrases, so I need to do this three times and concatenate.

This won't work...

    sqlite> SELECT word, word, word FROM WordList
       ...> WHERE diceroll =
       ...>         (SELECT n||n||n||n||n FROM RandomDie);
    crt|crt|crt

... because the dice roll is computed only once. A view can help us here, too:

    sqlite> CREATE VIEW OneRoll AS
       ...>   SELECT word FROM WordList
       ...>   WHERE diceroll =
       ...>           (SELECT n||n||n||n||n FROM RandomDie);

OneRoll acts like a table that returns a random word:

    sqlite> SELECT word FROM OneRoll;
    howdy
    sqlite> SELECT word FROM OneRoll;
    scope
    sqlite> SELECT word FROM OneRoll;
    snip

Almost there. Now, this query generates three-word passphrases:

    sqlite> SELECT Word1.word || ' ' || Word2.word || ' ' || Word3.word FROM
       ...>   (SELECT * FROM OneRoll) AS Word1,
       ...>   (SELECT * FROM OneRoll) AS Word2,
       ...>   (SELECT * FROM OneRoll) AS Word3;
    eagle crab pinch

Yea! I saved this query in gen-password.sql and saved the SQLite database containing the table WordList and the views RandomDie and OneRoll as diceware.db. This lets me generate passphrases from the command line:

    $ sqlite3 diceware.db < gen-password.sql
    ywca maine over

Finally, I saved that command in a shell script named gen-password, and I now have a passphrase generator ready to use with a few keystrokes. Success.

Yes, this is a lot of work to get a simple job done. Maybe I could do better with Python and a CSV reader package, or some other tools. But that wasn't the point. I was revisiting SQL and learning SQLite with my students. By overusing the tools, I learned them both a little better and helped refine my sense of when they will and won't be helpful to me in the future. So, success.
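
For what it's worth, here's a rough sketch of that Python route, assuming the same cleaned-up wordlist.txt from above. The function names are mine, and this is an illustration of the idea rather than the tool described in this post:

```python
import random

def load_wordlist(path="wordlist.txt"):
    """Read the tab-separated Diceware list into a dict mapping dice roll -> word."""
    table = {}
    with open(path) as f:
        for line in f:
            roll, word = line.strip().split("\t")
            table[roll] = word
    return table

def passphrase(table, words=3):
    """Simulate five dice rolls per word and join the selected words."""
    def roll():
        return "".join(str(random.randint(1, 6)) for _ in range(5))
    return " ".join(table[roll()] for _ in range(words))

# Usage, assuming wordlist.txt is in the current directory:
#   table = load_wordlist()
#   print(passphrase(table))
```

Shorter, perhaps, but it hides the data behind functions. The SQL version keeps the table and the queries out in the open, which was half the fun.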

~~~~~

I demo'ed a variation of this to my students on the last day of class. It let me wrap up my time with them by pointing out that they are developing skills which can change how they see every problem they encounter in the future.

Knowing how to write programs gives you a new power. Whenever you encounter a problem, you can ask yourself, "How might a program help me solve this?" I do this daily, both as faculty member and department head.

The same is true for many more specialized CS skills. People who know how to create a language and implement an interpreter can ask themselves, "How might a language help me solve this problem?" That's one of the outcomes, I hope, of our course in programming languages.

The same is true for databases. When I came across a technique for generating passphrases, I could ask myself, "How might a database help me build a passphrase generator?"

Computer science students can use the tools they learn each semester to represent and interpret information. That's a power they can use to solve many problems. It's easy to lose sight of this incredible power during a hectic semester, and worth reflecting on in calmer moments after the semester ends.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 19, 2020 2:27 PM

An "Achievement Gap" Is Usually A Participation Gap

People often look at the difference between the highest-rated male chess player in a group and the highest-rated female chess player in the same group and conclude that there is a difference between the abilities of men and women to play chess, despite the fact that there are usually many, many more men in the group than women. But that's not even good evidence that there is an achievement gap. From What Gender Gap in Chess?:

It's really quite simple. Let's say I have two groups, A and B. Group A has 10 people, group B has 2. Each of the 12 people gets randomly assigned a number between 1 and 100 (with replacement). Then I use the highest number in Group A as the score for Group A and the highest number in Group B as the score for Group B. On average, Group A will score 91.4 and Group B 67.2. The only difference between Groups A and B is the number of people. The larger group has more shots at a high score, so will on average get a higher score. The fair way to compare these unequally sized groups is by comparing their means (averages), not their top values. Of course, in this example, that would be 50 for both groups -- no difference!

I love this paragraph. It's succinct and uses only the simplest ideas from probability and statistics. It's the sort of statistics that I would hope our university students learn in their general education stats course. While learning a little math, students can also learn about an application that helps us understand something important in the world.

The experiment described is also simple enough for beginning programmers to code up. Over the years, I've used problems like this with intro programming students in Pascal, Java, and Python, and with students learning Scheme or Racket who need some problems to practice on. I don't know whether learning science supports my goal, but I hope that this sort of problem (with suitable discussion) can do double duty for learners: learn a little programming, and learn something important about the world.
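
As a sketch of what students might write, here is the experiment in a few lines of Python (the function names are mine), which lands close to the averages quoted above:

```python
import random

def best_of(group_size):
    """Top score in a group whose members each draw uniformly from 1..100."""
    return max(random.randint(1, 100) for _ in range(group_size))

def experiment(trials=100_000):
    """Average top score for a group of 10 vs. a group of 2."""
    random.seed(42)  # fixed seed so runs are reproducible
    group_a = sum(best_of(10) for _ in range(trials)) / trials
    group_b = sum(best_of(2) for _ in range(trials)) / trials
    return group_a, group_b

# experiment() returns values near (91.4, 67.2), matching the quoted averages,
# even though every individual draw comes from the same distribution.
```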

With educational opportunities like this available to us, we really should be able to turn out graduates who have a decent understanding of why so many of our naive conclusions about the world are wrong. Are we putting these opportunities to good use?


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Teaching and Learning

September 18, 2020 2:50 PM

If You Want to Create Greatness, Encourage Everyone

I read two passages in the last few days that echo one another. First, I read this from Wallace Shawn, in a Paris Review interview:

But my God, without writers, humanity might be trapped in a swamp of idiotic, unchanging provincial clichés. Yes, there are writers who merely reinforce people's complacency, but a writer like Rachel Carson inspired the activism of millions, and writers like Lady Murasaki, Milton, and Joyce have reordered people's brains! And for any writers to exist at all, there must surely be a tradition of writing. Maybe in order for one valuable writer to exist, there must be a hundred others who aren't valuable at all, but it isn't possible at any given moment for anyone to be sure who the valuable one is.

Then, in his response to Marc Andreessen's "It's Time to Build", Tanner Greer writes:

To consistently create brilliant poets, you need a society awash in mediocre, even tawdry poetry. Brilliant minds will find their way towards poem writing when poem writing and poem reading is the thing that people do.

I once blogged briefly about The Art of Fear, which tells the story of an artist sketching hundreds of roosters, which laid the foundation for creating a single sketch for his client. For us as individuals, this means that "talent is rarely distinguishable, over the long run, from perseverance and lots of hard work." As Richard Gabriel often says, "Talent determines only how fast you get good, not how good you get". Volume creates the conditions under which quality can appear.

Shawn and Greer remind us that the same dynamic applies at the scale of a culture. A community that produces many writers has a better chance to produce great writers. When the community values writers, it increases the chances that someone will do the work necessary to become a writer. The best way to produce a lot of writers is to have a tradition that welcomes, even encourages, all members of the community to write, even if they aren't great.

The same applies to other forms of achievement, too. In particular, I think it applies to programming.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 22, 2020 4:09 PM

Learning Something You Thought You Already Knew

Sandi Metz's latest newsletter is about the heuristic not to name a class after the design pattern it implements. Actually, it's about a case in which Metz wanted to name a class after the pattern it implements in her code and then realized what she had done. She decided that she either needed to have a better reason for doing it than "because it just felt right" or she needed to practice what she preaches to the rest of us. What followed was some deep thinking about what makes the rule a good one to follow and her efforts to put her conclusions in writing for the benefit of her readers.

I identify with Metz's sense of discomfort at breaking a rule when it feels right, and with her need to step back and understand the rule at a deeper level. Between her setup and her explanation, she writes:

I've built a newsletter around this rule not only because I believe that it's useful, but also because my initial attempts to explain it exposed deep holes in my understanding. This was a revelation. Had I not been writing a book, I might have hand-waved around these gaps in my knowledge forever.

People sometimes say, "If you really want to understand something, teach it to others." Metz's story is a great example of why this is really true. I mean, sure, you can learn any new area and then benefit from explaining it to someone else. Processing knowledge and putting it in your own words helps to consolidate knowledge at the surface. But the real learning comes when you find yourself in a situation where you realize there's something you've taken for granted for months or for years, something you thought you knew, but suddenly you sense a deep hole lying under the surface of that supposed understanding. "I just know breaking the rule is the right thing to do here, but... but..."

I've been teaching long enough to have had this experience many times in many courses, covering many areas of knowledge. It can be terrifying, at least momentarily. The temptation to wave my hands and hurry past a student's question is enormous. To learn from teaching in these moments requires humility, self-awareness, and a willingness to think and work until you break through to that deeper understanding. Learning from these moments is what sets the best teachers and writers apart from the rest of us.

As you might guess from Metz's reaction to her conundrum, she's a pretty good teacher and writer. The story in the newsletter is from the new edition of her book "99 Bottles of OOP", which is now available. I enjoyed the first edition of "99 Bottles" and found it useful in my own teaching. It sounds like the second edition will be more than a cleanup; it will have a few twists that make it a better book.

I'm teaching our database systems course for the first time ever this fall. This is a brand new prep for me: I've never taught a database course before, anywhere. There are so many holes in my understanding, places where I've internalized good practices but don't grok them in the way an expert does. I hope I have enough humility and self-awareness this semester to do my students right.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

July 01, 2020 3:19 PM

Feeling Unstuck Amid the Pandemic

Rands recently wrote about his work-from-home routine. I love the idea of walking around a large wooded yard while doing audio meetings... One of his reasons for feeling so at ease struck a chord with me:

Everyone desperately wants to return to normality. I am a professional optimist, but we are not returning to normal. Ever. This is a different forever situation, and the sooner we realize that and start to plan accordingly, the sooner we will feel unstuck.

I have written or spoken a variation of this advice so many times over my fifteen years as department head, most often in the context of state funding and our university budget.

Almost every year for my first decade as head, we faced a flat or reduced budget, and every time several university colleagues expressed a desire to ride the storm out: make temporary changes to how we operate and wait for our budgets to return to normal. This was usually accompanied by a wistful desire that we could somehow persuade legislators of our deep, abiding value and thus convince them to allocate more dollars to the university or, failing that, that new legislators in some future legislature would have different priorities.

Needless to say, the good old days never returned, and our budget remained on a downward slide that began in the late 1990s. This particular form of optimism was really avoidance of reality, and it led to many people living in a state of disappointment and discomfort for years. Fortunately, over the last five or ten years, most everyone has come to realize that what we have now is normal and has begun to plan accordingly. It is psychologically powerful to accept reality and begin acting with agency.

As for the changes brought on by the pandemic, I must admit that I am undecided about how much of what has changed over the last few months will be the normal way of the university going forward.

My department colleagues and I have been discussing how the need for separation among students in the classroom affects how we teach. Our campus doesn't have enough big rooms for everyone to move each class into a room with twice the capacity, so most of us are looking at ways to teach hybrid classes, with only half of our students in the classroom with us on any given day. This makes most of us sad and even a little depressed: how can we teach our courses as well as we always have in the past when new constraints don't allow us to do what we have optimized our teaching to do?

I have started thinking of the coming year in terms of hill climbing, an old idea from AI. After years of hard work and practice, most of us are at a local maximum in our teaching. The pandemic has disoriented us by dropping us at a random point in the environment. The downside of change in position is that we are no longer at our locally-optimal point for teaching our courses. The upside is that we get to search again under new conditions. Perhaps we can find a new local maximum, perhaps even one higher than our old max. If not, at least we have conducted a valuable experiment under trying conditions and can use what we learn going forward.

This analogy helps me approach my new course with more positive energy. A couple of my colleagues tell me it has helped them, too.

As many others have noted, the COVID-19 crisis has accelerated a few changes that were already taking place in our universities, in particular in the use of digital technology to engage students and to replace older processes. Of the other changes we've seen, some will certainly stick, but I'm not sure anyone really knows which ones. Part of the key to living with the uncertainty is not to tie ourselves too closely to what we did before.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading, Teaching and Learning

June 17, 2020 3:53 PM

Doing Concatenative Programming in a Spreadsheet

a humble spreadsheet cell

It's been a long time since I was excited by a new piece of software the way I was excited by Loglo, Avi Bryant's new creation. Loglo is "LOGO for the Glowforge", an experimental programming environment for creating SVG images. That's not a problem I need to solve, but the way Loglo works drew me in immediately. It consists of a stack programming language and a set of primitives for describing vector graphics, integrated into a spreadsheet interface. It's the use of a stack language to program a spreadsheet that excites me so much.

Actually, it's the reverse relationship that really excites me: using a spreadsheet to build and visualize a stack-based program. Long-time readers know that I am interested in this style of programming (see Summer of Joy for a post from last year) and sometimes introduce it in my programming languages course. Students understand small examples easily enough, but they usually find it hard to grok larger programs and to fully appreciate how typing in such a language can work. How might Loglo help?

In Loglo, a cell can refer to the values produced by other cells in the familiar spreadsheet way, with an absolute address such as "a1" or "f2". But Loglo cells have two other ways to refer to other cells' values. First, any cell can access the value produced by the cell to its left implicitly, because Loglo leaves the result of a cell's computation sitting on top of the stack. Second, a cell can access the value produced by the cell above it by using the special variable "^". These last two features strike me as a useful way for programmers to see their computations grow over time, which can be an even more powerful mode of interaction for beginners who are learning this programming style.

Stack-oriented programming of this sort is concatenative: programs are created by juxtaposing other programs, with a stack of values implicitly available to every operator. Loglo uses the stack as leverage to enable programmers to build images incrementally, cell by cell and row by row, referring to values on the stack as well as to predecessor cells. The programmer can see in a cell the value produced by a cumulative bit of code that includes new code in the cell itself. Reading Bryant's description of programming in Loglo, it's easy to see how this can be helpful when building images. I think my students might find it helpful when learning how to write concatenative programs or learning how types and programs work in a concatenative language.

For example, here is a concatenative program that works in Loglo as well as other stack-based languages such as Forth and Joy:

 2 3 + 5 * 2 + 6 / 3 /

Loglo tells us that it computes the value 1.5:

a stack program in Loglo

This program consists of eleven tokens, each of which is a program in its own right. More interestingly, we can partition this program into smaller units by taking any subsequences of the program:

 2 3 + 5 *   2 + 6 /   3 /
 ---------   -------   ---

These are the programs in cells A1, B1, and C1 of our spreadsheet. The first computes 25, the second uses that value to compute 4.5, and the third uses the 4.5 to compute 1.5. Notice that the programs in cells B1 and C1 require an extra value to do their jobs. They are like functions of one argument. Rather than pass an argument to the function, Loglo allows it to read a value from the stack, produced by the cell to its left.
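To make this cell-by-cell evaluation concrete, here is a minimal sketch in Python of how a left-to-right stack evaluation of the three cells might work. The helper name `eval_cell` is my own invention for illustration; Loglo's actual implementation surely differs.

```python
# Sketch of evaluating concatenative cell programs, where each cell
# starts from the stack its left neighbor left behind.

def eval_cell(tokens, stack=None):
    """Evaluate a space-separated stack program, starting from an
    optional inherited stack, and return the resulting stack."""
    stack = list(stack or [])
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for tok in tokens.split():
        if tok in ops:
            b = stack.pop()          # top of stack is the right operand
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))  # literals push themselves
    return stack

# Cells A1, B1, and C1: each continues from its left neighbor's stack.
a1 = eval_cell("2 3 + 5 *")        # [25.0]
b1 = eval_cell("2 + 6 /", a1)      # [4.5]
c1 = eval_cell("3 /", b1)          # [1.5]
```

Note how B1 and C1 are not closed expressions: each pops a value it never pushed, which is exactly why they behave like functions of one argument.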

a partial function in Loglo

By making the intermediate results visible to the programmer, this interface might help programmers better see how pieces of a concatenative program work and learn what the type of a program fragment such as 2 + 6 / (in cell B1 above) or 3 / is. Allowing locally-relative references on a new row will, as Avi points out, enable an incremental programming style in which the programmer uses a transformation computed in one cell as the source for a parameterized version of the transformation in the cell below. This can give the novice concatenative programmer an interactive experience more supportive than the usual REPL. And Loglo is a spreadsheet, so changes in one cell percolate throughout the sheet on each update!

Am I the only one who thinks this could be a really cool environment for programmers to learn and practice this style of programming?

Teaching concatenative programming isn't a primary task in my courses, so I've never taken the time to focus on a pedagogical environment for the style. I'm grateful to Avi for demonstrating a spreadsheet model for stack programs and stimulating me to think more about it.

For now, I'll play with Loglo as much as time permits and think more about its use, or use of a tool like it, in my courses. There are a couple of features I'll have to get used to. For one, it seems that a cell can access only one item left on the stack by its left neighbor, which limits the kinds of partial functions we can write into cells. Another is that named functions such as rotate push themselves onto the stack by default and thus require a ! to apply them, whereas operators such as + evaluate by default and thus require quotation in a {} block to defer execution. (I have an academic's fondness for overarching simplicity.) Fortunately, these are the sorts of features one gets used to whenever learning a new language. They are part of the fun.

Thinking beyond Loglo, I can imagine implementing an IDE like this for my students that provides features that Loglo's use cases don't require. For example, it would be cool to enable the programmer to ctrl-click on a cell to see the type of the program it contains, as well as an option to see the cumulative type along the row or built on a cell referenced from above. There is much fun to be had here.

To me, one sign of a really interesting project is how many tangential ideas flow out of it. For me, Loglo is teeming with ideas, and I'm not even in its target demographic. So, kudos to Avi!

Now, back to administrivia and that database course...


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

June 11, 2020 1:02 PM

Persistence Wins, Even For Someone Like You

There's value to going into a field that you find difficult to grasp, as long as you're willing to be persistent. Even better, others can benefit from your persistence, too.

In an old essay, James Propp notes that working in a field where you lack intuition can "impart a useful freedom from prejudice". Even better...

... there's value in going into a field that you find difficult to grasp, as long as you're willing to be really persistent, because if you find a different way to think about things, something that works even for someone like you, chances are that other people will find it useful too.

This reminded me of a passage in Bob Nystrom's post about his new book, Crafting Interpreters. Nystrom took a long time to finish the book in large part because he wanted the interpreter at the end of each chapter to compile and run, while at the same time growing into the interpreter discussed in the next chapter. But that wasn't the only reason:

I made this problem harder for myself because of the meta-goal I had. One reason I didn't get into languages until later in my career was because I was intimidated by the reputation compilers have as being only for hardcore computer science wizard types. I'm a college dropout, so I felt I wasn't smart enough, or at least wasn't educated enough to hack it. Eventually I discovered that those barriers existed only in my mind and that anyone can learn this.

Some students avoid my compilers course because they assume it must be difficult, or because friends said they found it difficult. Even though they are CS majors, they think of themselves as average programmers, not "hardcore computer science wizard types". But regardless of the caliber of the student at the time they start the course, the best predictor of success in writing a working compiler is persistence. The students who plug away, working regularly throughout the two-week stages and across the entire project, are usually the ones who finish successfully.

One of my great pleasures as a prof is seeing the pride in the faces of students who demo a working compiler at the end of the semester, especially in the faces of the students who began the course concerned that they couldn't hack it.

As Propp points out in his essay, this sort of persistence can pay off for others, too. When you have to work hard to grasp an idea or to make something, you sometimes find a different way to think about things, and this can help others who are struggling. One of my jobs as a teacher is to help students understand new ideas and use new techniques. That job is usually made easier when I've had to work persistently to understand the idea myself, or to find a better way to help the students who teach me the ways in which they struggle.

In Nystrom's case, his hard work to master a field he didn't grasp immediately pays off for his readers. I've been following the growth of Crafting Interpreters over time, reading chapters in depth whenever I was able. Those chapters were uniformly easy to read, easy to follow, and entertaining. They have me thinking about ways to teach my own course differently, which is probably the highest praise I can give as a teacher. Now I need to go back and read the entire book and learn some more.

Teaching well enough that students grasp what they thought was not graspable and do what they thought was not doable is a constant goal, rarely achieved. It's always a work in progress. I have to keep plugging away.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

May 22, 2020 3:34 PM

What Good Can Come From All This?

Jerry Seinfeld:

"What am I really sick of?" is where innovation begins.

Steve Wozniak:

For a lot of entrepreneurs, they see something and they say, "I have to have this," and that will start them building their own.

Morgan Housel:

Necessity is the mother of invention, so our willingness to solve problems is about to surge.

A lot of people are facing a lot of different stresses right now, with the prospect that many of those stresses will continue on into the foreseeable future. For instance, I know a lot of CS faculty who are looking at online instruction and remote learning much more carefully now that they may be doing it again in the fall. Many of us have some things to learn, and some real problems need to be solved.

"What am I really sick of?" can turn the dial up on our willingness to solve problems that have been lingering in the background for a while. Let's hope that some good can come from the disruption.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

May 18, 2020 4:10 PM

Not Receiving Negative Feedback Makes It Hard to Improve

As a financial writer for Forbes and the Wall Street Journal, Jason Zweig has received a lot of letters from readers, many of them forcefully suggesting that his columns could be better. In this interview, he speaks calmly about processing negative feedback:

... when you get negative feedback, you have to sort it. You can't just take all negative feedback and throw it in the "I'm not reading this" bucket. You have to go through it. And you have to say: "Is this person, who says I'm wrong, right or wrong?" Because if the person says you're wrong, and is wrong, then how does that hurt you? But if the person who says you're wrong is right, it's devastating to you if you don't listen.
It's not about winning. It's about learning.

I know profs who refuse to read their student assessments because it's too hard emotionally to deal with negative feedback. I understand the temptation... There are semesters when thirty-nine reviews are positive, yet the one negative review lodges itself in my brain and won't let go. Even after decades of teaching, it can be hard to shake off those comments immediately. And when there are many comments that are "constructive" or just plain negative, well, reading the assessments can be really demoralizing.

But as Zweig says, closing myself off to the feedback is ultimately a losing proposition. Sometimes I assess a comment and decide that it's off the mark, or the result of singular event or experience and therefore isn't worth sweating over. But what about when the reviewer is right? Or when there's a kernel of truth in an otherwise unnecessarily personal comment? Ignoring the truth doesn't do me any good. I want to get better.

I did not receive student assessments this spring. When the university moved to remote instruction suddenly, the administration and faculty agreed to suspend assessments for the semester, with the idea that teaching and learning would both be a bit bumpier than usual under the extreme conditions. Just before the last week of the term, they agreed to bring optional assessments back purely for the prof's personal use, but by then I had decided to pass. Some of my students provided some helpful feedback, including constructive criticism, all on their own.

I'll actually miss reading my assessments this month, if not the sudden spike in my blood pressure that sometimes accompanies them. Students are usually helpful and surprisingly generous in their evaluations, and I still usually learn a lot from the positive ones and the negative ones alike.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

May 16, 2020 11:33 AM

Woz Sez: Take Some Time To Get To Know Your Program Better

Steve Wozniak in Founders at Work:

If you can just quickly whip something out and it's done, maybe it's time, once in a while, to think and think and think, "Can I make it better than it is, a little superior?" What that does is not necessarily make the product better in the end, but it brings you closer to the product, and your own head understands it better. Your neurons have gone through the code you wrote, or the circuits you designed, have gone through it more times, and it's just a little more solidly in your head and once in a while you'll wake up and say, "Oh my God, I just realized a bug that's in there, something I hadn't thought of."

Or, if you have to modify something, or add something new, you can do it very quickly when it's all in your head. You don't have to pull out the listing and find out where and maybe make a mistake. You don't make as many mistakes.

Many programmers know this feeling, of having a program in your head and moving in sync with it. When programs are small, it's easy for me to hold a program in my head. As it grows larger and spreads out over many functions, classes, and files, I have to live with it over an extended period of time. Taking one of Woz's dives into the code just to work on it is a powerful way to refresh the feeling.

Beginning programmers have to learn this feeling, I think, and we should help them. In the beginning, my students know what it's like to have a program in their head all at once. The programs are small, and the new programmer has to pay attention to every detail. As programs grow, it becomes harder for them. They work locally to make little bits of code work, and suddenly they have a program that doesn't fit naturally in their head. But they don't have the luxury of time to do what Woz suggests, because they are on to the next reading assignment, the next homework, the next class across campus.

One of the many reasons I like project courses such as my compiler course is that students live with the same code for an entire semester. Sure, they finish the scanner and move on to the parser, and then on to a type checker and a code generator, but they use their initial stages every day and live with the decisions they made. It's not uncommon for a student to tell me 1/2 or 3/4 of the way through the course, "I was looking at our scanner (or parser) the other day, and now I understand why we were having that problem I told you about. I'm going to fix it over Thanksgiving break."

In my programming languages course, we close with a three-week, three-assignment project building an interpreter. I love when a student submitting Part 3 says, "Hey, I just noticed that some parts of Part 2 could be better. I hope you don't mind that I improved that, too." Um, no. No, I don't mind at all. They get it.

It's easy to shortchange our students with too many small projects. I like most of my courses to have at least some code grow over the course of the semester. Students may not have the luxury of a lot of free time, but at least they work in proximity to their old code for a while. Serendipity may strike if we create the right conditions for it.

I have already begun to think about how I can foster this in my new course this fall. I hope to design it into the course upfront.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 14, 2020 2:35 PM

Teaching a New Course in the Fall

I blogged weekly about the sudden switch to remote instruction starting in late March, but only for three weeks. I stopped mostly because my sense of disorientation had disappeared. Teaching class over Zoom started to feel more normal, and my students and I got back into the usual rhythm. A few struggled in ways that affected their learning and performance, and a smaller few thrived. My experience was mostly okay: some parts of my work suffered as I learned how to use tools effectively, but not having as many external restrictions on my schedule offset the negatives. Grades are in, summer break begins, and at least some things are right with the world.

Fall offers something new for me to learn. My fall compilers course had a lower enrollment than usual and, given the university's current financial situation, I had to cancel it. This worked out fine for the department, though, as one of our adjunct instructors asked to take next year off in order to deal with changes in his professional and personal lives. So there was a professor in need of a course, and a course in need of a professor: Database Systems.

Databases is one of the few non-systems CS courses that I have never taught as a prof or as a grad student. It's an interesting course, mixing theory and design with a lot of practical skills that students and employers prize. In this regard, it's a lot like our OO design and programming course in Java, only with a bit more visible theory. I'm psyched to give it a go. At the very least, I should be able to practice some of those marketable skills and learn some of the newer tools involved.

As with all new preps, this course has me looking for ideas. I'm aware of a few of the standard texts, though I am hoping to find a good open-source text online, or a set of online materials out of which to assemble the readings my students will need for the semester. I'm going to be scouting for all the other materials I need to teach the course as well, including examples, homework assignments, and projects. I tend to write a lot of my own stuff, but I also like to learn from good courses and good examples already out there. Not being a database specialist, I am keen to see what specialists think is important, beyond what we find in traditional textbooks.

Then there is the design of the course itself. Teaching a course I've never taught before means not having an old course design to fall back on. This means more work, of course, but is a big win for a curious mind. Sometimes, it's fun to start from scratch. I have always found instructional design fascinating, much like any kind of design, and building a new course leaves open a lot of doors for me to learn and to practice some new skills.

COVID-19 is a big part of why I am teaching this course, but it is not done with us. We still do not know what fall semester will look like, other than to assume that it won't look like a normal semester. Will we be on campus all semester, online all semester, or a mix of both? If we do hold instruction on campus, as most universities are hoping to do, social distancing requirements will require us to do some things differently, such as meeting students in shifts every other day. This uncertainty suggests that I should design a course that depends less on synchronous, twice-weekly, face-to-face direct instruction and more on ... what?

I have a lot to learn about teaching this way. My university is expanding its professional development offerings this summer and, in addition to diving deep into databases and SQL, I'll be learning some new ways to design a course. It's exciting but also means a bit more teaching prep than usual for my summer.

This is the first entirely new prep I've taught in a while. I think the most recent was the fall of 2009, when I taught Software Engineering for the first and only time. Looking back at the course website reminds me that I created this delightful logo for the course:

course logo for Software Engineering, created using YUML

So, off to work I go. I could sure use your help. Do you know of model database courses that I should know about? What database concepts and skills should CS graduates in 2021 know? What tools should they be able to use? What has changed in the world since I last took database courses that must be reflected in today's database course? Do you know of a good online textbook for the course, or a print book that my students would find useful and be willing to pay for?

If you have any ideas to share, feel free to email me or contact me on Twitter. If not for me, do it for my students!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

May 10, 2020 2:16 PM

Software Can Make You Feel Alive, or It Can Make You Feel Dead

This week I read one of Craig Mod's old essays and found a great line, one that everyone who writes programs for other people should keep front of mind:

When it comes to software that people live in all day long, a 3% increase in fun should not be dismissed.

Working hard to squeeze a bit more speed out of a program, or to create even a marginally better interaction experience, can make a huge difference to someone who uses that program every day. Some people spend most of their professional days inside one or two pieces of software, which accentuates further the human value of Mod's three percent. With shelter-in-place and work-from-home the norm for so many people these days, we face a secondary crisis of software that is no fun.

I was probably more sensitive than usual to Mod's sentiment when I read it... This week I used Blackboard for the first time, at least my first extended usage. The problem is not Blackboard, of course; I imagine that most commercial learning management systems are little fun to use. (What a depressing phrase "commercial learning management system" is.) And it's not just LMSes. We use various PeopleSoft "campus solutions" to run the academic, administrative, and financial operations on our campus. I always feel a little of my life drain away whenever I spend an hour or three clicking around and waiting inside this large and mostly functional labyrinth.

It says a lot that my first thought after downloading my final exams on Friday morning was, "I don't have to login to Blackboard again for a good long while. At least I have that going for me."

I had never used our LMS until this week, and then only to create a final exam that I could reliably time after being forced into remote teaching with little warning. If we are in this situation again in the fall, I plan to have an alternative solution in place. The programmer in me always feels an urge to roll my own when I encounter substandard software. Writing an entire LMS is not one of my life goals, so I'll just write the piece I need. That's more my style anyway.

Later the same morning, I saw this spirit of writing a better program in a context that made me even happier. The Friday of finals week is my department's biennial undergrad research day, when students present the results of their semester- or year-long projects. Rather than give up the tradition because we couldn't gather together in the usual way, we used Zoom. One student talked about alternative techniques for doing parallel programming in Python, and another presented empirical analysis of using IR beacons for localization of FIRST Lego League robots. Fun stuff.

The third presentation of the morning was by a CS major with a history minor, who had observed how history profs' lectures are limited by the tools they had available. The solution? Write better presentation software!

As I watched this talk, I was proud of the student, whom I'd had in class and gotten to know a bit. But I was also proud of whatever influence our program had on his design skills, programming skills, and thinking. This project, I thought, is a demonstration of one thing every CS student should learn: We can make the tools we want to use.

This talk also taught me something non-technical: Every CS research talk should include maps of Italy from the 1300s. Don't dismiss 3% increases in fun wherever they can be made.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

April 29, 2020 4:45 PM

Teaching Class is Like Groundhog Day

As I closed down my remote class session yesterday, I felt a familiar feeling... That session can be better! I've been using variations of this session, slowly improving it, for a few years now, and I always leave the classroom thinking, "Wait 'til next time." I'm eager to improve it now and iterate, trying it again tomorrow. Alas, tomorrow is another day, with another class session all its own. Next time is next year.

Bill Murray, suspicious, in Groundhog Day

I feel this way about most of the sessions in most of my courses. Yesterday, it occurred to me that this must be what Phil Connors feels like in Groundhog Day.

Phil wakes up every day in the same place and time as yesterday. Part way through the film, he decides to start improving himself. Yet the next morning, there he is again, in the same place and time as yesterday, a little better but still flawed, in need of improvement.

Next spring, when I sit down to prep for this session, it will be like hitting that alarm clock and hearing Sonny and Cher all over again.

I told my wife about my revelation and my longing: If only I could teach this session 10,000 times, I'd finally get it right. You know what she said?

"Think how your students must feel. If they could do that session 10,000 times, they'd feel like they really got it, too."

My wife is wise. My students and I are in this together, getting a little better each day, we hope, but rarely feeling like we've figured all that much out. I'll keep plugging away, Phil Connors as CS prof. "Okay, campers, rise and shine..." Hopefully, today I'll be less wrong than yesterday. I wish my students the same.

Who knows, one of these days, maybe I'll leave a session and feel as Phil does in the last scene of the film, when he wakes up next to his colleague Rita. "Do you know what today is? Today is tomorrow. It happened. You're here." I'm not holding my breath, though.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

April 19, 2020 4:10 PM

I Was a Library Kid, Too

Early in this Paris Review interview, Ray Bradbury says, "A conglomerate heap of trash, that's what I am." I smiled, because that's what I feel like sometimes, both culturally and academically. Later he confessed something that sealed my sense of kinship with him:

I am a librarian. I discovered me in the library. I went to find me in the library. Before I fell in love with libraries, I was just a six-year-old boy. The library fueled all of my curiosities, from dinosaurs to ancient Egypt.

the bookshelf in my home office

I was a library kid, too. I owned a few books, but I looked forward to every chance we had to go to the library. My grade school had books in every classroom, and my teachers shared their personal books with those of us who so clearly loved to read. Eventually my mom took me and my siblings to the Marion County public library to get a library card, and the world of books available seemed limitless. When I got to high school, I spent free time before and after classes wandering the stacks, discovering science fiction, Vonnegut and Kafka and Voltaire, science and history. The school librarian got used to finding me in the aisles at times. She became as much a friend as any high school teacher could. So many of my friends have shelves and shelves of books; they talk about their addiction to Amazon and independent bookstores. But almost all of the books I have at home fit in a single bookshelf (at right). One of them is Bradbury's The Martian Chronicles, which I discovered in high school.

I do have a small chess library on another shelf across the room and a few sports books, most from childhood, piled nearby. I tried to get rid of the sports books once, in a fit of Marie Kondo-esque de-cluttering, but I just couldn't. Even I have an attachment to the books I own. Having so few, perhaps my attraction is even stronger than it might otherwise be, subject to some cosmic inverse square law of bibliophilia.

At my office, I do have two walls full of books, mostly textbooks accumulated over my years as a professor. When I retire, though, I'll keep only one bookcase full of those -- a few undergrad CS texts, yes, but mostly books I purchased because they meant something to me. Gödel, Escher, Bach. Metamagical Themas. Models of My Life. A few books about AI. These are books that helped me find me.

After high school, I was fortunate to spend a decade in college as an undergraduate and grad student. I would not trade those years for anything; I learned a lot, made friends with whom I remain close, and grew up. Bradbury, though, continued his life as an autodidact, going to the public library three nights a week for a decade, until he got married.

So I graduated from the library, when I was twenty-seven. I discovered that the library is the real school.

Even though I spent a decade as a student in college and now am a university prof, the library remains my second home. I rarely buy books to this day; I don't remember my last purchase. The university library is next to my office building, and I make frequent trips over in the afternoons. They give me a break from work and a chance to pick up my next read. I usually spend a lot more time there than necessary, wandering the stacks and exploring. I guess I'm still a library kid.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

April 14, 2020 3:53 PM

Going Online, Three-Plus Weeks In

First the good news: after three more sessions, I am less despondent than I was after Week Two. I have taken my own advice from Week One and lowered expectations. After teaching for so many years and developing a decent sense of my strengths and weaknesses in the classroom, this move took me out of my usual groove. It was easy to forget in the rush of the moment not to expect perfection, and not being able to interact with students in the same way created different emotions about the class sessions. Now that I have my balance back, things feel a bit more normal.

Part of what changed things for me was watching the videos I made of our class sessions. I quickly realized that these sessions are no worse than my usual classes! It may be harder for students to pay attention to the video screen for seventy-five minutes in the same way they might pay attention in the classroom, but my actual presentation isn't all that different. That was comforting, even as I saw that the videos aren't perfect.

Another thing that comforted me: the problems with my Zoom sessions are largely the same as the problems with my classroom sessions. I can fall into the habit of talking too much and too long unless I carefully design exercises and opportunities for students to take charge. The reduced interaction channel magnifies this problem slightly, but it doesn't create any new problems in principle. This, too, was comforting.

For example, I notice that some in-class exercises work better than others. I've always known this from my in-person course sessions, but our limited interaction bandwidth really exposes problems that are at the wrong level for where the students are at the moment (for me, usually too difficult, though occasionally too easy). I am also remembering the value of the right hint at the right moment and the value of students interacting and sharing with one another. Improving on these elements of my remote course should result in corresponding improvements when we move back to campus.

I have noticed one new problem: I tend to lose track of time more easily when working with the class in Zoom, which leads me to run short on time at the end of the period. In the classroom, I glance at a big analog clock on the wall at the back of the room and use that to manage my time. My laptop has a digital clock in the corner, but it doesn't seem to help me as much. I think this is a function of two parameters: First, the clock on my computer is less obtrusive, so I don't look at it as often. Second, it is a digital clock. I feel the geometry of analog time viscerally in a way that I don't with digital time. Maybe I'm just old, or maybe we all experience analog clocks in a more physical way.

I do think that watching my lectures can help me improve my teaching. After Week One, I wondered, "In what ways can going online, even for only a month and a half, improve my course and materials?" How might this experience make me a better teacher or lead to better online materials? I have often heard advice that I should record my lectures so that I could watch them with an experienced colleague, with an eye to identifying strengths to build on and weaknesses to improve on. Even without a colleague to help, this few weeks of recording gives me a library of sessions I can use for self-diagnosis and improvement.

Maybe this experience will have a few positives to counterbalance its obvious negatives.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

April 03, 2020 2:05 PM

Going Online, Two Weeks In

Earlier in the week I read this article by Jason Fried and circled these sentences:

Ultimately this major upheaval is an opportunity. This is a chance for your company, your teams, and individuals to learn a new skill. Working remotely is a skill.

After two weeks of the great COVID-19 school-from-home adventure, I very much believe that teaching remotely is a skill -- one I do not yet have.

Last week I shared a few thoughts about my first week teaching online. I've stopped thinking of this as "teaching online", though, because my course was not designed as an online course. It was pushed online, like so many courses everywhere, in a state of emergency. The result is a course that has been optimized over many years for face-to-face synchronous interaction being taken by students mostly asynchronously, without much face-to-face interaction.

My primary emotion after my second week teaching remotely is disappointment. Switching to this new mode of instruction was simple but not easy, at least not easy to do well. It was simple because I already have so much textual material available online, including detailed lecture notes. For students who can read the class notes, do exercises and homework problems, ask a few questions, and learn on their own, things seem to be going fine so far. (I'll know more as more work comes in for evaluation.) But approximately half of my students need more, and I have not yet figured out how best to serve them.

I've now hosted four class sessions via Zoom for students who were available at class time and interested or motivated enough to show up. With the exception of one student, they all keep their video turned off, which offers me little or no visual feedback. Everyone keeps their audio turned off except when speaking, which is great for reducing the distraction of noises from everybody's homes and keyboards. The result, though, is an eerie silence that leaves me feeling as if I'm talking to myself in a big empty room. As I told my students on Thursday, it's a bit unnerving.

With so little perceptual stimulus, time seems to pass quickly, at least for me. It's easy to talk for far too long. I'm spared a bit by the fact that my classes intersperse short exposition by me with interactive work: I set them a problem, they work for a while, and then we debrief. This sort of session, though, requires the students to be engaged and up to date with their reading and homework. That's hard for me to expect of them under the best of conditions, let alone now when they are dispersed and learning to cope with new distractions.

After a few hours of trying to present material online, I very much believe that this activity requires skill and experience, and at this point I have little of either. I have a lot of work to do. I hope to make Fried proud and use this as an opportunity to learn new skills.

I had expected by this point to have created more short videos that I could use to augment my lecture notes, for students who have no choice but to work on the course whenever they are free. Time has been in short supply, though, with everything on campus changing all at once. Perhaps if I can make a few more videos and flip the course a bit more, I will both serve those students better and find a path toward using our old class time better for the students who show up then and deserve a positive learning experience.

At the level of nuts and bolts, I have already begun to learn some of the details of Panopto, Zoom, and our e-learning system. I like learning new tools, though the complications of learning them all at once and trying to use them at the same time make me feel like a tech newbie. I guess that never changes.

The good news is that other parts of the remote work experience are going better, sometimes even well. Many administrative meetings work fine on Zoom, because they are mostly about people sharing information and reporting out. Most don't really need to be meetings anyway, and participating via Zoom is an improvement over gathering in a big room. As one administrator said at the end of one meeting recently, "This was the first council meeting online and maybe the shortest council meeting ever." I call that a success.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

March 27, 2020 2:31 PM

Going Online

It's been a long time since I've written here. In fits and starts over a couple of weeks, we went from "Wow, coronavirus is in the news a lot" to "all teaching is online through the end of summer, and Eugene has a lot of work to do". The new work included rearranging faculty candidates who were not allowed to fly to campus, looking to cover courses for a faculty member who would be taking leave before the end of the semester, and -- of course -- moving my own course online.

This is new territory for me. I've never taught online, and I've never made videos for my students before. One bit of good news is that I have a lot of textual material online already: detailed lecture notes, exercises, homework assignments, every line of code we write or examine in class, all my exams, and a surprising amount of bonus material and extra reading. (Few students take advantage of the extra stuff, but I can't bring myself not to provide links to it for the few who do.)

Over break, I made my first video, walking through a problem from our last homework assignment before break. I recorded three takes before having something I was willing to show students. It is rough and wordy, but not too bad for a first effort (well, v 1.03).

We now have a week of classes in the books. This is an unusual week in my course even under typical conditions. It consists of a two-day in-class exercise on which students worked in groups of two or three to write a lexical addresser for a little language. With students segregated in their own apartments and hometowns, the intended group collaboration disappeared. Most of our students are still adjusting to the new learning conditions and figuring out how to take all of their courses online at home.
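For readers unfamiliar with the exercise: a lexical addresser replaces each variable reference with its lexical address, that is, how many enclosing scopes away its declaration sits and its position within that scope's parameter list. The course uses Racket, but a rough sketch of the idea, written in Python for a made-up little language of variables, lambdas, and applications, might look like this:

```python
def lexical_address(expr, env=()):
    """Annotate each variable reference with (depth, position).

    expr is a string (a variable reference), a tuple
    ('lambda', params, body), or a tuple of subexpressions (an application).
    env is a tuple of parameter tuples, innermost scope first.
    """
    if isinstance(expr, str):
        for depth, params in enumerate(env):
            if expr in params:
                return (expr, depth, params.index(expr))
        return (expr, 'free')          # declared in no enclosing scope
    if expr[0] == 'lambda':
        _, params, body = expr
        return ('lambda', params, lexical_address(body, (params,) + env))
    return tuple(lexical_address(sub, env) for sub in expr)

# ((lambda (x y) ((lambda (z) (x z)) y)) in the made-up syntax:
program = ('lambda', ('x', 'y'),
           (('lambda', ('z',), ('x', 'z')), 'y'))
```

Here `lexical_address(program)` tags the inner `x` as (depth 1, position 0) and `y` as (depth 0, position 1), since `x` and `y` are declared one scope out from where `x` is referenced.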

On Tuesday, I made one short video to review briefly the ideas about variable declarations and references that we had studied before break. After they watched the video, I turned them loose with my usual session notes, modified so that the hints I would typically offer in class and document in a web page for later review were hidden away on their own web pages. This way, students could work on the problem and only click on a hint when they needed a nudge. A few students showed up in a Zoom room and worked during the assigned class time, so I was able to interact with them in real time and answer questions. On Thursday, we re-oriented ourselves to the problem, and I live-coded a solution. I recorded the session for students to watch later, at their leisure.

I've already learned a bit, only one week in. The temptation to write a long piece of code and talk through the process is great. The students who braved Thursday's session let me know that they tuned out after a while, only to bring their attention back later, somewhat out of synch with my presentation. That's not much different from what happens in a long lecture, of course, but when I'm standing in a room with the students, I'm more likely to notice their boredom or distraction and step out of the lecture sooner. Talking to a few students in Zoom, especially when they have their video off, made it too easy to talk and talk without a break. Over many years of teaching face-to-face, I've learned how to mix the class up a bit and not get into long ruts. I'll have to be more consciously aware of those lessons as I do demos over Zoom.

I'm still a little nervous about the rest of the semester, but also a little excited. In what ways can going online, even for only a month and a half, improve my course and materials? Putting session notes up over the years has already forced me to be more careful in how I write explanations and more detailed in the code I post. Now, with little or no face-to-face interaction (most students will come to a session after it ends), how will I need to improve my presentation and the materials I provide? How much will my inexperience making videos limit the level of quality I can achieve over six weeks?

For now, I plan to keep things as simple as possible. I'm not sure what technology my students have available to them at home, or the conditions under which they must work. I am still using email and my course website (a very simple static site that would feel at home in 2000) as my main ways to present material, and have added Zoom as a way to interact with students in lieu of regular class time. I will make a few short videos to demonstrate ideas, to augment the written material and give students another sensory input with which to learn. I don't expect the videos to be great, or even good, but I hope they help my students. I also hope that, with practice, I get better at making them.

Sadly, I won't be in a classroom with this group of students again this year. I'll miss that. They are good people, and I enjoy working with them. This is a strange way to end a school year, and a reminder of how fortunate I am to get to teach.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 29, 2020 11:19 AM

Programming Healthily

... or at least differently. From Inside Google's Efforts to Engineer Its Food for Healthiness:

So, instead of merely changing the food, Bakker changed the foodscape, ensuring that nearly every option at Google is healthy -- or at least healthyish -- and that the options that weren't stayed out of sight and out of mind.

This is how I've been thinking lately about teaching functional programming to my students, who have experience in procedural Python and object-oriented Java. As with deciding what we eat, how we think about problems and programs is mostly unconscious, a mixture of habit and culture. It is also something intimate and individual, perhaps especially for relative beginners who have only recently begun to know a language well enough to solve problems with some facility. But even for us old hands, the languages and mental tools we use to write programs can become personal. That makes change, and growth, hard.

In my last few offerings of our programming languages course, in which students learn Racket and functional style, I've been trying to change the programming landscape my students see in small ways whenever I can. Here are a few things I've done:

  • I've shrunk the set of primitive functions that we use in the first few weeks. A larger set of tools offers more power and reach, but it also can distract. I'm hoping that students can more quickly master the small set of tools than the larger set and thus begin to feel more accomplished sooner.

  • We work for the first few weeks on problems with less need for the tools we've moved out of sight, such as local variables and counter variables. If a program doesn't really benefit from a local variable, students will be less likely to reach for one. Instead, perhaps, they will reach for a function that can help them get closer to their goal.

  • In a similar vein, I've tried to make explicit connections between specific triggers ("We're processing a list of data, so we'll need a loop.") and new responses ("map can help us here."). We then follow up these connections with immediate and frequent practice.
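The trigger-response shift in that last item can be shown in a few lines. The course itself uses Racket, but the same move reads naturally in Python, the procedural language my students already know; the discount example is my own invention:

```python
prices = [10.0, 25.5, 40.0]

# Old response to the trigger "we're processing a list":
# an explicit loop with an accumulator variable.
discounted = []
for price in prices:
    discounted.append(price * 0.9)

# New response to the same trigger: "map can help us here."
# No accumulator, no loop counter -- just the transformation itself.
discounted_fp = list(map(lambda price: price * 0.9, prices))

assert discounted_fp == discounted
```

The point of the practice is to make the second form the reflex, so that the loop machinery stays out of sight and out of mind.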

By keeping functional tools closer at hand, I'm trying to make it easier for these tools to become the new habit. I've also noticed how the way I speak about problems and problem solving can subtly shape how students approach problems, so I'm trying to change a few of my own habits, too.

It's hard for me to do all these things, but it's harder still when I'm not thinking about them. This feels like progress.

So far students seem to be responding well, but it will be a while before I feel confident that these changes in the course are really helping students. I don't want to displace procedural or object-oriented thinking from their minds, but rather to give them a new tool set they can bring out when they need or want to.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

February 23, 2020 1:14 PM

One Way I'm Like Tom Waits

At one point in this conversation between Elvis Costello and Tom Waits, the two songwriters discuss some of the difficulties they face in the creative process. Costello remarks that he sometimes writes notes that he can't sing, only to disappoint himself when he gets into the studio. Waits commiserates:

Anything that has to travel all the way down from your cerebellum to your fingertips, there's a lot of things that can happen on the journey. Sometimes I'll listen to records, my own stuff, and I think god, the original idea for this was so much better than the mutation that we arrived at. What I'm trying to do now is get what comes and keep it alive. It's like carrying water in your hands. I want to keep it all, and sometimes by the time you get to the studio you have nothing.

This is something I notice all the time when I'm teaching. I'll have an idea for class somewhere: walking home, riding the exercise bike, reading something. My brain dives into the process of making the idea real: telling a story, writing code, explaining an idea in relation to things we've done in class before. Then comes deployment. We get into the classroom and... it feels flat. On rare occasions the new idea bombs, but most often it just seems not quite right. It doesn't feel like what I had in my mind when I first had the idea, and I'm not sure how we ended up feeling the way we feel in class. The idea was so perfect.

Teaching ideas are abstractions. Teaching is concrete. It involves other humans. Their learning is what's important, and sometimes the idea doesn't make the connection intended by the teacher. The classroom is where reality sets in.

There is good news. Sometimes a disappointing or failed idea can be salvaged by analyzing the experience and redesigning the session. But most of the time it never feels as perfect as it did in my head at the moment of conception. Part of the art of becoming a teacher is making friends with this fact and moving forward.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 18, 2020 4:00 PM

Programming Feels Like Home

I saw Robin Sloan's An App Can Be a Home-Cooked Meal floating around Twitter a few days back. It really is quite good; give it a read if you haven't already. This passage captures a lot of the essay's spirit in only a few words:

The exhortation "learn to code!" has its foundations in market value. "Learn to code" is suggested as a way up, a way out. "Learn to code" offers economic leverage, a squirt of power. "Learn to code" goes on your resume.
But let's substitute a different phrase: "learn to cook." People don't only learn to cook so they can become chefs. Some do! But far more people learn to cook so they can eat better, or more affordably, or in a specific way. Or because they want to carry on a tradition. Sometimes they learn just because they're bored! Or even because -- get this -- they love spending time with the person who's teaching them.

Sloan expresses better than I ever have an idea that I blog about every so often. Why should people learn to program? Certainly it offers a path to economic gain, and that's why a lot of students study computer science in college, whether as a major, a minor, or a high-leverage class or two. There is nothing wrong with that. It is for many a way up, a way out.

But for some of us, there is more than money in programming. It gives you a certain power over the data and tools you use. I write here occasionally about how a small script or a relatively small program makes my life so much easier, and I feel bad for colleagues who are stuck doing drudge work that I jump past. Occasionally I'll try to share my code, to lighten someone else's burden, but most of the time there is such a mismatch between the worlds we live in that they are happier to keep plugging along. I can't say that I blame them. Still, if only they could program and use tools that enabled them to improve their work environments...

But... There is more still. From the early days of this blog, I've been open with you all:

Here's the thing. I like to write code.

One of the things that students like about my classes is that I love what I do, and they are welcome to join me on the journey. Just today a student in my Programming Languages course drifted back to my office with me after class, where we ended up talking for half an hour and sketching code on a whiteboard as we deconstructed a vocabulary choice he made on our latest homework assignment. I could sense this student's own love of programming, and it raised my spirits. It makes me more excited for the rest of the semester.

I've had people come up to me at conferences to say that the reason they read my blog is that they like to see someone enjoying programming as much as they do. Many of them share links with their students as if to say, "See, we are not alone." I look forward to days when I will be able to write in this vein more often.

Sloan reminds us that programming can be -- is -- more than a line on a resume. It is something that everyone can do, and want to do, for a lot of different reasons. It would be great if programming "were marbled deeply into domesticity and comfort, nerdiness and curiosity, health and love" in the way that cooking is. That is what makes Computing for All really worth doing.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Software Development, Teaching and Learning

January 22, 2020 3:54 PM

The Roots of TDD -- from 1957

In 1957, Dan McCracken published Digital Computer Programming, perhaps the first book on the new art of programming. His book shows that the roots of extreme programming run deep. In this passage, McCracken encourages both the writing of tests before the writing of code and the involvement of the customer in the software development process:

The first attack on the checkout problem may be made before coding is begun. In order to fully ascertain the accuracy of the answers, it is necessary to have a hand-calculated check case with which to compare the answers which will later be calculated by the machine. This means that stored program machines are never used for a true one-shot problem. There must always be an element of iteration to make it pay. The hand calculations can be done at any point during programming. Frequently, however, computers are operated by computing experts to prepare the problems as a service for engineers or scientists. In these cases it is highly desirable that the "customer" prepare the check case, largely because logical errors and misunderstandings between the programmer and customer may be pointed out by such procedure. If the customer is to prepare the test solution is best for him to start well in advance of actual checkout, since for any sizable problem it will take several days or weeks to calculate the test.

I don't have a copy of this book, but I've read a couple of other early books by McCracken, including one of his Fortran books for engineers and scientists. He was a good writer and teacher.

I had the great fortune to meet Dan at an NSF workshop in Clemson, South Carolina, back in the mid-1990s. We spent many hours in the evening talking shop and watching basketball on TV. (Dan was cheering his New York Knicks on in the NBA finals, and he was happy to learn that I had been a Knicks and Walt Frazier fan in the 1970s.) He was a pioneer of programming and programming education who was willing to share his experience with a young CS prof who was trying to figure out how to teach. We kept in touch by email thereafter. It was an honor to call him a friend.

You can find the above quotation in A History of Test-Driven Development (TDD), as Told in Quotes, by Rob Myers. That post includes several good quotes that Myers had to cut from his upcoming book on TDD. "Of course. How else could you program?"


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

January 10, 2020 2:25 PM

Questions

I recently read an interview with documentary filmmaker Errol Morris, who has unorthodox opinions about documentaries and how to make them. In particular, he prefers to build his films out of extended interviews with a single subject. These interviews give him all the source material he needs, because they aren't about questions and answers. They are about stories:

First of all, I think all questions are more or less rhetorical questions. No one wants their questions answered. They just want to state their question. And, in answering the question, the person never wants to answer the question. They just want to talk.

Morris isn't asking questions; he is stating them. His subjects are not answering questions; they are simply talking.

(Think about this the next time you're listening to an interview with a politician or candidate for office...)

At first, I was attracted to the sentiment in this paragraph. Then I became disillusioned with what I took to be its cynicism. Now, though, after a week or so, I am again enamored with its insight. How many of the questions I ask of software clients and professional colleagues are really statements of a position? How many of their answers are disconnected from the essential element of my questions? Even when these responses are disconnected, they communicate a lot to me, if only I listen. My clients and colleagues are often telling me exactly what they want me to know.

This dynamic is present surprisingly often when I work with students at the university, too. I need to listen carefully when students don't seem to be answering my question. Sometimes it's because they have misinterpreted the question, and I need to ask differently. Sometimes it's because they are telling me what they want me to know, irrespective of the question.

And when my questions aren't really questions, but statements or some other speech act... well, I know I have some work to do.

In case you find Morris's view on interviews cynical and would prefer to ponder the new year with greater hope, I'll leave you with a more ambiguous quote about questions:

There are years that ask questions, and years that answer.

That's from Their Eyes Were Watching God, by Zora Neale Hurston. In hindsight, it may be true or false for any given year. As a way to frame the coming months, though, it may be useful.

I hope that 2020 brings you the answers you seek, or the questions.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

January 06, 2020 3:13 PM

A Writing Game

I recently started reading posts in the archives of Jason Zweig's blog. He writes about finance for a living but blogs more widely, including quite a bit about writing itself. An article called On Writing Better: Sharpening Your Tools challenges writers to look at each word they write as "an alien object":

As the great Viennese journalist Karl Kraus wrote, "The closer one looks at a word, the farther away it moves." Your goal should be to treat every word you write as an alien object: You should be able to look at it and say, What is that doing here? Why did I use that word instead of a better one? What am I trying to say here? How can I get to where I'm going if I use such stale and lifeless words?

My mind immediately turned this into a writing game, an exercise that puts the idea into practice. Take any piece of writing.

  1. Choose a random word in the document.
  2. Change the word -- or delete it! -- in a way that improves the text.
  3. Go to 1.

Play the game for a fixed number of rounds or for a fixed period of time. A devilish alternative is to play until you get so frustrated with your writing that you can't continue. You could then judge your maturity as a writer by how long you can play in good spirits.

We could even automate the mechanics of the game by writing a program that chooses a random word in a document for us. Every time we save the document after a change, it jumps to a new word.
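A minimal version of that helper program is easy to sketch. The function name and the definition of a "word" here are my own choices, not part of any existing tool:

```python
import random
import re

def pick_random_word(text, rng=random):
    """Choose one word from text at random; return (index, word)."""
    words = re.findall(r"[A-Za-z']+", text)
    index = rng.randrange(len(words))
    return index, words[index]

draft = "As with most first ideas, this one can probably be improved."
index, word = pick_random_word(draft)   # a different word on each run
```

An editor hook could call this on every save, jumping the cursor to the chosen word for the next round of the game.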

As with most first ideas, this one can probably be improved. Perhaps we should bias word selection toward words whose replacement or deletion are most likely to improve our writing. Changing "the" or "to" doesn't offer the same payoff as changing a lazy verb or deleting an abstract adverb. Or does it? I have a lot of room to improve as a writer; maybe fixing some "the"s and "to"s is exactly what I need to do. The Three Bears pattern suggests that we might learn something by tackling the extreme form of the challenge and seeing where it leads us.

Changing or deleting a single word can improve a piece of text, but there is bigger payoff available, if we consider the selected word in context. The best way to eliminate many vague nouns is to turn them back into verbs, where they act with vigor. To do that, we will have to change the structure of the sentence, and maybe the surrounding sentences. That forces us to think even more deeply about the text than changing a lone word. It also creates more words for us to fix in following rounds!

I like programming challenges of this sort. A writing challenge that constrains me in arbitrary ways might be just what I need to take time more often to improve my work. It might help me identify and break some bad habits along the way. Maybe I'll give this a try and report back. If you try it, please let me know the results!

And no, I did not play the game with this post. It can surely be improved.

Postscript. After drafting this post, I came across another article by Zweig that proposes just such a challenge for the narrower case of abstract adverbs:

The only way to see if a word is indispensable is to eliminate it and see whether you miss it. Try this exercise yourself:
  • Take any sentence containing "actually" or "literally" or any other abstract adverb, written by anyone ever.
  • Delete that adverb.
  • See if the sentence loses one iota of force or meaning.
  • I'd be amazed if it does (if so, please let me know).

We can specialize the writing game to focus on adverbs, another part of speech, or almost any writing weakness. The possibilities...


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns, Teaching and Learning

December 20, 2019 1:45 PM

More Adventures in Constrained Programming: Elo Predictions

I like tennis. The Tennis Abstract blog helps me keep up with the game and indulge my love of sports stats at the same time. An entry earlier this month gave a gentle introduction to Elo ratings as they are used for professional tennis:

One of the main purposes of any rating system is to predict the outcome of matches--something that Elo does better than most others, including the ATP and WTA rankings. The only input necessary to make a prediction is the difference between two players' ratings, which you can then plug into the following formula:

1 - (1 / (1 + (10 ^ (difference / 400))))

This formula always makes me smile. The first computer program I ever wrote because I really wanted to was a program to compute Elo ratings for my high school chess club. Over the years I've come back to Elo ratings occasionally whenever I had an itch to dabble in a new language or even an old favorite. It's like a personal kata of variable scope.
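In a language with floating-point arithmetic, the formula above is a few lines of code. A sketch in Python (the function name and example ratings are mine):

```python
def elo_win_probability(rating_a, rating_b):
    """Probability that player A beats player B, per the Elo formula above."""
    difference = rating_a - rating_b
    return 1 - (1 / (1 + 10 ** (difference / 400)))

# Equal ratings give a coin flip; a 400-point edge is about a 10-to-1 favorite.
elo_win_probability(1500, 1500)   # 0.5
elo_win_probability(2000, 1600)   # ≈ 0.909
```

The fun of the exercise described next is that the students' source language offers none of these conveniences.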

I read the Tennis Abstract piece this week as my students were finishing up their compilers for the semester and as I was beginning to think of break. Playful me wondered how I might implement the prediction formula in my students' source language. It is a simple functional language with only two data types, integers and booleans; it has no loops, no local variables, no assignment statements, and no sequences. In another old post, I referred to this sort of language as akin to an integer assembly language. And, heaven help me, I love to program in integer assembly language.

To compute even this simple formula in Klein, I need to think in terms of fractions. The only division operator performs integer division, so 1/x for any x gives 0. I also need to think carefully about how to implement the exponentiation 10 ^ (difference / 400). The difference between two players' ratings is usually less than 400 and, in any case, almost never divisible by 400. So my program will have to take an arbitrary root of 10.

Which root? Well, I can use our gcd() function (implemented using Euclid's algorithm, of course) to reduce diff/400 to its lowest terms, n/d, and then compute the dth root of 10^n. Now, how to take the dth root of an integer for an arbitrary integer d?

Fortunately, my students and I have written code like this in various integer assembly languages over the years. For instance, we have a SQRT function that uses binary search to home in on the integer closest to the square root of a given integer. Even better, one semester a student implemented a square root program that uses Newton's method:

   x[n+1] = x[n] - f(x[n]) / f'(x[n])

That's just what I need! I can create a more general version of the function that uses Newton's method to compute an arbitrary root of an arbitrary base. Rather than work with floating-point numbers, I will implement the function to take its guess as a fraction, represented as two integers: a numerator and a denominator.
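Outside of Klein, the plan can be sketched in Python using integer arithmetic only. The function names are mine, and this sketch takes a shortcut: it truncates to the integer part of the root rather than carrying a numerator/denominator pair through Newton's iterations as described above:

```python
from math import gcd   # in Klein this would be a hand-written Euclid's algorithm

def int_nth_root(x, d):
    """Largest integer r with r**d <= x, via Newton's method on integers."""
    if x < 2:
        return x
    r = x   # any starting guess at or above the true root works
    while True:
        r_next = ((d - 1) * r + x // r ** (d - 1)) // d
        if r_next >= r:        # the sequence decreases until it settles
            return r
        r = r_next

def ten_to_the(difference):
    """Integer part of 10 ** (difference / 400), for difference >= 0:
    reduce difference/400 to lowest terms n/d, then take the dth root of 10**n."""
    g = gcd(difference, 400)
    n, d = difference // g, 400 // g
    return int_nth_root(10 ** n, d)
```

For a 200-point rating difference this yields 3, the integer part of 10^(1/2) ≈ 3.162; the fractional representation in the post recovers the precision this shortcut throws away.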

This may seem like a lot of work, but that's what working in such a simple programming language is like. If I want my students' compilers to produce assembly language that predicts the result of a professional tennis match, I have to do the work.

This morning, I read a review of Francis Su's new popular math book, Mathematics for Human Flourishing. It reminds us that math isn't about rules and formulas:

Real math is a quest driven by curiosity and wonder. It requires creativity, aesthetic sensibilities, a penchant for mystery, and courage in the face of the unknown.

Writing my Elo rating program in Klein doesn't involve much mystery, and it requires no courage at all. It does, however, require some creativity to program under the severe constraints of a very simple language. And it's very much true that my little programming diversion is driven by curiosity and wonder. It's fun to explore ideas in a small space using limited tools. What will I find along the way? I'll surely make design choices that reflect my personal aesthetic sensibilities as well as the pragmatic sensibilities of a virtual machine that knows only integers and booleans.

As I've said before, I love to program.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

December 12, 2019 3:20 PM

A Semester's Worth of Technical Debt

I've been thinking a lot about technical debt this week. My student teams are in Week 13 of their compiler project and getting ready to submit their final systems. They've been living with their code long enough now to appreciate Ward Cunningham's original idea of technical debt: the distance between their understanding and the understanding embodied in their systems.

... it was important to me that we accumulate the learnings we did about the application over time by modifying the program to look as if we had known what we were doing all along and to look as if it had been easy to do in Smalltalk.

Actually, I think they are experiencing two gulfs: one relates to their understanding of the domain, compilers, and the other to their understanding of how to build software. Obviously, they are just learning about compilers and how to build one, so their content knowledge has grown rapidly while they've been programming. But they are also novice software engineers. They are really just learning how to build a big software system of any kind. Their knowledge of project management, communication, and their tools has also grown rapidly in parallel with building their system.

They have learned a lot about both content and process this semester. Several of them wish they had time to refactor -- to pay back the debt they accumulated honestly along the way -- but university courses have to end. Perhaps one or two of them will do what one or two students in most past semesters have done: put some of their own time into their compilers after the course ends, to see if they can modify the program to look as if they had known what they were doing all along, and to look as if it had been easy to do in Java or Python. There's a lot of satisfaction to be found at the end of that path, if they have the time and curiosity to take it.

One team leader tried to bridge the gulf in real time over the last couple of weeks: He was so unhappy with the gap between his team's code and their understanding that he did a complete rewrite of the first five stages of their compiler. This team learned what every team learns about rewrites and large-scale refactoring: they spend time that could otherwise have been spent on new development. In a time-boxed course, this doesn't always work well. That said, though, they will likely end up with a solid compiler -- as well as a lot of new knowledge about how to build software.

Being a prof is fun in many ways. One is getting to watch students learn how to build something while they are building it, and coming out on the other side with new code and new understanding.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

December 04, 2019 2:42 PM

Make Your Code Say What You Say When You Describe It

Brian Marick recently retweeted this old tweet from Ron Jeffries:

You: Explain this code to me, please.
They: blah blah blah.
You: Show me where the code says that.
They: <silence>
You: Let's make it say that.

I find this strategy quite helpful when writing my own code. If I can't explain any bit of code to myself clearly and succinctly, then I can take a step back and work on fixing my understanding before trying to fix the code. Once I understand, I'm a big fan of creating functions or methods whose names convey their meaning.
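Here is a hypothetical sketch in Python of what that looks like in practice. The domain, the `order` dictionary, and the function names are all my own invention, not from Jeffries' tweet; the point is only the shape of the refactoring:

```python
# Before: when we explain this code, we say "loyal customers with
# large orders get a discount" -- but the code doesn't say that.
def total_before(order):
    if order["years_active"] >= 2 and order["subtotal"] > 100:
        return order["subtotal"] * 0.9
    return order["subtotal"]

# After: extract the condition into a function whose name matches
# the explanation. Now the code says what we say.
def is_loyal_large_order(order):
    return order["years_active"] >= 2 and order["subtotal"] > 100

def total_after(order):
    if is_loyal_large_order(order):
        return order["subtotal"] * 0.9
    return order["subtotal"]
```

The behavior is unchanged; only the distance between the explanation and the code has shrunk.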

This is also a really handy strategy for me in my teaching. As a prof, I spend a fair amount of time explaining code I've written to students. The act of explaining a piece of code, whether written or spoken, often points me toward ways I can make the program better. If I find myself explaining the same piece of code to several students over time, I know the code can probably be better. So I try to fix it.

I also use a gentler variation of Jeffries' approach when working directly with students and their code. I try whenever I can to help my students learn how to write better programs. It can be tempting to start lecturing them on ways that their program could be better, but unsolicited advice of this sort rarely finds a happy place to land in their memory. Asking questions can be more effective, because questions can lead to a conversation in which students figure some things out on their own. Asking general questions usually isn't helpful, though, because students may not have enough experience to connect the general idea to the details of their program.

So: I find it helpful to ask a student to explain their code to me. Often they'll give me a beautiful answer, short and clear, that stands in obvious contrast to the code we are looking at. This discrepancy leads to a natural follow-up question: How might we change the code so that it says that? The student can take the lead in improving their own programs, guided by me with bits of experience they haven't had yet.

Of course, sometimes the student's answer is complex or rambles off into silence. That's a cue to both of us that they don't really understand yet what they are trying to do. We can take a step back and help them fix their understanding -- of the problem or of the programming technique -- before trying to fix the code itself.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

December 02, 2019 11:41 AM

XP as a Long-Term Learning Strategy

I recently read Anne-Laure Le Cunff's Interleaving: Rethink The Way You Learn. Le Cunff explains why interleaving -- "the process of mixing the practice of several related skills together" -- is more effective for long-term learning than blocked practice, in which students practice a single skill until they learn it and then move on to the next skill. Interleaving forces the brain to retrieve different problem-solving strategies more frequently and under different circumstances, which reinforces neural connections and improves learning.

To illustrate the distinction between interleaving and blocked practice, Le Cunff uses this image:

interleaving versus blocked practice

When I saw that diagram, I thought immediately of Extreme Programming. In particular, I thought of a diagram I once saw that distinguished XP from more traditional ways of building software in terms of how quickly it moved through the steps of the development life cycle. That image looked something like this:

XP interleaves the stages of the software development life cycle

If design is good, why not do it all the time? If testing is good, why not do it all the time, too?

I don't think that the similarity between these two images is an accident. It reflects one of XP's most important, if sometimes underappreciated, benefits: By interleaving short spurts of analysis, design, implementation, and testing, programmers strengthen their understanding of both the problem and the evolving code base. They develop stronger long-term memory associations with all phases of the project. Improved learning enables them to perform even more effectively deeper in the project, when these associations are more firmly in place.

Le Cunff offers a caveat to interleaved learning that also applies to XP: "Because the way it works benefits mostly our long-term retention, interleaving doesn't have the best immediate results." The benefits of XP, including more effective learning, accrue to teams that persist. Teams new to XP are sometimes frustrated by the small steps and seemingly narrow focus of their decisions. With a bit more experience, they become comfortable with the rhythm of development and see that their code base is more supple. They also begin to benefit from the more effective learning that interleaved practice provides.

~~~~

Image 1: This image comes from Le Cunff's article, linked above. It is by Anne-Laure Le Cunff, copyright Ness Labs 2019, and reproduced here with permission.

Image 2: I don't remember where I saw the image I hold in my memory, and a quick search through Extreme Programming Explained and Google's archives did not turn up anything like it. So I made my own. It is licensed CC BY-SA 4.0.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

November 10, 2019 11:06 AM

Three of the Hundred Falsehoods CS Students Believe

Jan Schaumann recently posted a list of one hundred Falsehoods CS Students (Still) Believe Upon Graduating. There is much good fun here, especially for a prof who tries to help CS students get ready for the world, and a fair amount of truth, too. I will limit my brief comments to three items that have been on my mind recently even before reading this list.

18. 'Email' and 'Gmail' are synonymous.

CS grads are users, too, and their use of Gmail, and systems modeled after it, contributes to the truths of modern email: top posting all the time, with never a thought of trimming anything. Two-line messages sitting atop icebergs of text which will never be read again, only stored in the seemingly infinite space given us for free.

Of course, some of our grads end up in corporate IT, managing email as merely one tool in a suite of lowest-common-denominator tools for corporate communication. The idea of email as a stream of text that can, for the most part, be read as such, is gone -- let alone the idea that a mail stream can be processed by programs such as procmail to great benefit.

I realize that most users don't ask for anything more than a simple Gmail filter to manage their mail experience, but I really wish it were easier for more users with programming skills to put those skills to good use. Alas, that does not fit into the corporate IT model, and not even the CS grads running many of these IT operations realize or care what is possible.
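To make the idea concrete, here is a small sketch of the kind of procmail-style processing I mean, using Python's standard email library. The addresses, subjects, and folder names are all made up for illustration:

```python
from email.message import EmailMessage

def make_message(sender, subject):
    """Build a minimal message with From and Subject headers."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["Subject"] = subject
    return msg

def route(msg):
    """A procmail-style rule set: route a message by its headers."""
    if "mailing-list@example.com" in msg.get("From", ""):
        return "lists"
    if msg.get("Subject", "").startswith("[URGENT]"):
        return "priority"
    return "inbox"

stream = [
    make_message("mailing-list@example.com", "Digest #42"),
    make_message("boss@example.com", "[URGENT] Server down"),
    make_message("friend@example.com", "Lunch?"),
]
folders = [route(m) for m in stream]
# The mail stream is just data, routed by rules we control.
```

A few lines of code we own, versus a filter form in someone else's web interface.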

38. Employers care about which courses they took.

It's the time of year when students register for spring semester courses, so I've been meeting with a lot of students. (Twice as many as usual, covering for a colleague on sabbatical.) It's interesting to encounter students on both ends of the continuum between not caring at all what courses they take and caring a bit too much. The former are so incurious I wonder how they fell into the major at all. The latter are often more curious but sometimes are captive to the idea that they must, must, must take a specific course, even if it meets at a time they can't attend or is full by the time they register.

I do my best to help them get into these courses, either this spring or in a later semester, but I also try to do a little teaching along the way. Students will learn useful and important things in just about every course they take, if they want to, and taking any particular course does not have to be either the beginning or the end of their learning of that topic. And if the reason they think they must take a particular course is because future employers will care, they are going to be surprised. Most of the employers who interview our students are looking for well-rounded CS grads who have a solid foundation in the discipline and who can learn new things as needed.

90. Two people with a CS degree will have a very similar background and shared experience/knowledge.

This falsehood operates in a similar space to #38, but at the global level I reached at the end of my previous paragraph. Even students who take most of the same courses together will usually end their four years in the program with very different knowledge and experiences. Students connect with different things in each course, and these idiosyncratic memories build on one another in subsequent courses. They participate in different extracurricular activities and work different part-time jobs, both of which shape and augment what they learn in class.

In the course of advising students over two, three, or four years, I try to help them see that their studies and other experiences are helping them to become interesting people who know more than they realize and who are individuals, different in some respects from all their classmates. They will be able to present themselves to future employers in ways that distinguish them from everyone else. That's often the key to getting the job they desire now, or perhaps one they didn't even realize they were preparing for while exploring new ideas and building their skillsets.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 27, 2019 10:23 AM

Making Something That Is Part Of Who You Are

The narrator in Rachel Cusk's "Transit" relates a story told to her by Pavel, the Polish builder who is helping to renovate her flat. Pavel left Poland for London to make money after falling out with his father, a builder for whom he worked. The event that prompted his departure was a reaction to a reaction. Pavel had designed and built a home for his family. After finishing, he showed it to his father. His father didn't like it, and said so. Pavel chose to leave at that moment.

'All my life,' he said, 'he criticise. He criticise my work, my idea, he say he don't like the way I talk -- even he criticise my wife and my children. But when he criticise my house' -- Pavel pursed his lips in a smile -- 'then I think, okay, is enough.'

I generally try to separate myself from the code and prose I write. Such distance is good for the soul, which does not need to be buffeted by criticism, whether external or internal, of the things I've created. It is also good for the work itself, which is free to be changed without being anchored to my identity.

Fortunately, I came out of home and school with a decent sense that I could be proud of the things I create without conflating the work with who I am. Participating in writers' workshops at PLoP conferences early in my career taught me some new tools for hearing feedback objectively and focusing on the work. Those same tools help me to give feedback better. I use them in an effort to help my students develop as people, writers and programmers independent of the code and prose they write.

Sometimes, though, we make things that are expressions of ourselves. They carry part of us in their words, in what they say to the world and how they say it. Pavel's house is such a creation. He made everything: the floors, the doors, and the roof; even the beds his children slept in. His father had criticized his work, his ideas, his family before. But criticizing the house he had dreamed and built -- that was enough. Cusk doesn't give the reader a sense that this criticism was a last straw; it was, in a very real way, the only straw that mattered.

I think there are people in this world who would like just once in their lives to make something that is so much a part of who they are that they feel about it as Pavel does his house. They wish to do so despite, or perhaps because of, the sharp line it would draw through the center of life.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Teaching and Learning

August 30, 2019 4:26 PM

Unknown Knowns and Explanation-Based Learning

Like me, you probably see references to this classic quote from Donald Rumsfeld all the time:

There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know.

I recently ran across it again in an old Epsilon Theory post that uses it to frame the difference between decision making under risk (the known unknowns) and decision-making under uncertainty (the unknown unknowns). It's a good read.

Seeing the passage again for the umpteenth time, it occurred to me that no one ever seems to talk about the fourth quadrant in that grid: the unknown knowns. A quick web search turns up a few articles such as this one, which consider unknown knowns from the perspective of others in a community: maybe there are other people who know something that you do not. But my curiosity was focused on the first-person perspective that Rumsfeld was implying. As a knower, what does it mean for something to be an unknown known?

My first thought was that this combination might not be all that useful in the real world, such as the investing context that Ben Hunt writes about in Epsilon Theory. Perhaps it doesn't make any sense to think about things you don't know that you know.

As a student of AI, though, I suddenly made an odd connection ... to explanation-based learning. As I described in a blog post twelve years ago:

Back when I taught Artificial Intelligence every year, I used to relate a story from Russell and Norvig when talking about the role knowledge plays in how an agent can learn. Here is the quote that was my inspiration, from Pages 687-688 of their 2nd edition:

Sometimes one leaps to general conclusions after only one observation. Gary Larson once drew a cartoon in which a bespectacled caveman, Zog, is roasting his lizard on the end of a pointed stick. He is watched by an amazed crowd of his less intellectual contemporaries, who have been using their bare hands to hold their victuals over the fire. This enlightening experience is enough to convince the watchers of a general principle of painless cooking.

I continued to use this story long after I had moved on from this textbook, because it is a wonderful example of explanation-based learning.

In a mathematical sense, explanation-based learning isn't learning at all. The new fact that the program learns follows directly from other facts and inference rules already in its database. In EBL, the program constructs a proof of a new fact and adds the fact to its database, so that it is ready-at-hand the next time it needs it. The program has compiled a new fact, but in principle it doesn't know anything more than it did before, because it could always have deduced that fact from things it already knows.
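A toy sketch of that process in Python may help. The facts and rule here are invented for illustration (a nod to Zog the caveman), but the mechanism is the essence of EBL: prove a goal from what's already in the database, then compile the result in as a new fact:

```python
# A fact base and rules of the form (premises, conclusion).
facts = {"pointed-stick", "fire"}
rules = [
    ({"pointed-stick", "fire"}, "painless-cooking"),
]

def prove(goal):
    """Backward-chain: a goal holds if it is a known fact
    or if some rule's premises can all be proved."""
    if goal in facts:
        return True
    return any(all(prove(p) for p in premises)
               for premises, conclusion in rules
               if conclusion == goal)

def ebl_learn(goal):
    """If the goal is provable, add it to the fact base so the
    chain of inference never has to be repeated."""
    if prove(goal):
        facts.add(goal)

# "painless-cooking" starts as an unknown known: derivable, but
# not yet stored. One experience compiles it into the database.
ebl_learn("painless-cooking")
```

Afterward, looking up the fact is a constant-time set membership test rather than a search through the rule base.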

As I read the Epsilon Theory article, it struck me that EBL helps a learner to surface unknown knowns by using specific experiences as triggers to combine knowledge it already has into a piece of knowledge that is usable immediately without having to repeat the (perhaps costly) chain of inference ever again. Deducing deep truths every time you need them can indeed be quite costly, as anyone who has ever looked at the complexity of search in logical inference systems can tell you.

When I begin to think about unknown knowns in this way, perhaps it does make sense in some real-world scenarios to think about things you don't know you know. If I can figure it all out, maybe I can finally make my fortune in the stock market.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

August 25, 2019 10:00 AM

Learn the Basics, Struggle a Bit, Then Ask Questions

Earlier this week, there was a meme on Twitter where people gave one-line advice to young students as they stepped onto a college campus as first-years, to help them enjoy and benefit from their college years. I didn't have anything clever or new to say, so I didn't join in, but something I read this morning triggered a bit of old wisdom that I wish more students would try to live out. In tweet-size form, it might be: "Learn the basics, struggle a bit, then ask questions." Here's the blog-size version.

In Tyler Cowen's conversation with Google economist Hal Varian, Cowen asks about a piece of advice Varian had once given to graduate students: "Don't look at the literature too soon." Is that still good advice, and why? Yes, Varian replied...

VARIAN: Because if you look at the literature, you'll see this completely worked-out problem, and you'll be captured by that person's viewpoint. Whereas, if you flounder around a little bit yourself, who knows? You might come across a completely different phenomenon. Now, you do have to look at the literature. I want to emphasize that. But it's a good idea to wrestle with a problem a little bit on your own before you adopt the standard viewpoint.

Grad students are often trying to create new knowledge, so it's best for them not to lock themselves into existing ways of thinking too soon. Thus: Don't look at the literature too soon.

I work mostly with undergrads, who study in a different context than grad students. But I think that the core of Varian's advice works well for undergrads, too: Start by learning a few basic ideas in class. Then try to solve problems. Then ask questions.

Undergrads are usually trying to master foundational material, not create new knowledge, so it's tempting to want to jump straight to answers. But it's still valuable to approach the task of learning as a process of building one's own understanding of problems before seeking answers. Banging on a bunch of problems helps us to build instincts about what the important issues are and to explore the fuzzy perimeter between the basics and the open questions that will vex us after we master them. That happens best when we don't see a solution right away, when what we learned in class doesn't seem to point us directly to a solution and we have to find our own way.

But do ask questions! A common theme among students who struggle in my courses is the belief they just have to work harder or longer on a problem. Too many times I've had a student tell me "I spent an hour on each of the five homework problems." Really? My goal is for each problem to take 15 minutes or less. After half an hour, or maybe a second attempt the next day, maybe you are missing something small but important. Ask a question; maybe a little nudge can put you on the right track. Sometimes, your question will help me realize that it's the problem which is flawed and needs a tweak!

Back at the beginning of the process, too strong a belief in the ability to figure things out on one's own creates a different sort of breakdown in the learning process: It can be tempting to skip over what you read in your textbook and what you learn in class, and start trying to solve problems. "It's basic material, right? I'll figure it out." You might, but that's taking the idea to an unhealthy level. There's a difference between seeking answers too soon and trying to solve problems without the basic tools you need. Trust your profs a little bit... In class, they are trying to give you the basic tools you need to solve interesting problems.

There's nothing new here. But let's be honest; there isn't much new to be found in ways to learn. Even in the digital age, the basic tenets remain true. That's why I extol curiosity and persistence and why I'd rather be Mr. Miyagi than an answer machine. Learning will be uncomfortable. The trick is to find a way to balance the curiosity with the discomfort, the not-knowing with the receiving of answers and moving on. I wish I had great advice for how to find that balance, but I think people ultimately have to do that for themselves. We benefit by being part of a community of learners, but we each learn in our own ways and on our own time.

Actually, writing up this post has led me to a goal for myself as a teacher this year, which may be good advice for my fellow teachers: Be more explicit about my expectations of students. This is true both at the micro-level of, say, how much time to spend on homework problems before seeking help, and at the macro-level of how to approach learning. If I want students to do something, I should at least remove the barriers between what they are thinking they should do and what I would like for them to do.

So there's some advice for students and some advice for teachers. Let's enjoy the new year and learn together.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

August 02, 2019 2:48 PM

Programming is an Infinite Construction Kit

As he so often did, Marvin Minsky tells us about the beauty of programming. Kids love to play with construction sets like Legos, TinkerToys, and Erector sets. Programming provides an infinite construction kit: you never run out of parts!

In the linked essay, which was published as a preface to a 1986 book about Logo, Minsky tells several stories. One of the stories relates that once, as a small child, he built a large tower out of TinkerToys. The grownups who saw it were "terribly impressed". He inferred from their reaction that:

some adults just can't understand how you can build whatever you want, so long as you don't run out of sticks and spools.

Kids get it, though. Why do so many of us grow out of this simple understanding as we get older? Whatever its cause, this gap between children's imaginations and the imaginations of adults around them creates a new sort of problem when we give the children a programming language such as Logo or Scratch. Many kids take to these languages just as they do to Legos and TinkerToys: they're off to the races making things, limited only by their expansive imaginations. The memory on today's computers is so large that children never run out of raw material for writing programs. But adults often don't possess the vocabulary for talking with the children about their creations!

... many adults just don't have words to talk about such things -- and maybe, no procedures in their heads to help them think of them. They just do not know what to think when little kids converse about "representations" and "simulations" and "recursive procedures". Be tolerant. Adults have enough problems of their own.

Minsky thinks there are a few key ideas that everyone should know about computation. He highlights two:

Computer programs are societies. Making a big computer program is putting together little programs.

Any computer can be programmed to do anything that any other computer can do--or that any other kind of "society of processes" can do.

He explains the second using ideas pioneered by Alan Turing and long championed in the popular sphere by Douglas Hofstadter. Check out this blog post, which reflects on a talk Hofstadter gave at my university celebrating the Turing centennial.

The inability of even educated adults to appreciate computing is a symptom of a more general problem. As Minsky says toward the end of his essay, people who don't appreciate how simple things can grow into entire worlds are missing something important. If you don't understand how simple things can grow into complex systems, it's hard to understand much at all about modern science, including how quantum mechanics accounts for what we see in the world and even how evolution works.

You can usually do well by reading Minsky; this essay is a fine example of that. It comes linked to an afterword written by Alan Kay, another computer scientist with a lot to say about both the beauty of computing and its essential role in a modern understanding of the world. Check both out.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

June 20, 2019 3:51 PM

Implementing a "Read Lines" Operator in Joy

I wasn't getting any work done today on my to-do list, so I decided to write some code.

One of my learning exercises to open the Summer of Joy is to solve the term frequency problem from Crista Lopes's Exercises in Programming Style. Joy is a little like Scheme: it has a lot of cool operations, especially higher-order operators, but it doesn't have much in the way of practical level tools for basic tasks like I/O. To compute term frequencies on an arbitrary file, I need to read the file onto Joy's stack.

I played around with Joy's low-level I/O operators for a while and built a new operator called readfile, which expects the pathname for an input file on top of the stack:

    DEFINE readfile ==
        (* 1 *)  [] swap "r" fopen
        (* 2 *)  [feof not] [fgets swap swonsd] while
        (* 3 *)  fclose.

The first line leaves an empty list and an input stream object on the stack. Line 2 reads lines from the file and conses them onto the list until it reaches EOF, leaving a list of lines under the input stream object on the stack. The last line closes the stream and pops it from the stack.

This may not seem like a big deal, but I was beaming when I got it working. First of all, this is my first while in Joy, which requires two quoted programs. Second, and more meaningful to me, the loop body not only works in terms of the dip idiom I mentioned in my previous post, it even uses the higher-order swonsd operator to implement the idiom. This must be how I felt the first time I mapped an anonymous lambda over a list in Scheme.

readfile leaves a list of lines on the stack. Unfortunately, the list is in reverse order: the last line of the file is the front of the list. Besides, given that Joy is a stack-based language, I think I'd like to have the lines on the stack itself. So I noodled around some more and implemented the operator pushlist:

    DEFINE pushlist ==
        (* 1 *)  [ null not ] [ uncons ] while
        (* 2 *)  pop.

Look at me... I get one loop working, so I write another. The loop on Line 1 iterates over a list, repeatedly taking (head . tail) and pushing head and tail onto the stack in that order. Line 2 pops the empty list after the loop terminates. The result is a stack with the lines from the file in order, first line on top:

    line-n ... line-3 line-2 line-1

Put readfile and pushlist together:

    DEFINE fileToStack == readfile pushlist.

and you get fileToStack, something like Python's readlines() function, but in the spirit of Joy: the file's lines are on the stack ready to be processed.
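For readers who don't speak Joy, here is a rough Python analog of the whole pipeline, modeling Joy's stack as a list whose last element is the top. This is my own sketch for comparison, not part of the Joy exercise:

```python
import io

def file_to_stack(stream, stack=None):
    """Push a file's lines onto a stack, first line ending on top.

    readlines() plays the role of readfile's consing loop; pushing
    the reversed list plays the role of pushlist's unconsing loop."""
    if stack is None:
        stack = []
    lines = stream.readlines()
    for line in reversed(lines):
        stack.append(line)
    return stack

stack = file_to_stack(io.StringIO("line 1\nline 2\nline 3\n"))
# Top of the stack (end of the list) is the first line of the file.
```

Just as in the Joy version, consing reverses the lines once and unconsing reverses them back, so the first line of the file ends up on top.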

I'll admit that I'm pleased with myself, but I suspect that this code can be improved. Joy has a lot of dandy higher-order operators. There is probably a better way to implement pushlist and maybe even readfile. I won't be surprised if there is a more idiomatic way to implement the two that makes the two operations plug together with less rework. And I may find that I don't want to leave bare lines of text on the stack after all and would prefer having a list of lines. Learning whether I can improve the code, and how, are tasks for another day.

My next job for solving the term frequency problem is to split the lines into individual words, canonicalize them, and filter out stop words. Right now, all I know is that I have two more functions in my toolbox, I learned a little Joy, and writing some code made my day better.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

June 11, 2019 3:04 PM

Summer of Joy

"Elementary" ideas are really hard & need to be revisited
& explored & re-revisited at all levels of mathematical
sophistication. Doing so actually moves math forward.

-- James Tanton

Three summers ago, I spent a couple of weeks re-familiarizing myself with the concatenative programming language Joy and trying to go a little deeper with the style. I even wrote a few blog entries, including a few quick lessons I learned in my first week with the language. Several of those lessons hold up, but please don't look at the code linked there; it is the raw code of a beginner who doesn't yet get the idioms of the style or the language. Then other duties at work and home pulled me away, and I never made the time to get back to my studies.

my Summer of Joy folder

I have dubbed this the Summer of Joy. I can't devote the entire summer to concatenative programming, but I'm making a conscious effort to spend a couple of days each week in real study and practice. After only one week, I have created enough forward momentum that I think about problems and solutions at random times of the day, such as while walking home or making dinner. I think that's a good sign.

An even better sign is that I'm starting to grok some of the idioms of the style. Joy is different from other concatenative languages like Forth and Factor, but it shares the mindset of using stack operators effectively to shape the data a program uses. I'm finally starting to think in terms of dip, an operator that enables a program to manipulate data just below the top of the stack. As a result, a lot of my code is getting smaller and beginning to look like idiomatic Joy. When I really master dip and begin to think in terms of other "dipping" operators, I'll know I'm really on my way.

One of my goals for the summer is to write a Joy compiler from scratch that I can use as a demonstration in my fall compiler course. Right now, though, I'm still in Joy user mode and am getting the itch for a different sort of language tool... As my Joy skills get better, I find myself refactoring short programs I've written in the past. How can I be sure that I'm not breaking the code? I need unit tests!

So my first bit of tool building is to implement a simple JoyUnit. As a tentative step in this direction, I created the simplest version of RackUnit's check-equal? function possible:

    DEFINE check-equal == [i] dip i =.

This operator takes two quoted programs (a test expression and an expected result), executes them, and compares the results. For example, this test exercises a square function:

    [ 2 square ] [ 4 ] check-equal.

This is, of course, only the beginning. Next I'll add a message to display when a test fails, so that I can tell at a glance which tests have failed. Eventually I'll want my JoyUnit to support tests as objects that can be organized into suites, so that their results can be tallied, inspected, and reported on. But for now, YAGNI. With even a few simple functions like this one, I am able to run tests and keep my code clean. That's a good feeling.
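For comparison, the same idea translates naturally into Python, with thunks standing in for Joy's quoted programs. This is only a sketch under my own naming, including the failure message mentioned above:

```python
def check_equal(test, expected):
    """Run two thunks (like Joy's quoted programs) and compare
    their results, reporting any mismatch."""
    actual, want = test(), expected()
    if actual != want:
        print(f"FAIL: expected {want!r}, got {actual!r}")
        return False
    return True

def square(n):
    return n * n

# The JoyUnit example, in Python clothing.
ok = check_equal(lambda: square(2), lambda: 4)
```

Delaying evaluation with thunks mirrors Joy's quotation: the test harness, not the caller, decides when each program runs.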

To top it all off, implementing JoyUnit will force me to practice writing Joy and push me to extend my understanding while growing the set of programming tools I have at my disposal. That's another good feeling, and one that might help me keep my momentum as a busy summer moves on.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

June 10, 2019 2:04 PM

Teach, That's My Advice

In Tyler Cowen's conversation with poet Elisa New, he closes with one of his standard questions:

COWEN: Last question. You meet an 18-year-old, and this person wants in some way to be a future version of you, Elisa New, and asks you for advice. What advice do you give them?
NEW: Teach.
COWEN: Teach.
NEW: Yes, teach the young, and yes, that's the advice. Because what teaching is, is learning to converse with others. It's to experience a topic as it grows richer and richer under the attentions of a community. That's what a classroom that really works is. It's a community that's ever rewarding.

New's justification for teaching has two parts. The first struck me as central to the task of becoming a poet, or a writer of any sort: learning to converse -- to express and exchange ideas -- with others. To converse is to use words and to experience their effects, both as speaker and listener. Over my years in the classroom, I've come to appreciate this benefit of teaching. It's made me a better teacher and, if not a better writer, at least a writer more aware of the different ways in which I can express my ideas.

New's second justification captures well the central value of teaching to an academic. To teach is to experience a topic as it grows richer under the attention of a community. What a wonderful phrase!

Some people think that teaching will steal time from their work as a literary scholar, historian, or scientist. But teaching helps us to see deeper into our discipline by urging us to examine it over and over from new vantage points. Every new semester and every new student creates a new conversation for me, and these conversations remind me that there is even more to a topic than I think -- far more often than I ever expected before I became a professor. Just when I think I've mastered something, working with students helps me see something new, in a way I might never have seen it through my own study.

This exposes one of the advantages of working in a graduate program or in an ongoing research lab: building a community that has some continuity over time. Teaching at an undergraduate institution means that not as many of my students will be able to work with me and one another on the same topic over time. Even so, follow-up courses and undergrad research projects do allow us to create overlapping communities with a lifespan longer than a single semester. It simply requires a different mindset than working in a big research lab.

So I heartily echo Professor New: teach, that's my advice.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

April 16, 2019 3:40 PM

The Importance of Giving Credit in Context

From James Propp's Prof. Engel's Marvelously Improbable Machines:

Chip-firing has been rediscovered independently in three different academic communities: mathematics, physics, and computer science. However, its original discovery by Engel is in the field of math education, and I strongly feel that Engel deserves credit for having been the first to slide chips around following these sorts of rules. This isn't just for Engel's sake as an individual; it's also for the sake of the kind of work that Engel did, blending his expertise in mathematics with his experience in the classroom. We often think of mathematical sophistication as something that leads practitioners to create concepts that can only be understood by experts, but at the highest reaches of mathematical research, there's a love of clarity that sees the pinnacle of sophistication as being the achievement of hard-won simplicity in settings where before there was only complexity.

First of all, Petri nets! I encountered Petri nets for the first time in a computer architecture course, probably as a master's student, and it immediately became my favorite thing about the course. I was never much into hardware and architecture, but Petri nets showed me a connection back to graph theory, which I loved. Later, I studied how to apply temporal logic to modeling hardware and found another way to appreciate my architecture courses.

But I really love the point that Propp makes in this paragraph and the section it opens. Most people think of research and teaching as different sorts of activities, but the kind of thinking one does in one often crosses over into the other. The sophistication that researchers have and use helps us make sense of complex ideas and, at its best, helps us communicate that understanding to a wide audience, not just to researchers at the same level of sophistication. The focus that teachers put on communicating challenging ideas to relative novices can encourage us to seek new formulations for a complex idea and ways to construct more complex ideas out of the new formulations. Sometimes, that can lead to an insight we can use in research.

In recent years, my research has benefited a couple times from trying to explain and demonstrate concatenative programming, as in Forth and Joy, to my undergraduate students. These haven't been breakthroughs of the sort that Engel made with his probability machines, but they've certainly helped me grasp in new ways ideas I'd been struggling with.

Propp argues convincingly that it's important that we tell stories like Engel's and recognize that his breakthrough came as a result of his work in the classroom. This might encourage more researchers to engage as deeply with their teaching as with their research. Everyone will benefit.

Do you know any examples similar to the one Propp relates, but in the field of computer science? If so, I would love to hear about them. Drop me a line via email or Twitter.

Oh, and if you like Petri nets, probability, or fun stories about teaching, do read Propp's entire piece. It's good fun and quite informative.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 17, 2019 10:59 AM

Are We Curious Enough About Our Students?

I ran across an old interview with Douglas Crockford recently. When asked what traits were common to the weak programmers he'd seen over his career, Crockford said:

That's an easy one: lack of curiosity. They were so satisfied with the work that they were doing was good enough (without an understanding of what 'good' was) that they didn't push themselves.

I notice a lack of curiosity in many CS students, too. It's even easier for beginners than professional programmers to be satisfied with meeting the minimal requirements of a project -- "I got the right answers!" or, much worse, "It compiles!" -- and not realize that good code can be more. Part of our goal as teachers is to help students develop higher standards and more refined taste while they are in school.

There's another sense, though, in which holding students' lack of curiosity against them is a dangerous trap for professors. In moments of weakness, I occasionally look at my students and think, "Why doesn't this excite them more? Why don't they want to write code for fun?" I've come to realize over the years that our school system doesn't always do much to help students cultivate their curiosity. But with a little patience and a little conversation, I often find that my students are curious -- just not always about the things that intrigue me.

This shouldn't be a surprise. Even at the beginning of my career as a prof, I was a different sort of person than most of my students. Now that I'm a few years older, it's almost certain that I will not be in close connection with my students and what interests them most. Why would they necessarily care about the things I care about?

Bridging this gap takes time and energy. I have to work to build relationships both with individuals and with the group of students taking my course each semester. This work requires patience, which I've learned to appreciate more and more as I've taught. We don't always have the time we need in one semester, but that's okay. One of the advantages of teaching at a smaller school is time: I can get to know students over several semesters and multiple courses. We have a chance to build relationships that enrich the students' learning experience -- and my experience as a teacher.

Trying to connect with the curiosity of many different students creates logistical challenges when designing courses, examples, and assignments, of course. I'm often drawn back to Alan Kay's paradigm, Frank Oppenheimer's Exploratorium, which Kay discussed in his Turing Award lecture. The internet is, in many ways, a programmer's exploratorium, but it's so big, so disorganized, and of such varying levels of quality... Can we create collections of demos and problems that will contain something to connect with just about every student? Many of my posts on this blog, especially in the early years, grappled with this idea. (Here are two that specifically mention the Exploratorium: Problems Are The Thing and Mathematics, Problems, and Teaching.)

Sometimes I think the real question isn't: "Why aren't students more curious?" It is: "Are we instructors curious enough about our students?"


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

March 10, 2019 10:53 AM

Weekend Shorts

Andy Ko, in SIGCSE 2019 report:

I always have to warn my students before they attend SIGCSE that it's not a place for deep and nuanced discussions about learning, nor is it a place to get critical feedback about their ideas.
It is, however, a wonderful place to be immersed in the concerns of CS teachers and their perceptions of evidence.

I'm not sure I agree that one can't have deep, nuanced discussions about learning at SIGCSE, but it certainly is not a research conference. It is a great place to talk to and learn from people in the trenches teaching CS courses, with a strong emphasis on the early courses. I have picked up a lot of effective, creative, and inspiring ideas at SIGCSE over the years. Putting them onto sure scientific footing is part of my job when I get back.

~~~~~

Stephen Kell, in Some Were Meant for C (PDF), an Onward! 2017 essay:

Unless we can understand the real reasons why programmers continue to use C, we risk researchers continuing to solve a set of problems that is incomplete and/or irrelevant, while practitioners continue to use flawed tools.

For example,

... "faster safe languages" is seen as the Important Research Problem, not better integration.

... whereas Kell believes that C's superiority in the realm of integration is one of the main reasons that C remains a dominant, essential systems language.

Even with the freedom granted by tenure, academic culture tends to restrict what research gets done. One cause is a desire to publish in the best venues, which encourages work that is valued by certain communities. Another reason is that academic research tends to attract people who are interested in a certain kind of clean problem. CS isn't exactly "round, spherical chickens in a vacuum" territory, but... Language support for system integration, interop, and migration can seem like a grungier sort of work than most researchers envisioned when they went to grad school.

"Some Were Meant for C" is an elegant paper, just the sort of work, I imagine, that Richard Gabriel had in mind when he envisioned the essays track at Onward. Well worth a read.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

February 24, 2019 9:35 AM

Normative

In a conversation with Tyler Cowen, economist John Nye expresses disappointment with the nature of discourse in his discipline:

The thing I'm really frustrated by is that it doesn't matter whether people are writing from a socialist or a libertarian perspective. Too much of the discussion of political economy is normative. It's about "what should the ideal state be?"
I'm much more concerned with the questions of "what good states are possible?" And once good states are created that are possible, what good states are sustainable? And that, in my view, is a still understudied and very poorly understood issue.

For some reason, this made me think about software development. Programming styles, static and dynamic typing, software development methodologies, ... So much that is written about these topics tells us what's the right thing to do. "Do this, and you will be able to reliably make good software."

I know I've been partisan on some of these issues over the course of my time as a programmer, and I still have my personal preferences. But these days especially I find myself more interested in "what good states are sustainable?". What has worked for teams? What conditions seem to make those practices work well or not so well? How do teams adapt to changes in the domain or the business or the team's make-up?

This isn't to say that we can't draw conclusions about forces and context. For example, small teams seem to make it easier to adapt to changing conditions; to make it work with bigger teams, we need to design systems that encourage attention and feedback. But it does mean that we can help others succeed without always telling them what they must do. We can help them find a groove that syncs with them and the conditions under which they work.

Standing back and surveying the big picture, it seems that a lot of good states are sustainable, so long as we pay attention and adapt to the world around us. And that should be good news to us all.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

January 29, 2019 1:46 PM

Dependencies and Customizable Books

Shriram Krishnamurthi, in Books as Software:

I have said that a book is a collection of components. I have concrete evidence that some of my users specifically excerpt sections that suit their purpose. ...

I forecast that one day, rich document formats like PDF will recognize this reality and permit precisely such specifications. Then, when a user selects a group of desired chapters to generate a thinner volume, the software will automatically evaluate constraints and include all dependencies. To enable this we will even need "program" analyses that help us find all the dependencies, using textual concordances as a starting point and the index as an auxiliary data structure.

I am one of the users Krishnamurthi speaks of, who has excerpted sections from his Programming Languages: Application and Interpretation to suit the purposes of my course. Though I've not written a book, I do post, use, adapt, and reuse detailed lecture notes for my courses, and as a result I have seen both sides of the divide he discusses. I occasionally change the order of topics in a course, or add a unit, or drop a unit. An unseen bit of work is to account for the dependencies among concepts, examples, problems, and code in the affected sections, but also in the new whole. My life is simpler than that of book writers, who have to deal at least in part with rich document formats: I do everything in a small, old-style subset of HTML, which means I can use simple text-based tools for manipulating everything. But dependencies? Yeesh.

Maybe I need to write a big makefile for my course notes. Alas, that would not help me manage dependencies in the way I'd like, or in the way Krishnamurthi forecasts. As such, it would probably make things worse. I suppose that I could create the tool I need.
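The tool I have in mind is, at its core, a transitive closure over a dependency graph. Here is a minimal sketch in Python, with the section names and dependencies invented for illustration:

```python
# Given a map from each section to the sections it depends on,
# compute everything that must ship with a chosen excerpt.

def closure(selected, deps):
    """Return the selected sections plus all of their transitive dependencies."""
    needed = set()
    stack = list(selected)
    while stack:
        section = stack.pop()
        if section not in needed:
            needed.add(section)
            stack.extend(deps.get(section, []))
    return needed

# Hypothetical course notes: higher-order functions depend on
# functions-as-values, which depends on the intro, and so on.
deps = {
    "higher-order-fns": ["fns-as-values", "map-filter"],
    "map-filter":       ["fns-as-values"],
    "fns-as-values":    ["intro"],
    "closures":         ["fns-as-values"],
}

print(sorted(closure({"higher-order-fns"}, deps)))
# → ['fns-as-values', 'higher-order-fns', 'intro', 'map-filter']
```

Given a selected excerpt, it pulls in everything the excerpt depends on -- the kind of computation Krishnamurthi forecasts rich document formats doing for us automatically. The hard part, of course, is producing the dependency map in the first place.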


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

January 08, 2019 2:10 PM

Sometimes, Copy and Paste Is the Right Thing To Do

Last week I blogged about writing code that is easy to delete, drawing on some great lines from an old 'programming is terrible' post. Here's another passage from @tef's post that's worth thinking more about:

Step 1: Copy-paste code

Building reusable code is something that's easier to do in hindsight with a couple of examples of use in the code base, than foresight of ones you might want later. On the plus side, you're probably re-using a lot of code already just by using the file-system, why worry that much? A little redundancy is healthy.

It's good to copy-paste code a couple of times, rather than making a library function, just to get a handle on how it will be used. Once you make something a shared API, you make it harder to change.

There's not a great one-liner in there, but these paragraphs point to a really important lesson, one that we programmers sometimes have a hard time learning. We are told so often "don't repeat yourself" that we come to think that all repetition is the same. It's not.

One use of repetition is in avoiding what @tef calls, in another 'programming is terrible' post, "preemptive guessing". Consider the creation of a new framework. Oftentimes, designing a framework upfront doesn't work very well because we don't know the domain's abstractions yet. One of the best ways to figure out what they are is to write several applications first, and let the framework fall out of the applications. While doing this, repetition is our friend: it's most useful to know what things don't change from one application to another. This repetition is a hint on how to build the framework we need. I learned this technique from Ralph Johnson.

I use and teach a similar technique for programming in smaller settings, too. When we see two bits of code that resemble one another, it often helps to increase the duplication in order to eliminate it. (I learned this idea from Kent Beck.) In this case, the goal of the duplication is to find useful abstractions. Sometimes, though, code duplication is really a hint to think differently about a problem. Factoring out a function or class -- finding a new abstraction -- may be incidental to the learning that takes place.
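Here is a tiny, invented example of increasing the duplication in order to eliminate it:

```python
# Before: two functions that look similar but not identical.
def total_price(items):
    result = 0
    for item in items:
        result += item["price"]
    return result

def total_weight(items):
    result = 0
    for item in items:
        result += item["weight"]
    return result

# Step 1: rewrite until the two bodies are identical except for
# one varying part -- the field name.
# Step 2: factor out the common shape, passing the difference in.
def total_of(field, items):
    result = 0
    for item in items:
        result += item[field]
    return result

cart = [{"price": 3, "weight": 10}, {"price": 4, "weight": 2}]
print(total_of("price", cart))   # → 7, same answer as total_price(cart)
```

Making the two loops identical except for the field name is what exposes the abstraction; total_of then falls out almost mechanically.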

For me, this line from the second programming-is-terrible post captures this idea perfectly:

... duplicate to find the right abstraction first, then deduplicate to implement it.

My spell checker objects to the word "deduplicate", but I'll allow it.

All of these ideas taken together are the reason that I think copy-and-paste gets an undeservedly bad name. Used properly, it is a valuable programming technique -- essential, really. I've long wanted to write a Big Ball of Mud-style paper about copy-and-paste patterns. There are plenty of good reasons why we write repetitive code and, as @tef says in the two posts I link to above, sometimes leaving duplication in your code is the right thing to do.

One final tribute to repetition for now. While researching this blog post, I ran across a blog entry of mine from October 2016. Apparently, I had just read @tef's Write code that is easy to delete... post and felt an undeniable urge to quote and comment on it. If you read that 2016 post, you'll see that my Writing code that is easy to delete post from last week duplicates it in spirit and, in a few cases, even the details. I swear that I read @tef's post again last week and wrote the new blog entry from scratch, with no memory of the 2016 events. I am perfectly happy with this second act. Sometimes, ideas circle through our brains again, changing us in imperceptible ways. As @tef says, a little redundancy is healthy.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

January 06, 2019 10:10 AM

Programming Never Feels Easier. You Just Solve Harder Problems.

Alex Honnold is a rock climber who was the first person to "free solo" Yosemite's El Capitan rock wall. In an interview for a magazine, he was asked what it takes to reach ever higher goals. One bit of advice was to "aim for joy, not euphoria". When you prepare to achieve a goal, it may not feel like a big surprise when you achieve it because you prepared to succeed. Don't expect to be overwhelmed by powerful emotions when you accomplish something new; doing so sets too high a standard and can demotivate you.

This paragraph, though, is the one that spoke to me:

Someone recently said to me about running: "It never feels easier--you just go faster." A lot of sports always feel terrible. Climbing is always like that. You always feel weak and like you suck, but you can do harder and harder things.

As a one-time marathoner, never fast but always training to run a PR in my next race, I know what Honnold means. However, I also feel something similar as a programmer. Writing software often seems like a slog that I'm not very good at. I'm forever looking up language features in order to get my code to work, and then I end up with programs that are bulky and fight me at every turn. I refactor and rewrite and... find myself back in the slog. I don't feel terrible all that often, but I am usually a little on edge.

Yet if I compare the programs I write today with ones I wrote 5 or 10 or 30 years ago, I can see that I'm doing more interesting work. This is the natural order. Once I know how to do one thing, I seek tougher problems to solve.

In the article, the passage quoted above is labeled "Feeling awful is normal." I wonder if programming feels more accessible to people who are comfortable with a steady, low-grade intellectual discomfort punctuated by occasional bouts of head banging. Honnold's observation might reassure beginning programmers who don't already know that feeling uneasy is a natural part of pushing yourself to do more interesting work.

All that said, even when I was training for my next marathon, I was always able to run for fun. There was nothing quite like an easy run at sunrise to boost my mood. Fortunately, I am still able to program that way, too. Every once in a while, I like to write code to solve some simple problem I run across on Twitter or in a blog entry somewhere. I find that these interludes recharge me before I battle the next big problem I decide to tackle. I hope that my students can still program in this way as they advance on to bigger challenges.


Posted by Eugene Wallingford | Permalink | Categories: Running, Software Development, Teaching and Learning

January 02, 2019 2:22 PM

Writing Code that is Easy to Delete

Last week someone tweeted a link to Write code that is easy to delete, not easy to extend. It contains a lot of great advice on how to create codebases that are easy to maintain and easy to change, the latter being an essential feature of almost any code that is the former. I liked this article so much that I wanted to share some of its advice here. What follows are a few of the many one- and two-liners that serve as useful slogans for building maintainable software, with light commentary.

... repeat yourself to avoid creating dependencies, but don't repeat yourself to manage them.

This line from the first page of the article hooked me. I'm not sure I had ever had this thought, at least not so succinctly, but it captures a bit of understanding that I think I had. Reading this, I knew I wanted to read the rest of the article.

Make a util directory and keep different utilities in different files. A single util file will always grow until it is too big and yet too hard to split apart. Using a single util file is unhygienic.

This isn't the sort of witticism that I quote in the rest of this post, but it's solid advice that I've come to live by over the years. It's a pattern I follow myself.

Boiler plate is a lot like copy-pasting, but you change some of the code in a different place each time, rather than the same bit over and over.

I really like the author's distinction between boilerplate and copy-and-paste. Copy-and-paste has valuable uses (heresy, I know; more later), whereas boilerplate sucks the joy out of almost every programmer's life.

You are writing more lines of code, but you are writing those lines of code in the easy-to-delete parts.

Another neat distinction. Even when we understand that lines of code are an expense as much as (or instead of) an investment, we know that sometimes we have to write more code. Just do it in units that are easy to delete.

A lesson in separating concerns, from Python libraries:

requests is about popular http adventures, urllib3 is about giving you the tools to choose your own adventure.

Layers! I have had users of both of these libraries suggest that the other should not exist, but they serve different audiences. They meet different needs in a way that more than makes up for the cost of the supposed duplication.

Building a pleasant to use API and building an extensible API are often at odds with each other.

There's nothing earth-shattering in this observation, but I like to highlight different kinds of trade-off whenever I can. Every important decision we make writing programs is a trade-off.

Good APIs are designed with empathy for the programmers who will use it, and layering is realising we can't please everyone at once.

This advice elaborates on the earlier quote: repeat yourself to avoid creating dependencies, but don't repeat yourself to manage them. Creating a separate, layered API is one way to avoid dependencies that make code hard to delete.

Sometimes it's easier to delete one big mistake than try to delete 18 smaller interleaved mistakes.

Sometimes it really is best to write a big chunk of code precisely because it is easy to delete. An idea that is distributed throughout a bunch of functions or modules has to be disentangled before you can delete it.

Becoming a professional software developer is accumulating a back-catalogue of regrets and mistakes.

I'm going to use this line in my spring Programming Languages class. There are unforeseen advantages to all the practice we profs ask students to do. That's where experience comes from.

We are not building modules around being able to re-use them, but being able to change them.

This is another good bit of advice for my students, though I'll write this one more clearly. When students learn to program, textbooks often teach them that the main reason to write a function is that you can reuse it later, thus saving the effort of writing similar code again. That's certainly one benefit of writing a function, but experienced programmers know that there are other big wins in creating functions, classes, and modules, and that these wins are often even more valuable than reuse. In my courses, I try to help students appreciate the value of names in understanding and modifying code. Modularity also makes it easier to change and, yes, delete code. Unfortunately, students don't always get the right kind of experience in their courses to develop this deeper understanding.

Although the single responsibility principle suggests that "each module should only handle one hard problem", it is more important that "each hard problem is only handled by one module".

Lovely. The single module that handles a hard problem is a point of leverage. It can be deleted when the problem goes away. It can be rewritten from scratch when you understand the problem better or when the context around the problem changes.

This line is the heart of the article:

The strategies I've talked about -- layering, isolation, common interfaces, composition -- are not about writing good software, but how to build software that can change over time.

Good software is software that you can change. One way to create software you can change is to write code that you can easily replace.

Good code isn't about getting it right the first time. Good code is just legacy code that doesn't get in the way.

A perfect aphorism to close the article, and a perfect way to close this post: Good code is legacy code that doesn't get in the way.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

December 26, 2018 2:44 PM

It's Okay To Say, "I Don't Know." Even Nobel Laureates Do It.

I ran across two great examples of humility by Nobel Prize-winning economists in recent conversations with Tyler Cowen. When asked, "Should China and Japan move to romanized script?", Paul Romer said:

I basically don't know the answer to that question. But I'll use that as a way to talk about something else ...

Romer could have speculated or pontificated; instead, he acknowledged that he didn't know the answer and pivoted the conversation to a related topic he had thought about (reforming spelling in English, for which he offered an interesting computational solution). By shifting the topic, Romer added value to the conversation without pretending that any answer he could give to the original question would have more value than as speculation.

A couple of months ago, Cowen sat with Paul Krugman. When asked whether he would consider a "single land tax" as a way to encourage a more active and more equitable economy, Krugman responded:

I just haven't done my homework on that.

... and left it there. To his credit, Cowen did not press for an uninformed answer; he moved on to another question.

I love the attitude that Krugman and Romer adopt and really like Krugman's specific answer, which echoed his response to another question earlier in the conversation. We need more people answering questions this way, more often and in more circumstances.

Such restraint is probably even more important in the case of Nobel laureates. If Romer and Krugman choose to speculate on a topic, a lot of people will pay attention, even if it is a topic they know little about. We might learn something from their speculations, but we might also forget that they are only uninformed speculation.

I think what I like best about these answers is the example that Romer and Krugman set for the rest of us: It's okay to say, "I don't know." If you have not done the homework needed to offer an informed answer, it's often best to say so and move on to something you're better prepared to discuss.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

December 23, 2018 10:45 AM

The Joy of Scholarship

This morning I read Tyler Cowen's conversation with Paul Romer. At one point, Romer talks about being introduced to C.S. Peirce, who had deep insights into "abstraction and how we use abstraction to communicate" (a topic Romer and Cowen discuss earlier in the interview). Romer is clearly enamored with Peirce's work, but he's also fascinated by the fact that, after a long career thinking about a set of topics, he could stumble upon a trove of ideas that he didn't even know existed:

... one of the joys of reading -- that's not a novel -- but one of the joys of reading, and to me slightly frightening thing, is that there's so much out there, and that a hundred years later, you can discover somebody who has so many things to say that can be helpful for somebody like me trying to understand, how do we use abstraction? How do we communicate clearly?

But the joy of scholarship -- I think it's a joy of maybe any life in the modern world -- that through reading, we can get access to the thoughts of another person, and then you can sample from the thoughts that are most relevant to you or that are the most powerful in some sense.

This process, he says, is the foundation for how we transmit knowledge within a culture and across time. It's how we grow and share our understanding of the world. This is a source of great joy for scholars and, really, for anyone who can read. It's why so many people love books.

Romer's interest in Peirce calls to mind my own fascination with his work. As Romer notes, Peirce had a "much more sophisticated sense about how science proceeds than the positivist sort of machine that people describe". I discovered Peirce through an epistemology course in graduate school. His pragmatic view of knowledge, along with William James's views, greatly influenced how I thought about knowledge. That, in turn, redefined the trajectory by which I approached my research in knowledge-based systems and AI. Peirce and James helped me make sense of how people use knowledge, and how computer programs might.

So I feel a great kinship with Romer in his discovery of Peirce, and the joy he finds in scholarship.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Teaching and Learning

November 28, 2018 1:56 PM

If it matters enough to be careful, it matters enough to build a system.

In Quality and Effort, Seth Godin reminds us that being careful can take us only so far toward the quality we seek. Humans make mistakes, so we need processes and systems in place to help us avoid them. Near the end of the post, he writes:

In school, we harangue kids to be more careful, and spend approximately zero time teaching them to build better systems instead. We ignore checklists and processes because we've been taught that they're beneath us.

This paragraph isolates one of the great powers we can teach our students, but also a glaring weakness in how most of us actually teach. I've been a professor for many years now, and before that I was a student for many years. I've seen a lot of students succeed and a few not do as well as they or I had hoped. Students who are methodical, patient, and disciplined in how they read, study, and program are much more likely to be in the successful group.

Rules and discipline sometimes get a bad rap these days. Creativity and passion are our catchwords. But dull, boring systems are often the keys that unlock the doors to getting things done and moving on to learn and do cooler things.

Students occasionally ask me why I slavishly follow the processes I teach them, whether it's a system as big as test-first development and refactoring or a process as simple as the technique for creating NFAs and converting them to DFAs. I tell them that I don't always trust myself but that I do trust the system: it almost always leads me to a working solution. Sure, in this moment or that I might be fine going off script, but... Good habits generate positive returns, while freewheeling it too often lets an error sneak in. (When I go off script in class, they too often get to see just that!)

We do our students a great favor when we help them learn systems that work. Following a design recipe or something like it may be boring, but students who learn to follow it develop stronger programming skills, enjoy the success of getting projects done successfully, and graduate on to more interesting problems. As the system becomes ingrained as habit, they usually begin to appreciate it as an enabler of their success.

I agree with Godin: If it matters enough to be careful, then it matters enough to build (or learn) a system.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

November 25, 2018 10:50 AM

Seek Out Idea Amplifiers, Not Sound Absorbers

Richard Hamming -- he of Hamming codes, Hamming numbers, and Hamming distance -- wasn't a big fan of brainstorming. He preferred to talk about his ideas with another capable person, because the back-and-forth was more likely to help the idea reach "critical mass". But not every capable person is a good partner for such conversations:

There is also the idea I used to call 'sound absorbers'. When you get too many sound absorbers, you give out an idea and they merely say, "Yes, yes, yes." What you want to do is get that critical mass in action; "Yes, that reminds me of so and so," or, "Have you thought about that or this?" When you talk to other people, you want to get rid of those sound absorbers who are nice people but merely say, "Oh yes," and to find those who will stimulate you right back.

What a simple bit of advice: seek out idea amplifiers, not sound absorbers. Talk with people who help you expand and refine your ideas, by making connections to other work or by pushing you to consider implications or new angles. I think that talking to the right people can boost your work in another important way, too: they will feed your motivation, helping you to maintain the energy you need to stay excited and active.

This is one of the great benefits of blogs and Twitter, used well. We have so many more opportunities than ever before to converse with the right people. Unlike Hamming, I can't walk the halls of Bell Labs, or on any regular basis walk the halls of a big research university or one of the great computing companies. But I can read the blogs written by the researchers who work there, follow them on Twitter, and participate in amplifying conversations. Blogs and Twitter are great sources of both ideas and encouragement.

(The passage quoted above comes from the question-and-answer portion of Hamming's classic 1986 talk, You and Your Research. I re-read it this morning, as I occasionally do, because it's one of those papers that causes me to be more intentional in how I approach my work. Like Hamming, "I have a lot of faults", which means that there is considerable value to be found in judicious self-management. I need periodic booster shots.)


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

November 19, 2018 3:35 PM

Why Don't More CS Professors Use the Results of CS Education Research?

Over the last few days, there has been an interesting Twitter thread on the challenges of getting CS instructors to adopt the results of CS education research. Actually, it started as a thread and grew into a tree... Here's a reasonable jumping-in spot if you'd like to explore.

I imagine that it's hard to get research results into the classroom in a lot of disciplines. Research can be abstract and is usually out of context for any given school. But even when research groups implement curricula and train teachers, it can be hard to spread practices that we know work well.

The answer educators give for not adopting research results often boils down to, "It's complicated." Schools are different; students are different; the end goals are different. Indeed, the original Twitter thread spawned a student diversity conversation that picked up when the energy of the original topic waned. This tweet is a nice anchor for that conversation:

Majors vs. non-majors. Undergrads vs. high school vs. elementary school. Adult software developers vs end-user programmers vs conversational programmers. Different goals: authenticity, security, software engineering, algorithmic thinking, practice production. No one way.

All of these distinctions are real. I've seen several of them affect how we teach introductory programming at my school. And let's be honest: when we talk about research in CS education, we are often talking almost exclusively about teaching programming. How we teach intro programming can, though, have a big effect on student performance in later courses.

But even in the face of these different dimensions, I still wonder why more of us don't incorporate more research results into our teaching. I have a few ideas borne out of my years as a faculty member.

I think a big part of the effect that students have on whether faculty adopt new pedagogical approaches comes from an angle not mentioned in that list: competitive pressure. Departments fear that if they adopt an unusual curriculum or an unusual first language, they will lose students. These fears manifest themselves if you propose to teach a relatively ordinary language such as Ada; they grow quickly if your approach uses an esoteric parentheses language (EPL). Parents of HS students will say, "Your competitors don't use EPL or your unusual pedagogical approach. They teach Good Old-Fashioned CS, the way it's always been, a way that's proven to get graduates jobs. Why shouldn't we send our daughters and sons to another school?"

HS students and their parents do ask questions such as these, and it takes a lot of perseverance to respond over and over. Only the most committed believers can stay the course. Unfortunately, it can take a few years for the benefits of a new approach to show up in upper division courses, let alone in the performance of graduates. New curricula rarely have that much time to succeed. If the department's faculty aren't all fully committed to an approach, then as soon as enrollments dip a bit, they join the chorus. There's a scapegoat ready at hand.

Even if we step back from the competitive disadvantage angle, we can see a similar phenomenon at play. The tweet quoted above lists many different kinds of learners. These distinctions can serve as a handy reason for faculty not to change when faced with CS ed research that may only apply to one student population or one context. And resistance to change is a strong force.

Speaking as a long-time professor, I think the most powerful forces against change arise not on the student side of the equation but on the teachers'. Learning a bunch of new techniques, integrating them into a coherent curriculum, and then mastering them is hard. It takes time profs don't always have a lot of. When they do have some time, reading and applying CS ed research pulls them away from their own research or even away from the things they like to do in the classroom.

Personal confession time. This all reminds me of a conversation I had with Mike Clancy many years ago. He was part of the team that developed the idea of using case studies to teach programming. I've always admired the approach and thought the intro textbook he and Marcia Linn built around it was pretty cool. When I told Mike this, he asked, "Well then, why aren't you using them? Why aren't you writing case studies of your own?" These were great questions that took me aback. My answers sound like the excuses I've talked about in this post... My colleagues were skeptical of the approach and preferred other textbooks; convincing them to change would require a lot of work and political capital. Writing case studies is hard work, and at the time I was more excited to work on my own somewhat-related ideas. "Soon", I said. But soon never came.

I sit here still admiring case studies -- and worked examples, and Parsons problems, and the design recipe, and How to Design Programs. I've incorporated the idea of design recipes into my Programming Languages course, but I haven't fully committed to it yet. The others? They are still dreams. (I did try my first Parsons problem this semester: an assembly language program in my compilers course. I liked the result and think I'll try more next semester when I teach my students functional programming in Racket.)

Am I slow to adopt evidence-backed pedagogy? Lazy? Short on time? I don't know, but I suspect that there are many reasons. Change is hard. I do hope that I haven't done my students too much disservice in the meantime.

At one point in the Twitter conversation that launched this post, someone wrote something to the effect, "convincing researchers is easier than convincing practitioners". But that brings to mind another question: Why aren't all CS ed researchers advocating the use of worked examples? Design recipe and HTDP-style curricula? Parsons problems? Why aren't more CS ed researchers using them in their own classrooms? Maybe they are and I haven't been paying attention. If that's so, then I doubt that many other CS profs know about this widespread adoption of advances among CS ed researchers either.

Some disciplines, such as physics, seem to have developed more of a culture for incorporating education research into the classroom than computer science. Can we change that? If so, I think there is room for the CS ed research community to lead the way. But... it's complicated.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

October 21, 2018 9:53 AM

Find the Hard Work You're Willing to Do

I like this passage from John Urschel Goes Pro, about the former NFL player who is pursuing a Ph.D. in math:

The world thinks mathematicians are people for whom math is easy. That's wrong. Sure, some kids, like Urschel, have little trouble with school math. But everyone who starts down the road to creating really new mathematics finds out what Urschel did: It's a struggle. A prickly, sometimes lonely struggle whose rewards are uncertain and a long time coming. Mathematicians are the people who love that struggle.

It's cliché to tell kids to "find their passion". That always seems to me like an awful lot of pressure to put on young adults, let alone teenagers. I meet with potential CS majors frequently, both college students and high school students. Most haven't found their passion yet, and as a result many wonder if there is something wrong with them. I do my best to assure them that, no, there is nothing wrong with them. It's an unreasonable expectation placed on them by a world that, usually with good intentions, is trying to encourage them.

I don't think there is anything I'd rather be than a computer scientist, but I did not walk a straight path to being one. Some choices early on were easy: I like biology as a body of knowledge, but I never liked studying biology. That seemed a decent sign that maybe biology wasn't for me. (High-school me didn't understand that there might be a difference between school biology and being a biologist...) But other choices took time and a little self-awareness.

From the time I was eight years old or so, I wanted to be an architect. I read about architecture; I sent away for professional materials from the American Institute of Architects; I took courses in architectural drafting at my high school. (There was an unexpected benefit to taking those courses: I got to meet a lot of people who were not part of my usual academic crowd.) Then I went off to college to study architecture... and found that, while I liked many things about the field, I didn't really like to do the grunt work that is part of the architecture student's life, and when the assigned projects got more challenging, I didn't really enjoy working on them.

But I had enjoyed working on the hard projects I'd encountered in my programming class back in high school. They were challenges I wanted to overcome. I changed my major and dove into college CS courses, which were full of hard problems -- but hard problems that I wanted to solve. I didn't mind being frustrated for an entire semester one year, working in assembly language and JCL, because I wanted to solve the puzzles.

Maybe this is what people mean when they tell us to "find our passion", but that phrase seems pretty abstract to me. Maybe instead we should encourage people to find the hard problems they like to work on. Which problems do you want to keep working on, even when they turn out to be harder than you expected? Which kinds of frustration do you enjoy, or at least are willing to endure while you figure things out? Answers to these very practical questions might help you find a place where you can build an interesting and rewarding life.

I realize that "Find your passion" makes for a more compelling motivational poster than "What hard problems do you enjoy working on?" (and even that's a lot better than "What kind of pain are you willing to endure?"), but it might give some people a more realistic way to approach finding their life's work.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Teaching and Learning

October 05, 2018 3:39 PM

Why Patterns Failed and Why You Should Care

Beds are useful. I enjoyed my bed at the hotel last night. But you won't find a pattern called bed in A Pattern Language, and that's because we don't have a problem with beds. Beds are solved. What we have a problem with is building with beds. And that's why, if you look in A Pattern Language, you'll find there are a number of patterns that involve beds.

Now, the analogy I want to make is, basically, the design patterns in the book Design Patterns are at the same level as the building blocks in the book A Pattern Language. So Bridge, which is one of the design patterns, is the same kind of thing as column. You can call it a pattern, but it's not really a pattern that gets at the root problem we're trying to solve. And if Alexander had written a book that had a pattern called Bed and another one called Chair, I imagine that that book would have failed and not inspired all of the people that the actual book did inspire. So that, I claim, is why patterns failed.

Those two nearly-sequential paragraphs are from Patterns Failed. Why? Should We Care?, a talk Brian Marick gave at Deconstruct 2017. It's a good talk. Marick's provocative claim that, as an idea, software patterns failed is various degrees of true and false depending on how you define 'patterns' and 'failed'. It's hard to declare the idea an absolute failure, because a lot of software developers and UI designers use patterns to good effect as a part of their work every day. But I agree with Brian that software patterns failed to live up to their promise or to the goals of the programmers who worked so hard to introduce Christopher Alexander's work to the software community. I agree, too, that the software pattern community's inability to document and share patterns at the granularity that made Alexander's patterns so irresistible is a big part of why.

I was a second- or third-generation member of the patterns community, joining in after Design Patterns had been published and, like Marick, I worked mostly on the periphery. Early on I wrote patterns that related to my knowledge-based systems work. Much of my pattern-writing, though, was at the level of elementary patterns, the patterns that novice programmers learn when they are first learning to program. Even at that level, the most useful patterns often were ones that operated up one level from the building blocks that novices knew.

Consider Guarded Linear Search, from the Loop Patterns paper that Owen Astrachan and I workshopped at PLoP 1998. It helps beginners learn how to arrange a loop and an if statement in a way that achieves a goal. Students in my beginning courses often commented how patterns like this one helped them write programs because, while they understood if statements and for statements and while statements, they didn't always know what to do with them when facing a programming task. At that most elementary level, the Guarded Linear Search pattern was a pleasing -- and useful -- whole.
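For readers who haven't seen the Loop Patterns paper, here is a minimal sketch of the pattern in Python. The function name and sample task are my own invention for illustration; the pattern itself is simply the arrangement of a loop around a guarding if statement:

```python
def first_negative(values):
    """Guarded Linear Search: visit candidates in order, guard the
    action with a test, and act on the first match."""
    for v in values:      # the loop supplies each candidate in turn
        if v < 0:         # the guard recognizes a match
            return v      # act on the first match and stop searching
    return None           # no element satisfied the guard
```

A novice who knows for and if as isolated constructs still has to learn this arrangement as a unit, which is exactly the gap the pattern fills.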

That said, there aren't many "solved problems" for beginners, so we often wrote patterns that dropped down to the level of building blocks simply to help novices learn basic constructs. Some of the Loop Patterns paper does that, as does Joe Bergin's Patterns for Selection. But work in the elementary patterns community would have been much more valuable, and potentially had more effect, if we had thought harder about how to find and document patterns at the level of Alexander's patterns.

Perhaps the patterns sub-community in which I've worked that best achieved its goals was the pedagogical patterns community. These are not software patterns but rather patterns of teaching techniques. They document solutions to problems that teachers face every day. I think I'd be willing to argue that the primary source of pedagogical patterns' effectiveness is that these solutions combine more primitive building blocks (delivery techniques, feedback techniques, interaction techniques) in a way that make learning and instruction both more effective and more fun. As a result, they captured a lot of teachers' interest.

I think that Marick's diagnosis also points out the error in a common criticism of software patterns. Over the years, we often heard folks say that software patterns existed only because people used horrible languages like C++ and Java. In a more powerful language, any purported pattern would be implemented as a language primitive or could be made a primitive by writing a macro. But this misses the point of Alexander's insight. The problem in software development isn't with inheritance and message passing and loops, just as the problem in architecture isn't with beds and windows and space. It's with finding ways to arrange these building blocks in a way that's comfortable and nice and, to use Alexander's term, "life-giving". That challenge exists in all programming languages.

Finally, you probably won't be surprised to learn that I agree with Marick that we should all care that software patterns failed to live up to their promise. Making software is fun, but it's also challenging. Alexander's idea of a pattern language is one way that we might help programmers do their jobs better, enjoy their jobs more, and produce software that is both functional and habitable. The first pass was a worthy effort, and a second pass, informed by the lessons of the first, might get us closer to the goal.

Thanks to Brian for giving this talk and to the folks at Deconstruct for posting it online. Go watch it.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

September 30, 2018 10:31 AM

Strange Loop 2: Simon Peyton Jones on Teaching CS in the Schools

Simon Peyton Jones discusses one of the myths of getting CS into the primary and secondary classroom: it's all about the curriculum

The opening keynote this year was by Simon Peyton Jones of Microsoft Research, well known in the programming languages community for Haskell and many other things. But his talk was about something considerably less academic: "Shaping Our Children's Education in Computing", a ten-year project to reform the teaching of computing in the UK primary and secondary schools. It was a wonderful talk, full of history, practical advice, lessons learned, and philosophy of computing. Rather than try to summarize everything Peyton Jones said, I will let you watch the video when it is posted (which will be as early as next week, I think).

I would, though, like to highlight one particular part of the talk, the way he describes computer science to a non-CS audience. This is an essential skill for anyone who wants to introduce CS to folks in education, government, and the wider community who often see CS as either hopelessly arcane or as nothing more than a technology or a set of tools.

Peyton Jones characterized computing as being about information, computation, and communication. For each, he shared one or two ways to discuss the idea with an educated but non-technical audience. For example:

  • Information.   Show two images, say the Mona Lisa and a line drawing of a five-pointed star. Ask which contains more information. How can we tell? How can we compare the amounts? How might we write that information down?

  • Computation.   Use a problem that everyone can relate to, such as planning a trip to visit all the US state capitals in the fewest miles or sorting a set of numbers. For the latter, he used one of the activities from CS Unplugged on sorting networks as an example.

  • Communication.   Here, Peyton Jones used the simple, elegant idea underlying the Diffie-Hellman algorithm for sharing a secret as his primary example. It's not at all obvious to most people who don't already know it that the problem can be solved at all!
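For readers who haven't seen Diffie-Hellman before, the whole exchange fits in a few lines of Python. The numbers here are toy values chosen only to make the arithmetic visible; real use requires large, carefully chosen primes:

```python
# Toy Diffie-Hellman key exchange over a small prime field.
p, g = 23, 5                 # public modulus and generator, agreed in the open

a = 6                        # Alice's private secret
b = 15                       # Bob's private secret

A = pow(g, a, p)             # Alice publishes g^a mod p
B = pow(g, b, p)             # Bob publishes g^b mod p

alice_key = pow(B, a, p)     # Alice computes (g^b)^a mod p
bob_key = pow(A, b, p)       # Bob computes (g^a)^b mod p

assert alice_key == bob_key  # both arrive at the same shared secret
```

The surprise for a non-technical audience is that an eavesdropper sees p, g, A, and B, yet cannot feasibly recover the shared key.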

In all three cases, it helps greatly to use examples from many disciplines and to ask questions that encourage the audience to ask their own questions, form their own hypotheses, and create their own experiments. The best examples and questions actually enable people to engage with computing through their own curiosity and inquisitiveness. We are fascinated by computing; other people can be, too.

There is a huge push in the US these days for everyone to learn how to program. This creates a tension among many of us computer scientists, who know that programming isn't everything that we do and that its details can obscure CS as much as they illuminate it. I thought that Peyton Jones used a very nice analogy to express the relationship between programming and CS more broadly: Programming is to computer science as lab work is to physics. Yes, you could probably take lab work out of physics and still have physics, but doing so would eviscerate the discipline. It would also take away a lot of what draws people to the discipline. So it is with programming and computer science. But we have to walk a thin line, because programming is seductive and can ultimately distract us from the ideas that make programming so valuable in the first place.

Finally, I liked Peyton Jones's simple summary of the reasons that everyone should learn a little computer science:

  • Everyone should be able to create digital media, not just consume it.
  • Everyone should be able to understand their tools, not just use them.
  • People should know that technology is not magic.

That last item grows increasingly important in a world where the seeming magic of computers redefines every sector of our lives.

Oh, and yes, a few people will get jobs that use programming skills and computing knowledge. People in government and business love to hear that part.

Regular readers of this blog know that I am a sucker for aphorisms. Peyton Jones dropped a few on us, most earnestly when encouraging his audience to participate in the arduous task of introducing and reforming the teaching of CS in the schools:

  • "If you wait for policy to change, you'll just grow old. Get on with it."
  • "There is no 'them'. There is only us."

(The second of these already had a home in my brain. My wife has surely tired of hearing me say something like it over the years.)

It's easy to admire great researchers who have invested so much time and energy into solving real-world problems, especially in our schools. As long as this post is, it covers only a few minutes from the middle of the talk. My selection and bare-bones outline don't do justice to Peyton Jones's presentation or his message. Go watch the talk when the video goes up. It was a great way to start Strange Loop.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

September 20, 2018 4:44 PM

Special Numbers in a Simple Language

This fall I am again teaching our course in compiler development. Working in teams of two or three, students will implement from scratch a complete compiler for a simple functional language that consists of little more than integers, booleans, an if statement, and recursive functions. Such a language isn't suitable for much, but it works great for writing programs that do simple arithmetic and number theory. In the past, I likened it to an integer assembly language. This semester, my students are compiling a Pascal-like language of this sort that we call Flair.

If you've read my blog much in the falls over the last decade or so, you may recall that I love to write code in the languages for which my students write their compilers. It makes the language seem more real to them and to me, gives us all more opportunities to master the language, and gives us interesting test cases for their scanners, parsers, type checkers, and code generators. In recent years I've blogged about some of my explorations in these languages, including programs to compute Farey numbers and excellent numbers, as well as trying to solve one of my daughter's AP calculus problems.

When I run into a problem, I usually get an itch to write a program, and in the fall I want to write it in my students' language.

Yesterday, I began writing my first new Flair program of the semester. I ran across this tweet from James Tanton, which starts:

N is "special" if, in binary, N has a 1s and b 0s and a & b are each factors of N (so non-zero).

So, 10 is special because:

  • In binary, 10 is 1010.
  • 1010 contains two 1s and two 0s.
  • Two is a factor of 10.

9 is not special because its binary rep also contains two 1s and two 0s, but two is not a factor of 9. 3 is not special because its binary rep has no 0s at all.
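As a sanity check on the definition, the test is a one-liner in most languages with loops and strings. Here it is in Python; the name is_special is mine, and of course the whole point of the exercise in class is that Flair has none of these conveniences:

```python
def is_special(n):
    """N is 'special' if its binary form has a ones and b zeros,
    and a and b are each nonzero factors of n."""
    bits = bin(n)[2:]                    # e.g., 10 -> '1010'
    ones = bits.count('1')
    zeros = bits.count('0')
    return (ones > 0 and zeros > 0
            and n % ones == 0 and n % zeros == 0)
```

Running it confirms the examples above: 10 is special, while 9 and 3 are not.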

My first thought upon seeing this tweet was, "I can write a Flair program to determine if a number is special." And that is what I started to do.

Flair doesn't have loops, so I usually start every new program by mapping out the functions I will need simply to implement the definition. This makes sure that I don't spend much time implementing loops that I don't need. I ended up writing headers and default bodies for three utility functions:

  • convert a decimal number to binary
  • count the number of times a particular digit occurs in a number
  • determine if a number x divides evenly into a number n

With these helpers, I was ready to apply the definition of specialness:

    return divides(count(1, to_binary(n)), n)
       and divides(count(0, to_binary(n)), n)

Calling to_binary on the same argument is wasteful, but Flair doesn't have local variables, either. So I added one more helper to implement the design pattern "Function Call as Variable Assignment", apply_definition:

    function apply_definition(binary_n : integer, n : integer) : boolean

and called it from the program's main:

    return apply_definition(to_binary(n), n)

This is only the beginning. I still have a lot of work to do to implement to_binary, count and divides, using recursive function calls to simulate loops. This is another essential design pattern in Flair-like languages.
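To show the recursive shape these helpers take, here is one plausible way to write them, sketched in Python standing in for Flair. The function names come from the post; the bodies are my own reconstruction, not the code I wrote for class. Note that to_binary returns a base-10 integer whose digits are the bits of n, which is why count then peels off base-10 digits:

```python
def to_binary(n):
    """Decimal n as a base-10 number whose digits are n's bits: 10 -> 1010."""
    if n < 2:
        return n
    return 10 * to_binary(n // 2) + (n % 2)

def count(digit, n):
    """Count occurrences of digit among the base-10 digits of n."""
    if n < 10:
        return 1 if n == digit else 0
    return count(digit, n // 10) + (1 if n % 10 == digit else 0)

def divides(x, n):
    """Does x divide evenly into n? Guard against a zero divisor."""
    return x != 0 and n % x == 0
```

Each function replaces a loop with a recursive call on a smaller argument, which is exactly the habit students must develop to program in a loopless language.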

As I prepared to discuss my new program in class today, I found a bug: My divides test was checking for factors of binary_n, not the decimal n. I also renamed a function and one of its parameters. Explaining my programs to students, a generalization of rubber duck debugging, often helps me see ways to make a program better. That's one of the reasons I like to teach.

Today I asked my students to please write me a Flair compiler so that I can run my program. The course is officially underway.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

September 05, 2018 3:58 PM

Learning by Copying the Textbook

Or: How to Learn Physics, Professional Golfer Edition

Bryson DeChambeau is a professional golfer, in the news recently for consecutive wins in the FedExCup playoff series. But he can also claim an unusual distinction as a student of physics:

In high school, he rewrote his physics textbook.

DeChambeau borrowed the textbook from the library and wrote down everything from the 180-page book into a three-ring binder. He explains: "My parents could have bought one for me, but they had done so much for me in golf that I didn't want to bother them in asking for a $200 book. ... By writing it down myself I was able to understand things on a whole comprehensive level."

I imagine that copying texts word-for-word was a more common learning strategy back when books were harder to come by, and perhaps it will become more common again as textbook prices rise and rise. There is certainly something to be said for it. Writing by hand takes time, and all the while our brains can absorb terms, make connections among concepts, and process the material into long-term memory. Zed Shaw argues for this as a great way to learn computer programming, implementing it as a pedagogical strategy in his "Learn <x> the Hard Way" series of books. (See Learn Python the Hard Way as an example.)

I don't think I've ever copied a textbook word-for-word, and I never copied computer programs from "Byte" magazine, but I do have similar experiences in note taking. I took elaborate notes all through high school, college, and grad school. In grad school, I usually rewrote all of my class notes -- by hand; no home PC -- as I reviewed them in the day or two after class. My clean, rewritten notes had other benefits, too. In a graduate graph algorithms course, they drew the attention of a classmate who became one of my best friends and were part of what attracted the attention of the course's professor, who asked me to consider joining his research group. (I was tempted... Graph algorithms was one of my favorite courses and research areas!)

I'm not sure many students these days benefit from this low-tech strategy. Most students who take detailed notes in my course seem to type rather than write, which, if what I've read is correct, has fewer cognitive advantages. But at least those students are engaging with the material consciously. So few students seem to take detailed notes at all these days, and that's a shame. Without notes, it is harder to review ideas, to remember what they found challenging or puzzling in the moment, and to rehearse what they encounter in class into their long-term memories. Then again, maybe I'm just having a "kids these days" moment.

Anyway, I applaud DeChambeau for saving his parents a few dollars and for the achievement of copying an entire physics text. He even realized, perhaps after the fact, that it was an excellent learning strategy.

(The above passage is from The 11 Most Unusual Things About Bryson DeChambeau. He sounds like an interesting guy.)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

August 31, 2018 3:06 PM

Reflection on a Friday

If you don't sit facing the window, you could be in any town.

I read that line this morning in Maybe the Cumberland Gap just swallows you whole, where it is a bittersweet observation of the similarities among so many dying towns across Appalachia. It's a really good read, mostly sad but a little hopeful, that applies beyond one region or even one country.

My mind is self-centered, though, and immediately reframed the sentence in a way that cast light on my good fortune.

I just downloaded a couple of papers on return-oriented programming so that I can begin working with an undergraduate on an ambitious research project. I have a homework assignment to grade sitting in my class folder, the first of the semester. This weekend, I'll begin to revise a couple of lectures for my compiler course, on NFAs and DFAs and scanning text. As always, there is a pile of department work to do on my desk and in my mind.

I live in Cedar Falls, Iowa, but if I don't sit facing the window, I could be in Ames or Iowa City, East Lansing or Durham, Boston or Berkeley. And I like the view out of my office window very much, thank you, so I don't even want to trade.

Heading into a three-day weekend, I realize again how fortunate I am. Do I put my good fortune to good enough use?


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Teaching and Learning

August 17, 2018 2:19 PM

LangSec and My Courses for the Year

As a way to get into the right frame of mind for the new semester and the next iteration of my compiler course, I read Michael Hicks's Software Security is a Programming Languages Issue this morning. Hicks incorporates software security into his courses on the principles of programming languages, with two lectures on security before having students study and use Rust. The article has links to lecture slides and supporting material, which makes it a post worth bookmarking.

I started thinking about adding LangSec to my course late in the spring semester, as I brainstormed topics that might spice the rest of the course up for both me and my students. However, time was short, so I stuck with a couple of standalone sessions on topics outside the main outline: optimization and concatenative languages. They worked fine but left me with an itch for something new.

I think I'll use the course Hicks and his colleagues teach as a starting point for figuring out how I might add to next spring's course. Students are interested in security, it's undoubtedly an essential issue for today's grads, and it is a great way to demonstrate how the design of programming languages is more than just the syntax of a loop or the lambda calculus.

Hicks's discussion of Rust also connects with my fall course. Two years ago, an advanced undergrad used Rust as the implementation language for his compiler. He didn't know the language but wanted to pair it with Haskell in his toolbox. The first few weeks of the project were a struggle as he wrestled with mastering ownership and figuring out some new programming patterns. Eventually he hit a nice groove and produced a working compiler with only a couple of small holes.
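For readers who haven't met Rust, the ownership model that student wrestled with is easy to illustrate. Here is a minimal sketch of the move-versus-borrow distinction that trips up most newcomers (my own toy example, not the student's compiler code):

```rust
fn consume(s: String) {
    // `consume` takes ownership of `s`; the String is dropped when it returns.
    println!("consumed: {}", s);
}

fn length(s: &str) -> usize {
    // Borrowing with `&` lets a function read a value without taking ownership.
    s.len()
}

fn main() {
    let tokens = String::from("tokens");

    let n = length(&tokens);          // borrow: `tokens` is still usable after
    println!("{} is {} bytes", tokens, n);

    consume(tokens);                  // move: ownership transfers to `consume`
    // println!("{}", tokens);        // would not compile: value was moved
}
```

The commented-out last line is exactly the kind of error the borrow checker reports over and over during those first few struggling weeks, until the new patterns start to feel natural.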

I was surprised how easy it was for me to install the tools I needed to compile, test, and explore his code. That experience increased my interest in learning the language, too. Adding it to my spring course would give me the last big push I need to buckle down.

This summer has been a blur of administrative stuff, expected and unexpected. The fall semester brings the respite of work I really enjoy: teaching compilers and writing some code. Hurray!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

August 09, 2018 1:03 PM

Gerald Weinberg Has Passed Away

I just read on the old Agile/XP mailing list that Jerry Weinberg passed away on Tuesday, August 7. The message hailed Weinberg as "one of the finest thinkers on computer software development". I, like many, was a big fan of his work.

My first encounter with Weinberg came in the mid-1990s when someone recommended The Psychology of Computer Programming to me. It was already over twenty years old, but it captivated me. It augmented years of experience in the trenches developing computer software with a deep understanding of psychology and anthropology and the firm but gentle mindset of a gifted teacher. I still refer back to it after all these years. Whenever I open it up to a random page, I learn something new again. If you've never read it, check it out now. You can buy the ebook -- along with many of Weinberg's books -- online through LeanPub.

After the first book, I was hooked. I never had the opportunity to attend one of Weinberg's workshops, but colleagues lavished them with praise. I should have made more of an effort to attend one. My memory is foggy now, but I do think I exchanged email messages with him once back in the late 1990s. I'll have to see if I can dig them up in one of my mail archives.

Fifteen years ago or so, I picked up a copy of Introduction to General Systems Thinking tossed out by a retiring colleague, and it became the first in a small collection of Weinberg books now on my shelf. As older colleagues retire in the coming years, I would be happy to salvage more titles and extend my collection. It won't be worth much on the open market, but perhaps I'll be able to share my love of Weinberg's work with students and younger colleagues. Books make great gifts, and more so a book by Gerald Weinberg.

Perhaps I'll share them with my non-CS friends and family, too. A couple of summers back, my wife saw a copy of Are Your Lights On?, a book Weinberg co-wrote with Donald Gause, sitting on the floor of my study at home. She read it and liked it a lot. "You get to read books like that for your work?" Yes.

I just read Weinberg's final blog entry earlier this week. He wasn't a prolific blogger, but he wrote a post every week or ten days, usually about consulting, managing, and career development. His final post touched on something that we professors experience at least occasionally: students sometimes solve the problems we set before them better than we expected, or better than we ourselves can do. He reminded people not to be defensive, even if it's hard, and to see the situation as an opportunity to learn:

When I was a little boy, my father challenged me to learn something new every day before allowing myself to go to bed. Learning new things all the time is perhaps the most important behavior in my life. It's certainly the most important behavior in our profession.

Weinberg was teaching us to the end, with grace and gratitude. I will miss him.

Oh, and one last personal note: I didn't know until after he passed that we shared the same birthday, a few years apart. A meaningless coincidence, of course, but it made me smile.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

July 31, 2018 4:23 PM

Software Projects, Potential Employers, and Memories

I spent a couple of hours this morning at a roundtable discussion listening to area tech employers talk about their work and their companies' needs. It was pretty enjoyable (well, except perhaps for the CEO who too frequently prefaced his remarks with "What the education system needs to understand is ..."). To a company, they all place a lot of value on the projects that job candidates have done. Their comments reminded me of an old MAA blog post in which a recent grad said:

During the fall of my junior year, I applied for an internship at Red Ventures, a data analytics and technology company just outside Charlotte. Throughout the rigorous interview process, it wasn't my GPA that stood out. I stood out among the applicants, in part, because I was able to discuss multiple projects I had taken ownership of and was extremely passionate about.

I encourage this mentality in my students, though I think "passionate about" is too strong a condition (not to mention cliché). Students should have a few projects that they are interested in, or proud of, or maybe just completed.

Most of the students taking my compiler course this fall won't be applying for a compiler job when they graduate, but they will have written a compiler as part of a team. They will have met a spec, collaborated on code, and delivered a working product. That is evidence of skill, to be sure, but also of hard work and persistence. It's a significant accomplishment.

The students who take our intelligent systems course or our real-time embedded systems course will be able to say the same thing. Some students will also be able to point to code they wrote for a club or personal projects. The key is to build things, care about them, and "deliver", whatever that means in the context of that particular project.

I made note of one new piece of advice to give our students, offered by a former student I mentioned in a blog post many years ago who is now head of a local development team for mobile game developer Jam City: Keep all the code you write. It can be a GitHub repo, as many people now recommend, but it doesn't have to be. A simple zip file organized by courses and projects can be enough. Such a portfolio can show prospective employers what you've done, how you've grown, and how much you care about the things you make. It can say a lot.

You might even want to keep all that code for Future You. I'm old enough that it was difficult to keep digital copies of all the code I wrote in college. I have a few programs from my undergrad days and a few more from grad school, which have migrated across storage media as time passed, but I am missing much of my class work as a young undergrad and all of the code I wrote in high school. I sometimes wish I could look back at some of that code...


Posted by Eugene Wallingford | Permalink | Categories: Personal, Software Development, Teaching and Learning

July 17, 2018 2:32 PM

Get Attached to Solving Problems for People

In Getting Critiqued, Adam Morse reflects on his evolution from art student to web designer, and how that changed his relationship with users and critiques. Artists create things in which they are, at some level, invested. Their process matters. As a result, critiques, however well-intentioned, feel personal. The work isn't about a user; it's about you. But...

... design is different. As a designer, I don't matter. My work doesn't matter. Nothing I make matters in the context of my process. It's all about the people you are building for. You're just trying to solve problems for people. Once you realize this, it's the most liberating thing.

Now, criticism isn't really about you as artist. It's about how well the design meets the needs of the user. With that in mind, the artist can put some distance between himself or herself and the work, and think about the users. That's probably what the users are paying for anyway.

I've never been a designer, but I was fortunate to learn how better to separate myself from my work by participating in the software patterns community and its writers' workshop format. From the workshops, I came to appreciate the value of providing positive and constructive feedback in a supportive way. But I also learned to let critiques from others be about my writing and not about me. The ethos of writers' workshops is one of shared commitment to growth and so creates as supportive a framework as possible in which to deliver suggestions. Now, even when I'm not in a conspicuously supportive environment, I am better able to detach myself from my work. It's never easy, but it's easier. This mindset can wear off a bit over time, so I find an occasional inoculation via PLoP or another supportive setting to be useful.

Morse offers another source of reminder: the designs we create for the web -- and for most software, too -- are not likely to last forever. So...

Don't fall in love with borders, gradients, a shade of blue, text on blurred photos, fancy animations, a certain typeface, flash, or music that autoplays. Just get attached to solving problems for people.

That last sentence is pretty good advice for programmers and designers alike. If we detach ourselves from our specific work output a bit and instead attach ourselves to solving problems for other people, we'll be able to handle their critiques more calmly. As a result, we are also likely to do better work.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

June 29, 2018 11:46 AM

Computer Science to the Second Degree

Some thoughts on studying computer science from Gian-Carlo Rota:

A large fraction of MIT undergraduates major in computer science or at least acquire extensive computer skills that are applicable in other fields. In their second year, they catch on to the fact that their required courses in computer science do not provide the whole story. Not because of deficiencies in the syllabus; quite the opposite. The undergraduate curriculum in computer science at MIT is probably the most progressive and advanced such curriculum anywhere. Rather, the students learn that side by side with required courses there is another, hidden curriculum consisting of new ideas just coming into use, new techniques that spread like wildfire, opening up unsuspected applications that will eventually be adopted into the official curriculum.

Keeping up with this hidden curriculum is what will enable a computer scientist to stay ahead in the field. Those who do not become computer scientists to the second degree risk turning into programmers who will only implement the ideas of others.

MIT is, of course, an exceptional school, but I think Rota's comments apply to computer science at most schools. So much learning of CS happens in the spaces between courses: in the lab, in the student lounge, at meetings of student clubs, at part-time jobs, .... That can sometimes be a challenge for students who don't have much curiosity, or who don't develop one as they are exposed to new topics.

As profs, we encourage students to be aware of all that is going on in computer science beyond the classroom and to take part in the ambient curriculum to the extent they are able. Students who become computer scientists only to the first degree can certainly find good jobs and professional success, but there are more opportunities open at the second degree. CS can also be a lot more fun there.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

June 03, 2018 10:50 AM

Two Thoughts on Teaching

... from my morning reading.

First, a sentence from Bryan Caplan, about one of his influences, philosopher Michael Huemer:

I think what's great about this book, and really all of Mike's work, is he always tries to start off with premises that make sense to people who don't already agree, and then try to get somewhere.

I value people who take the time to construct arguments in this way. It's surprisingly rare in academic discourse and public discourse. Teachers usually learn pretty quickly, though, that the most effective way to teach is to start where your students are: recognize the state of their knowledge and respect their current beliefs. I try to remind myself of this principle regularly during a course, or I'm likely to go off track.

Second, the closing exchange from a 1987 interview with Stanley Kubrick. Kubrick has been talking about how the critics' views of his films tend to evolve over time. The interviewer wrapped up the conversation with:

Well, you don't make it easy on viewers or critics. You create strong feelings, but you won't give us any easy answers.

That's because I don't have any easy answers.

That seems like a pretty good aspiration to have for teaching, that people can say it creates strong feelings but doesn't give any easy answers. Much of teaching is simpler than this, of course, especially in a field such as computer science. A closure is something that we can understand as it is, as is, say, an algorithm for parsing a stream of tokens. But after you learn a few concepts and start trying to build or understand a complex system, easy answers are much harder to come by. Even so, I do hope that students leave my courses with strong feelings about their craft. Those feelings may not match my own, and they'll surely still be evolving, but they will be a product of the student engaging with some big ideas and trying them out on challenging problems.

Maybe if I keep reading interesting articles on the exercise bike and making connections to my craft, I can get this computer science thing down better.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

June 01, 2018 3:05 PM

Prepare to Appreciate the Solution

This post isn't really about chess, though it might seem at first to be.

In The Reviled Art, chess grandmaster Stuart Rachels says that most grandmasters don't like composed chess problems because they are too difficult. It's easy to imagine why average chessplayers find problems too difficult: they aren't all that great at chess. But why grandmasters? Rachels contends that problems are hard for tournament players because they are counterintuitive: the solutions contradict the intuitions developed by players whose chess skill is developed and sharpened over the board.

Rachels then says:

Most problems stump me too, so I conceive of the time I spend looking at them as time spent preparing to appreciate their solutions -- not as time spent trying to solve them.

I love this attitude. If I view time spent banging my head against a puzzle or a hard problem as "trying to solve the problem", then not solving the problem might feel like failure. If I view that time as "preparing to appreciate the solution", then I can feel as if my time was well spent even if I don't solve it -- as long as I can appreciate the beauty or depth or originality of the solution.

This attitude is helpful outside of chess. Maybe I'm trying to solve a hard programming problem or trying to understand a challenging area of programming language theory that is new to me. I don't always solve the problem on my own or completely understand the new area without outside help or lots of time reading and thinking. But I often do appreciate the solution once I see it. All the time I spent working on the problem prepared me for that moment.

I often wish that more of my students would adopt Rachels's attitude. I frequently pose a problem for them to work on for a few minutes before we look at a solution, or several candidates, as a group. All too often some students look at the problem, think it's too difficult, and then just sit there waiting for me to show them the answer. This approach often results in them feeling two kinds of failure: they didn't solve the problem, and they don't even appreciate the solution when they see it. They haven't put in the work thinking about it that prepares their minds to really get the solution. Maybe I can do more to help students realize that the work is worth the effort even if they don't think they can solve the problem. Send me your suggestions!

Rachels's point about the counterintuitiveness of composed chess problems indicates another way in which trying to solve unorthodox problems can be worthwhile. Sometimes our intuitions let us down because they are too narrow, or even wrong. Trying to solve an unorthodox problem can help us broaden our thinking. My experience with chess compositions is that most of the ideas I need to solve them will not be helpful in over-the-board play; those kinds of positions simply don't occur in real games. But a few themes do apply, and practicing with them helps me learn how to play better in game situations. If nothing else, working on unorthodox problems reminds me to look outside the constraints of my intuitions sometimes when a problem in real life seems too hard.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

May 18, 2018 1:24 PM

Sharing Control

Sidney Lumet, in his book Making Movies, writes:

Arthur Miller's first, and I think, only novel, Focus, was, in my opinion, every bit as good as his first produced play, All My Sons. I once asked him why, if he was equally talented in both forms, he chose to write plays. Why would he give up the total control of the creative process that a novel provides to write instead for communal control, where a play would first go into the hands of a director and then pass into the hands of a cast, set designer, producer, and so forth? His answer was touching. He loved seeing what his work evoked in others. The result could contain revelations, feelings, and ideas that he never knew existed when he wrote the play. That's what he hoped for.

Writing software for people to use is something quite different from writing a play for audiences to watch, but this paragraph brought to mind experiences I had as a grad student and new faculty member. As a part of my doctoral work, I implemented expert system shells for a couple of problem-solving styles. Experts and grad students in domains such as chemical engineering, civil engineering, education, manufacturing, and tax accounting used these shells to build expert systems in their domains. I often found myself in the lab with these folks as they used my tools. I learned a lot by watching them and discussing with them the languages implemented in the tools. Their comments and ideas sometimes changed how I thought about the languages and tools, and I was able to fold some of these changes back into the systems.

Software design can be communal, too. This is, of course, one of the cornerstones of agile software development. Giving up control can help us write better software, but it can also be a source of the kind of pleasure I imagine Miller got from working to bring his plays to life on stage.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 09, 2018 4:02 PM

Middles

In an old blog post promoting his book on timing, Daniel Pink writes:

... Connie Gersick's research has shown that group projects rarely progress in a steady, linear way. Instead, at the beginning of a project, groups do very little. Then at a certain moment, they experience a sudden burst of activity and finally get going. When is that moment? The temporal midpoint. Give a team 34 days, they get started in earnest on day 17. Give a team 11 days, they really get going on day 6. In addition, there's other research showing that being behind at the midpoint -- in NBA games and in experimental settings -- can boost performance in the second half.

So we need to recognize midpoints and try to use them as a spark rather than a slump.

I wonder if this research suggests that we should favor shorter projects over longer ones. If most of us start going full force only at the middle of our projects, perhaps we should make the middle of our projects come sooner.

I'll admit that I have a fondness for short over long: short iterations over long iterations in software development, quarters over semesters in educational settings, short books (especially non-fiction) over long books. Shorter cycles seem to lead to higher productivity, because I spend more time working and less time ramping up and winding down. That seems to be true for my students and faculty colleagues, too.

In the paragraph that follows the quoted passage, Pink points inadvertently to another feature of short projects that I appreciate: more frequent beginnings and endings. He talks about the poignancy of endings, which adds meaning to the experience. On the other end of the cycle are beginnings, which create a sense of newness and energy. I always look forward to the beginning of a new semester or a new project for the energy it brings me.

Agile software developers know that, on top of these reasons, short projects offer another potent advantage: more opportunities to take stock of what we have learned and feed that learning back into what we do.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

May 08, 2018 4:45 PM

"Tell Me Something You Learned"

I spent big chunks of the last two days grading final projects and final exams for my Programming Languages course. Grading exams is important but not a lot of fun, so I released a little tension on Twitter as thoughts popped into my head:

I love final exam answers that appear to be randomly generated from a list of terms learned in the course.

Also fun: code that appears to be randomly generated from a list of functions learned in the course.

... and then a student surprises me with a creative, efficient solution. I like those answers just as much.

Answer of the day so far: "Local variables help us be more ambiguous solving many problems at once."

I don't know what that means, but I love it.

I hope folks did not think I was "punching down". I know how stressful final exams are for most students, and I know how challenging a comprehensive final exam in this course is. Students have to write code, explain ideas, and connect ideas to code. It's tough. The tweets were just me having a little fun during what is otherwise a slow, laborious process.

This exam ended differently than any of the previous finals in this course. The final question, worth 5% of the exam grade, was this:

Identify one topic from the course that is not covered by an exam question but about which you learned something valuable. Write two to three sentences about what you learned.

I used to include a more wide-ranging version of this question to end my AI final exams many years ago, in which students had a chance to write a summary of the important ideas they had learned in the course. I'm not sure what reminded me of the idea (perhaps one of James Tanton's essays), but this seemed like a nice way to give students a chance to boost their grades without a curve. I figured I would be generous and give them some latitude with their own experiences. My hope was that students could end the exam on a good note, feeling positive about something they learned and appreciated rather than solving yet another problem I fished out of fifteen weeks of material. There was a small risk -- what if a bunch of them panicked at the unusual question and couldn't think of anything to say? But that risk exists for almost any question I ask.

The question seems to have gone over quite well. Some students talked about a specific topic from the course, among them variable arity functions, currying, higher-order procedures, and syntactic abstraction. That's mostly what I had in mind when I wrote the question, even if some of their answers were covered in part somewhere else on the exam. Others answered more generally than I expected. A couple talked about how the course gave them a deeper appreciation for data abstraction; a couple of others wrote about the experience of writing an interpreter and having to live with their design decisions as the code grew over three weeks. All but one student wrote an answer substantive and reflective enough that I didn't even have to think about how many points to award. I was happy to read them.

I really shouldn't have been surprised. Most students care more about their learning, and get more out of a class, than exam answers and classroom participation might indicate. They face a lot of pressures, and as a result have a limited amount of time and energy to express about any one course day in and day out. But making software matters to most of them; sometimes even big ideas matter. This question let them express some of what made the class work for them.

This question had an unexpected effect: I ended the exam on a good note, too. I put down my grading pen feeling good about the course, knowing that each student learned something that stood out to them, that they appreciated enough to write a few sentences about. I got a small glimpse of how they changed as a result of the course. I put the question on the exam for the students' sake, but it affected me as much as it affected them.

That's not a bad way to end the semester.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

April 17, 2018 4:25 PM

The Tension Among Motivation, Real Problems, and Hard Work

Christiaan Huygens's first pendulum clock

In today's edition of "Sounds great, but...", I point you to Learn calculus like Huygens, a blog entry that relates some interesting history about the relationship between Gottfried Leibniz and Christiaan Huygens, "the greatest mathematician in the generation before Newton and Leibniz". It's a cool story: a genius of the old generation learns a paradigm-shifting new discipline via correspondence with a genius of the new generation who is in the process of mapping the discipline.

The "Sounds great, but..." part comes near the end of the article, when the author extrapolates from Huygens's attitude to what is wrong with math education these days. It seems that Huygens wanted to see connections between the crazy new operations he was learning, including second derivatives, and the real world. Without seeing these connections, he wasn't as motivated to put in the effort to learn them.

The author then asserts:

The point is not that mathematics needs to be applied. It is that it needs to be motivated. We don't study nature because we refuse to admit value in abstract mathematics. We study nature because she has repeatedly proven herself to have excellent mathematical taste, which is more than can be said for the run-of-the-mill mathematicians who have to invent technical pseudo-problems because they can't solve any real ones.

Yikes, that got ugly fast. And it gets uglier, with the author eventually worrying that we alienate present-day Huygenses with a mass of boring problems that are disconnected from reality.

I actually love the heart of that paragraph: We don't study nature because we refuse to admit value in abstract mathematics. We study nature because she has repeatedly proven herself to have excellent mathematical taste.... This is a reasonable claim, and almost poetic. But the idea that pseudo-problems invented by run-of-the-mill mathematicians are the reason students today aren't motivated to learn calculus or other advanced mathematics seems like a massive overreach.

I'm sympathetic to the author's position. I watched my daughters slog through AP Calculus, solving many abstract problems and many applied problems that had only a thin veneer of reality wrapped around them. As someone who enjoyed puzzles for puzzles' sake, I had enjoyed all of my calculus courses, but it seemed as if my daughters and many of their classmates never felt the sort of motivation that Huygens craved and Leibniz delivered.

I also see many computer science students slog through courses in which they learn to program, apply computational theory to problems, and study the intricate workings of software and hardware systems. Abstract problems are a fine way to learn how to program, but they don't always motivate students to put in a lot of work on challenging material. Real problems can be too unruly for many settings, though, so simplified, abstract problems are common.

But it's not quite as easy to fix this problem by saying "learn calculus like Huygens: solve real problems!". There are a number of impediments to this being a straightforward solution in practice.

One is the need for domain knowledge. Few, if any, of the students sitting in today's calculus classes have much in common with Huygens, a brilliant natural scientist and inventor who had spent his life investigating hard problems. He brought a wealth of knowledge to his study of mathematics. I'm guessing that Leibniz didn't have to search long to find applications with which Huygens was already familiar and whose solutions he cared about.

Maybe in the old days all math students were learning a lot of science at the same time as they learned math, but that is not always so now. In order to motivate students with real problems, you need real problems from many domains, in hopes of hitting all students' backgrounds and interests. Even then, you may not cover them all. And, even if you do, you need lots of problems for them to practice on.

I think about these problems every day from the perspective of a computer science prof, and I think there are a lot of parallels between motivating math students and motivating CS students. How do I give my students problems from domains they both know something about and are curious enough to learn more about? How do I do that in a room with thirty-five students with as many different backgrounds? How do I do that in the amount of time I have to develop and extend my course?

Switching to a computer science perspective brings to mind a second impediment to the "solve real problems" mantra. CS education research offers some evidence that using context-laden problems, even from familiar contexts, can make it more difficult for students to solve programming problems. The authors of the linked paper say:

Our results suggest that any advantage conveyed by a familiar context is dominated by other factors, such as the complexity of terminology used in the description, the length of the problem description, and the availability of examples. This suggests that educators should focus on simplicity of language and the development of examples, rather than seeking contexts that may aid in understanding problems.

Using familiar problems to learn new techniques may help motivate students initially, but that may come at other costs. Complexity and confusion can be demotivating.

So, "learn calculus like Huygens" sounds great, but it's not quite so easy to implement in practice. After many years designing and teaching courses, I have a lot of sympathy for the writers of calculus and intro programming textbooks. I also don't think it gets much easier as students advance through the curriculum. Some students are motivated no matter what the instructor does; others need help. The tension between motivation and the hard work needed to master new techniques is always there. Claims that the tension is easy to resolve are usually too glib to be helpful.

The Huygens-Leibniz tale really is a cool story, though. You might enjoy it.

(The image above is a sketch of Christiaan Huygens's first pendulum clock, from 1657. Source: Wikipedia.)


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

April 06, 2018 3:19 PM

Maps and Abstractions

I've been reading my way through Frank Chimero's talks online and ran across a great bit on maps and interaction design in What Screens Want. One of the paragraphs made me think about the abstractions that show up in CS courses:

When I realized that, a little light went off in my head: a map's biases do service to one need, but distort everything else. Meaning, they misinform and confuse those with different needs.

CS courses are full of abstractions and models of complex systems. We use examples, often simplified, to expose or emphasize a single facet of a system, as a way to help students cut through the complexity. For example, compilers and full-strength interpreters are complicated programs, so we start with simple interpreters operating over simple languages. Students get their feet wet without drowning in detail.

In the service of trying not to overwhelm students, though, we run the risk of distorting how they think about the parts we left out. Worse, we sometimes distort even their thinking about the part we're focusing on, because they don't see its connections to the more complete picture. There is an art to identifying abstractions, creating examples, and sequencing instruction. Done well, we can minimize the distortions and help students come to understand the whole with small steps and incremental increases in size and complexity.
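
To make the idea concrete, here is the sort of toy interpreter a course might start with. This is a minimal sketch of my own, not from any particular course: a tiny prefix-notation language of numbers, addition, and multiplication, represented as nested Python lists.

```python
# A minimal interpreter for a tiny expression language.
# Expressions are numbers or lists of the form [op, arg1, arg2].
# Deliberately simplified: no variables, no errors beyond unknown ops.

def evaluate(expr):
    if isinstance(expr, (int, float)):   # a number evaluates to itself
        return expr
    op, left, right = expr               # otherwise: [op, arg1, arg2]
    if op == "+":
        return evaluate(left) + evaluate(right)
    if op == "*":
        return evaluate(left) * evaluate(right)
    raise ValueError(f"unknown operator: {op}")

print(evaluate(["+", 1, ["*", 2, 3]]))   # 7
```

A dozen lines expose the essential shape of every interpreter -- dispatch on the form of the expression, recur on the parts -- while leaving out everything that makes a real one hard.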

At least that's what I think on my good days. There are days and even entire semesters when things don't seem to progress as smoothly as I hope or as smoothly as past experience has led me to expect. Those days, I feel like I'm doing violence to an idea when I create an abstraction or adopt a simplifying assumption. When students don't seem to be grokking the terrain, we change the map: we try different problems or work through more examples. It's hard to find the balance sometimes between adding enough to help and not adding so much as to overwhelm.

The best teachers I've encountered know how to approach this challenge. More importantly, they seem to enjoy the challenge. I'm guessing that teachers who don't enjoy it must be frustrated a lot. I enjoy it, and even so there are times when this challenge frustrates me.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 23, 2018 3:45 PM

A New Way to Design Programming Languages?

Greg Wilson wrote a short blog post recently about why JavaScript isn't suitable for teaching Data Carpentry-style workshops. In closing, he suggests an unusual way to design programming languages:

What I do know is that the world would be a better place if language designers adopted tutorial-driven design: write the lessons that introduce newcomers to the language, then implement the features those tutorials require.

That's a different sort of TDD than I'm used to...

This is the sort of idea that causes me to do a double or triple take. At first, it has an appealing ring to it, when considering how difficult it is to teach most programming languages to novices. Then I think a bit and decide that it sounds crazy because, really, are we going to hamstring our languages by focusing on the struggles of beginners? But then it sits in my mind for a while and I start to wonder if we couldn't grow a decent language this way. It's almost like using the old TDD to implement the new TDD.

The PLT Scheme folks have designed a set of teaching languages that enable beginners to grow into an industrial-strength language. That design project seems to have worked from the outside in, with a target language in mind while designing the teaching languages. Maybe Wilson's idea of starting at the beginning isn't so crazy after all.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

March 22, 2018 4:05 PM

Finally, Some Good News

It's been a tough semester. On top of the usual business, there have been a couple of extra stresses. First, I've been preparing for the departure of a very good friend, who is leaving the university and the area for family and personal reasons. Second, a good friend and department colleague took an unexpected leave that turned into a resignation. Both departures cast a distant pall over my workdays. This week, though, has offered a few positive notes to offset the sadness.

Everyone seems to complain about email these days, and I certainly have been receiving and sending more than usual this semester, as our students and I adjust to the change in our faculty. But sometimes an email message makes my day better. Exhibit 1, a message from a student dealing with a specific issue:

Thank you for your quick and helpful response!
Things don't look so complicated or hopeless now.

Exhibit 2, a message from a student who has been taming the bureaucracy that arises whenever two university systems collide:

I would like to thank you dearly for your prompt and thorough responses to my numerous emails. Every time I come to you with a question, I feel as though I am receiving the amount of respect and attention that I wish to be given.

Compliments like these make it a lot easier to muster the energy to deal with the next batch of email coming in.

There has also been good news on the student front. I received email from a rep at a company in Madison, Wisconsin, where one of our alumni works. They are looking for developers to work in a functional programming environment and are having a hard time filling the positions locally, despite the presence of a large and excellent university in town. Our alum is doing well enough that the company would like to hire more from our department, which is doing a pretty good job, too.

Finally, today I spoke in person with two students who had great news about their futures. One has accepted an offer to join the Northwestern U. doctoral program and work in the lab of Kenneth Forbus. I studied Forbus's work on qualitative reasoning and analogical reasoning as a part of my own Ph.D. work and learned a lot from him. This is a fantastic opportunity. The other student has accepted an internship to work at PlayStation this summer, working on the team that develops the compilers for its game engines. He told me, "I talked a lot about the project I did in your course last semester during my interview, and I assume that's part of the reason I got an offer." I have to admit, that made me smile.

I had both of these students in my intro class a few years back. They would have succeeded no matter who taught their intro course, or the compiler course, for that matter, so I can't take any credit for their success. But they are outstanding young men, and I have had the pleasure of getting to know them over the last four years. News of the next steps in their careers makes me feel good, too.

I think I have enough energy to make it to the end of the semester now.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Teaching and Learning

March 12, 2018 3:43 PM

Technology is a Place Where We Live

Yesterday morning I read The Good Room, a talk Frank Chimero gave last month. Early on in the talk, Chimero says:

Let me start by stating something obvious: in the last decade, technology has transformed from a tool that we use to a place where we live.

This sentence jumped off the page both for the content of the assertion and for the decade time frame with which he bounds it. In the fall of 2003, I taught a capstone course for non-majors that is part of my university's liberal arts core. The course, titled "Environment, Technology, and Society", brings students from all majors on campus together in a course near the end of their studies, to apply their general education and various disciplinary expertises to problems of some currency in the world. As you might guess from the title, the course focuses on problems at the intersection of the natural environment, technology, and people.

My offering of the course put a twist on the usual course content. We focused on the man-made environment we all live in, which even by 2003 had begun to include spaces carved out on the internet and web. The only textbook for the course was Donald Norman's The Design of Everyday Things, which I think every university graduate should have read. The topics for the course, though, had a decided IT flavor: the effect of the Internet on everyday life, e-commerce, spam, intellectual property, software warranties, sociable robots, AI in law and medicine, privacy, and free software. We closed with a discussion of what an educated citizen of the 21st century ought to know about the online world in which they would live in order to prosper as individuals and as a society.

The change in topic didn't excite everyone. A few came to the course looking forward to a comfortable "save the environment" vibe and were resistant to considering technology they didn't understand. But most were taking the course with no intellectual investment at all, as a required general education course they didn't care about and just needed to check off the list. In a strange way, their resignation enabled them to engage with the new ideas and actually ask some interesting questions about their future.

Looking back now after fifteen years, the course design looks pretty good. I should probably offer to teach it again, updated appropriately, of course, and see where young people of 2018 see themselves in the technological world. As Chimero argues in his talk, we need to do a better job building the places we want to live in -- and that we want our children to live in. Privacy, online peer pressure, and bullying all turned out differently than I expected in 2003. Our young people are worse off for those differences, though I think most have learned ways to live online in spite of the bad neighborhoods. Maybe they can help us build better places to live.

Chimero's talk is educational, entertaining, and quotable throughout. I tweeted one quote: "How does a city wish to be? Look to the library. A library is the gift a city gives to itself." There were many other lines I marked for myself, including:

  • Penn Station "resembles what Kafka would write about if he had the chance to see a derelict shopping mall." (I'm a big Kafka fan.)
  • "The wrong roads are being paved in an increasingly automated culture that values ease."
Check the talk out for yourself.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

February 12, 2018 4:03 PM

How Do We Choose The Programming Languages We Love?

In Material as Metaphor, the artist Anni Albers talks about how she came to choose media in which she worked:

How do we choose our specific material, our means of communication? "Accidentally". Something speaks to us, a sound, a touch, hardness or softness, it catches us and asks us to be formed. We are finding our language, and as we go along we learn to obey their rules and their limits. We have to obey, and adjust to those demands. Ideas flow from it to us and though we feel to be the creator we are involved in a dialogue with our medium. The more subtly we are tuned to our medium, the more inventive our actions will become. Not listening to it ends in failure.

This expresses much the way I feel about different programming languages and styles. I can like them all, and sometimes do! I go through phases when one style speaks to me more than another, or when one language seems to be in sync with how I am thinking. When that happens, I find myself wanting to learn its rules, to conform so that I can reach a point where I feel creative enough to solve interesting problems in the language.

If I find myself not liking a language, it's usually because I'm not listening to it; I'm fighting back. When I first tried to learn Haskell, I refused to bend to its style of functional programming. I had worked hard to grok FP in Scheme, and I was so proud of my hard-won understanding that I wanted to impose it on the new language. Eventually, I retreated for a while, returned more humbly, and finally came to appreciate Haskell, if not master it deeply.

My experience with Smalltalk went differently. One summer I listened to what it was telling me, slowly and patiently, throwing code away and starting over several times on an application I was trying to build. This didn't feel like a struggle so much as a several-month tutoring session. By the end, I felt ideas flowing through me. I think that's the kind of dialogue Albers is referring to.

If I want to master a new programming language, I have to be willing to obey its limits and to learn how to use its strengths as leverage. This can be a conscious choice. It's frustrating when that doesn't seem to be enough.

I wish I could always will myself into the right frame of mind to learn a new way of thinking. Albers reminds us that often a language speaks to us first. Sometimes, I just have to walk away and wait until the time is right.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Software Development, Teaching and Learning

January 17, 2018 3:51 PM

Footnotes

While discussing the effective use of discontinuities in film, both motion within a context versus change of context, Walter Murch tells a story about... bees:

A beehive can apparently be moved two inches each night without disorienting the bees the next morning. Surprisingly, if it is moved two miles, the bees also have no problem: They are forced by the total displacement of their environment to re-orient their sense of direction, which they can do easily enough. But if the hive is moved two yards, the bees become fatally confused. The environment does not seem different to them, so they do not re-orient themselves, and as a result, they will not recognize their own hive when they return from foraging, hovering instead in the empty space where the hive used to be, while the hive itself sits just two yards away.

This is fascinating, as well as being a really cool analogy for the choices movie editors face when telling a story on film. Either change so little that viewers recognize the motion as natural, or change enough that they re-orient their perspective. Don't stop in the middle.

What is even cooler to me is that this story appears in a footnote.

One of the things I've been loving about In the Blink of an Eye is how Murch uses footnotes to teach. In many books, footnotes contain minutia or references to literature I'll never read, so I skip them. But Murch uses them to tell stories that elaborate on or deepen his main point but which would, if included in the text, interrupt the flow of the story he has constructed. They add to the narrative without being essential.

I've already learned a couple of cool things from his footnotes, and I'm not even a quarter of the way into the book. (I've been taking time to mull over what I read...) Another example: while discussing the value of discontinuity as a story-telling device, Murch adds a footnote that connects this practice to the visual discontinuity found in ancient Egyptian painting. I never knew before why the perspective in those drawings was so unusual. Now I do!

My fondness for Murch's footnotes may stem from something more than their informative nature. When writing up lecture notes for my students, I like to include asides, digressions, and links to optional readings that expand on the main arc of the story. I'd like for them to realize that what they are learning is part of a world bigger than our course, that the ideas are often deeper and have wider implications than they might realize. And sometimes I just like to entertain with a connection. Not all students care about this material, but for the ones who do, I hope they get something out of them. Students who don't care can do what I do in other books: skip 'em.

This book gives me a higher goal to shoot for when including such asides in my notes: elaborate without being essential; entice without disrupting.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

January 15, 2018 9:22 AM

The Cut

Walter Murch, in In the Blink of an Eye:

A vast amount of preparation, really, to arrive at the innocuously brief moment of decisive act: the cut -- the moment of transition from one shot to the next -- something that, appropriately enough, should look almost self-evidently simple and effortless, if it is even noticed at all.

This can apply to software development, I think, but I haven't thought about that yet. I read the passage at the beginning of a new semester, when my mind is filled with another offering of my programming languages course. So I've been thinking about how this quote works in the context of my course: the overall structure of the course as well as the structure of individual class sessions. The course consists of three major units, connected by a thread, with the third having its own substructure. Each session is a more cohesive story with its own microstructure: the flow of a lecture, the sequence of exercises students do, the sequence of examples that students see. Moments of transition are everywhere, at multiple scales.

When a session goes badly, or not as well as I'd hoped, I am quite aware of the cuts that did not work. They seem awkward, or ill-motivated, or jarring. The students notice some of these, too, but they don't always let me know of their disorientation right away. That's one benefit of building frequent exercises and review questions into a class session: at least I have a chance of finding out sooner when something isn't working the way I'd planned.

Reading Murch has given me a new vocabulary for thinking about transitions visually. In particular, I've been thinking about two basic types of transition:

  • one that signals motion within a context
  • one that signals a change of context
These are a natural part of any writer's job, but I've found it helpful to think about them more explicitly as I worked on class this week.

Charlie Brown's teacher drones on

For example, I've been trying to think more often about how one kind of cut can be mistaken for the other and how that might affect students. What happens when what I intend as a small move within a context seems so disconnected for students that they think I've changed contexts? What happens when what I intend as a big shift to a new topic sounds to students like the WAH-WAH-WAH of Charlie Brown's teacher? I can always erect massive signposts to signal transitions of various kinds, but that can be just as jarring to readers or listeners as unannounced cuts. It is also inelegant, because it fails to respect their ability to construct their own understanding of what they are learning.

Trying on Murch's perspective has not been a panacea. The first session of the course, the one with a new opening story, went well, though it needs a few more iterations to become good. My second session went less well. I tried to rearrange a session that already worked well, and my thinking about transitions was too self-conscious. The result was a synthesis of two threads that didn't quite work, leaving me feeling a bit jumbled myself by connections that were incomplete and jumps that seemed abrupt. Fortunately, I think I managed to recognize this soon enough in class that I was able to tell a more coherent story than my outline prepared me to tell. The outline needs a lot more work.

In the longer run, though, thinking about transitions more carefully should help me do a better job leading students in a fruitful direction. I'll keep at it.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

January 14, 2018 9:24 AM

Acceleration

This was posted on the Racket mailing list recently:

"The Little Schemer" starts slow for people who have programmed before, but seeing that I am only half-way through and already gained some interesting knowledge from it, one should not underestimate the acceleration in this book.

The Little Schemer is the only textbook I assign in my Programming Languages course. These students usually have only a little experience: often three semesters, two in Python and one in Java; sometimes just the two in Python. A few of the students who work in the way the authors intend have an A-ha! experience while reading it. Or maybe they are just lucky... Other students have only a WTF? experience.

Still, I assign the book, with hope. It's relatively inexpensive and so worth a chance that a few students can use it to grok recursion, along with a way of thinking about writing functions that they haven't seen in courses or textbooks before. The book accelerates from the most basic ideas of programming to "interesting" knowledge in a relatively short number of pages. Students who buy in to the premise, hang on for the ride, and practice the ideas in their own code soon find that they, too, have accelerated as programmers.
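
To give a flavor of the book's style -- this is my own sketch in Python, not from the book, which works in Scheme -- everything is built from the same few recursive patterns: ask about the empty list, ask about the first element, recur on the rest.

```python
# Two classic Little Schemer-style functions, transliterated to Python.
# An "atom" here is any non-list value.

def is_lat(lst):
    """Is lst a list of atoms (no nested lists)?"""
    if not lst:                      # the empty list is a lat
        return True
    if isinstance(lst[0], list):     # a nested list disqualifies it
        return False
    return is_lat(lst[1:])           # recur on the rest of the list

def member(a, lat):
    """Does atom a appear in the list of atoms lat?"""
    if not lat:
        return False
    return lat[0] == a or member(a, lat[1:])

print(is_lat(["bacon", "and", "eggs"]))          # True
print(member("eggs", ["bacon", "and", "eggs"]))  # True
```

Each function writes itself once you internalize the pattern, which is exactly the acceleration the reader on the mailing list noticed.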


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 07, 2018 10:25 AM

95:1

This morning, I read the first few pages of In the Blink of an Eye, an essay on film editing by Walter Murch. He starts by talking about his work on Apocalypse Now, which took well over a year in large part because of the massive amount of film Coppola shot: 1,250,000 linear feet, enough for 230 hours of running time. The movie ended up being about two hours and twenty-five minutes, so Murch and his colleagues culled 95 minutes of footage for every minute that made it into the final product. A more typical project, Murch says, has a ratio of 20:1.

Even at 20:1, Murch's story puts into clearer light the amount of raw material I create when designing a typical session for one of my courses. The typical session mixes exposition, examples, student exercises, and (less than I'd like to admit) discussion. Now, whenever I feel like a session comes up short of my goal, I will think back on Murch's 20:1 ratio and realize how much harder I might work to produce enough material to assemble a good session. If I want one of my sessions to be an Apocalypse Now, maybe I'll need to shoot higher.

This motivation comes at a favorable time. Yesterday I had a burst of what felt like inspiration for a new first day to my Programming Languages course. At the end of the brainstorm came what is now the working version of my opening line in the course: "In the beginning, there was assembly language." Let's see if I have enough inspiration -- and make enough time -- to turn the idea into what I hope it can be: a session that fuels my students' imagination for a semester's journey through Racket, functional programming, and examining language ideas with interpreters.

I do hope, though, that the journey itself does not bring to mind Apocalypse Now.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

January 05, 2018 1:27 PM

Change of Terms

I received a Change of Terms message yesterday from one of my mutual fund companies, which included this unexpected note:

Direction to buy or sell Vanguard funds must be placed online or verbally and may no longer be submitted in writing.

I haven't mailed Vanguard or any other financial services company a paper form or a paper check in years, but still. When I was growing up, I never would have imagined that I would see the day when you could not mail a letter to a company in order to conduct financial business. Busy, busy, busy.

In the academic world, this is the time for another type of change of terms, as we prepare to launch our spring semester on Monday. The temperatures in my part of the country the last two weeks make the name of the semester a cruel joke, but the hope of spring lives.

For me, the transition is from my compiler course to my programming languages course. Compilers went as well this fall as it has gone in a long time; I really wish I had blogged about it more. I can only hope that Programming Languages goes as well. I've been reading about some ways I might improve the course pedagogically. That will require me to change some old habits, but trying to do so is part of the fun of teaching. I intend to blog about my experiences with the new ideas. As I said, the hope of spring lives.

In any case, I get to write Racket code all semester, so at least I have that going for me, which is nice.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

December 28, 2017 8:46 AM

You Have to Learn That It's All Beautiful

In this interview with Adam Grant, Walter Isaacson talks about some of the things he learned while writing biographies of Benjamin Franklin, Albert Einstein, Steve Jobs, and Leonardo da Vinci. A common theme is that all four were curious and interested in a wide range of topics. Toward the end of the interview, Isaacson says:

We of humanities backgrounds are always doing the lecture, like, "We need to put the 'A' in 'STEM', and you've got to learn the arts and the humanities." And you get big applause when you talk about the importance of that.

But we also have to meet halfway and learn the beauty of math. Because people tell me, "I can't believe somebody doesn't know the difference between Mozart and Haydn, or the difference between Lear and Macbeth." And I say, "Yeah, but do you know the difference between a resistor and a transistor? Do you know the difference between an integral and a differential equation?" They go, "Oh no, I don't do math, I don't do science." I say, "Yeah, but you know what, an integral equation is just as beautiful as a brush stroke on the Mona Lisa." You've got to learn that they're all beautiful.

Appreciating that beauty made Leonardo a better artist and Jobs a better technologist. I would like for the students who graduate from our CS program to know some literature, history, and art and appreciate their beauty. I'd also like for the students who graduate from our university with degrees in literature, history, art, and especially education to have some knowledge of calculus, the Turing machine, and recombinant DNA, and appreciate their beauty.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

December 22, 2017 12:50 PM

The Power of an App or a Sideways Glance

This morning I read an interview with Steven Soderbergh, which is mostly about his latest project, the app/series, "Mosaic". A few weeks ago, "Mosaic" was released as an app in advance of its debut on HBO. Actually, that's not quite right. It is an app, which will also be released later in series form, and then only because Soderbergh needed money to finish the app version of the film. In several places, he talks about being a director in ways that made me think of being a university professor these days.

One was in terms of technology. There are moments in the "Mosaic" app when it offers the viewer an opportunity to digress and read a document, to flash back, or to flash forward. The interviewer is intrigued by the notion that a filmmaker would be willing to distract the viewer in this way, sending texts and pushing notifications that might disrupt the experience. Soderbergh responded:

My attitude was, "Look, we've gotten used to watching TV now with three scrolling lines of information at the bottom of the screen all the time. People do not view that stuff the same way that they would have viewed it 20 years ago."

... To not acknowledge that when you're watching something on your phone or iPad that there are other things going on around you is to be in denial. My attitude is, "Well, if they're going to be distracted by something, let it be me!"

I'm beginning to wonder if this wouldn't be a healthier attitude for us to have as university instructors. Maybe I should create an app that is my course and let students experience the material using what is, for them, a native mode of interaction? Eventually they'll have to sit down and do the hard work of solving problems and writing code, but they could come to that work in a different way. There is a lot of value in our traditional modes of teaching and learning, but maybe flowing into our students' daily experience with push notifications and teaser posts would reach them in a different way.

Alas, I doubt that HBO will front me any money to make my app, so I'll have to seek other sources of financing.

On a more personal plane, I was struck by something that Soderbergh said about the power directors have over the people they work with:

What's also interesting, given the environment we're in right now, is that I typically spend the last quarter of whatever talk I'm giving [to future filmmakers] discussing personal character, how to behave, and why there should be some accepted standard of behavior when you interact with people and how you treat people. Particularly when you're in a position like that of a director, which is an incredibly powerful situation to be in, pregnant with all kinds of opportunity to be abusive.

... if you're in a position of power, you can look at somebody sideways and destroy their week, you know? You need to be sensitive to the kind of power that a director has on a set.

It took me years as a teacher to realize the effect that an offhand remark could have on a student. I could be lecturing in class, or chatting with someone in my office, and say something about the course, or about how I work, or about how students work or think. This sentence, a small part of a larger story, might not mean all that much to me, and yet I would learn later that it affected how the student felt about himself or herself for a week, or for the rest of the course, or even longer. This effect can be positive or negative, of course, depending on the nature of the remark. As Soderbergh says, it's worth thinking about how you behave when you interact with people, especially when you're in a position of relative authority, in particular as a teacher working with young people.

This applies to our time as a parent and a spouse, too. Some of my most regrettable memories over the years are of moments in which I made an offhand remark, thoughtlessly careless, that cut deep into the heart of my wife or one of my daughters. Years later, they rarely remember the moment or the remark, but I'm sad for the pain I caused in that moment and for any lingering effect it may have. The memory is hard for me to shake. I have to hope that the good things I have said and done during our time together meant as much. I can also try to do better now. The same holds true for my time working with students.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

December 20, 2017 3:51 PM

== false

While grading one last project for the semester, I ran across this gem:

    if checkInputParameters() == False:
       [...]

This code was not written by a beginning programmer, but a senior who likely will graduate in May. Sigh.
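For the record, the idiomatic form simply tests the boolean directly. A minimal sketch, using a hypothetical stand-in for the student's validation routine:

```python
def check_input_parameters():
    # hypothetical stand-in for the student's validation routine
    return False

# the anti-pattern: comparing a boolean value to False
if check_input_parameters() == False:
    print("invalid input")

# the idiomatic form: let the boolean speak for itself
if not check_input_parameters():
    print("invalid input")
```

The two conditions are equivalent; the second just stops pretending that a boolean needs help being a boolean.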

I inherited this course late in the semester, so it would be nice if I could absolve myself of responsibility for allowing a student to develop such a bad habit. But this isn't the first time I've seen this programming pattern. I've seen it in my courses and in other faculty's courses, in first-year courses and fourth-year courses. In fact, I was so certain that I blogged about the pattern before that I spent several minutes trolling the archive. Alas, I never found the entry.

Don't let anyone tell you that Twitter can't be helpful. Within minutes of tweeting my dismay, two colleagues suggested new strategies for dealing with this anti-pattern.

Sylvain Leroux invoked Star Trek:

James T. Kirk "Kobayashi Maru"-style solution: Hack CPython to properly parse that new syntax:

unless checkInputParameters(): ...

Maybe I'll turn this into a programming exercise for the next student in my compiler class who wanders down the path of temptation. Deep in my heart, though, I know that enterprising programmers will use the new type of statement to write this:

    unless checkInputParameters() == True:
       [...]

Agile Hulk suggested a fix:

    if (checkInputParameters() == False) == True:

This might actually be a productive pedagogical tack to follow: light-hearted and spot on, highlighting the anti-pattern's flaw by applying it to its own output. ("Anti-pattern eats its own dog food.") With any luck, some students will be enlightened, Zen-like. Others will furrow their brow and say "What?"

... and even if we look past the boolean crime in our featured code snippet, we really ought to take on that function name. 'Tis the season, though, and the semester is over. I'll save that rant for another time.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

November 14, 2017 3:52 PM

Thinking about Next Semester's Course Already

We are deep into fall semester. The three teams in my compilers course are making steady progress toward a working compiler, and I'm getting so excited by that prospect that I've written a few new programs for them to compile. The latest two work with Kaprekar numbers.
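For the curious: a Kaprekar number n is one whose square's digits can be split into two parts that sum to n, e.g., 45 × 45 = 2025 and 20 + 25 = 45. A quick sketch of the test in Python, my own illustration rather than the course's code:

```python
def is_kaprekar(n):
    # split the digits of n*n at every position; n is a Kaprekar
    # number if some split's two parts sum back to n
    square = str(n * n)
    for i in range(len(square)):
        left = int(square[:i]) if i > 0 else 0
        right = int(square[i:])
        if right > 0 and left + right == n:  # right part must be positive
            return True
    return False
```

By this definition, the Kaprekar numbers below 100 are 1, 9, 45, 55, and 99.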

Yet I've also found myself thinking already quite a bit about my spring programming languages course.

I have not made significant changes to this course (which introduces students to Racket, functional programming, recursive programming over algebraic data types, and a few principles of programming languages) in several years. I don't know if I'm headed for a major re-design yet, but I do know that several new ideas are commingling in my mind and encouraging me to think about improvements to the course.

The first trigger was reading How to Switch from the Imperative Mindset, which approaches learning functional style explicitly as a matter of establishing new habits. My students come to the course having learned an imperative style in Python, perhaps with some OO in Java thrown in. Most of them are not yet 100% secure in their programming skills, and the thought of learning a new style is daunting. They don't come to the course asking for a new set of habits.

One way to develop a new set of habits is to recognize the cues that trigger an old habit, learn a new response, and then rehearse that response until it becomes a new habit. The How to Switch... post echoes a style that I have found effective when teaching OOP to programmers with experience in a procedural language, and I'm thinking about how to re-tool part of my course to use this style more explicitly when teaching FP.

My idea right now is something like this. Start with simple examples from the students' experience processing arrays and lists of data. Then work through solutions in sequence, such as:

  1. first, use a loop of the sort with which they are familiar, the body of which acts on each item in the collection
  2. then, move the action into a function, which the loop calls for each item in the collection
  3. finally, map the function over the items in the collection

We can also do this with built-in functions, perhaps to start, which eliminates the need to write a user-defined function.

In effect, this refactors code that the students are already comfortable with toward common functional patterns. I can use the same sequence of steps for mapping, folding, and reducing, which will reinforce the thinking habits students need to begin writing FP code from the original cues. I'm only just beginning to think about this approach, but I'm quite comfortable using a "refactoring to patterns" style in class.
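In Python terms, the three-step sequence might look something like this, using a toy example of my own (doubling the numbers in a list):

```python
# 1. A familiar loop whose body acts on each item in the collection.
def double_all_v1(items):
    result = []
    for x in items:
        result.append(2 * x)
    return result

# 2. Move the per-item action into a function, which the loop calls.
def double(x):
    return 2 * x

def double_all_v2(items):
    result = []
    for x in items:
        result.append(double(x))
    return result

# 3. Map the function over the items in the collection.
def double_all_v3(items):
    return list(map(double, items))
```

Each step preserves behavior while shifting attention from how to traverse the collection to what to do with each item, which is the habit shift the functional patterns depend on.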

Going in this direction will help me achieve another goal I have in mind for next semester: making class sessions more active. This was triggered by my post-mortem of the last course offering. Some early parts of the course consist of too much lecture. I want to get students writing small bits of code sooner, but with more support for taking small, reliable steps.

Paired with this change to what happens in class are changes to what happens before students come to class. Rather than me talking about so many things in class, I hope to have

  • students reading clear expositions of the material, in small units that are followed immediately by
  • students doing more experimentation on their own in DrRacket, learning from the experiments and learning to find information about language features as they need them.

This change will require me to package my notes differently and also to create triggers and scaffolding for the students' experimentation before coming to class. I'm thinking of this as something like a flipped classroom, but with "watching videos" replaced by "playing with code".

Finally, this blog post triggered a latent desire to make the course more effective for all students, wherever they are on the learning curve. Many students come to the course at roughly the same level of experience and comfort, but a few come in struggling from their previous courses, and a few come in ready to take on bigger challenges. Even those broad categories are only approximate equivalence classes; each student is at a particular point in the development we hope for them. I'd like to create experiences that can help all of these students learn something valuable for them.

I've only begun to think about the ideas in that post. Right now, I'm contemplating two ideas from the section on getting to know my students better: gathering baseline data early on that I can use to anchor the course, and viewing grading as planning. Anything that can turn the drudgery of grading into a productive part of the course for me is likely to improve my experience in the course, and that is likely to improve my students' experience, too.

I have more questions than answers at this point. That's part of the fun of re-designing a course. I expect that things will take better shape over the next six weeks or so. If you have any suggestions, email me or tweet to me at @wallingf.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

October 03, 2017 12:23 PM

Why Do CS Enrollments Surges End?

The opening talk of the CS Education Summit this week considered the challenges facing CS education in a time of surging enrollments and continued concerns about the diversity of the CS student population. In the session that followed, Eric Roberts and Jodi Tims presented data that puts the current enrollment surge into perspective, in advance of a report from the National Academy of Science.

In terms of immediate takeaway, Eric Roberts's comments were gold. Eric opened with Stein's Law: If something is unsustainable, it will stop. Stein was an economist whose eponymous law expresses one of those obvious truths we all seem to forget about in periods of rapid change: If something cannot go on forever, it won't. You don't have to create a program to make it stop. A natural corollary is: If it can't go on for long, you don't need a program to deal with it. It will pass soon.

Why is that relevant to the summit? Even without continued growth, current enrollments in CS majors are unsustainable for many schools. If the past is any guide, we know that many schools will deal with unsustainable growth by limiting the number of students who start or remain in their major.

Roberts has studied the history of CS boom-and-bust cycles over the last thirty years, and he's identified a few common patterns:

  • Limiting enrollments is how departments respond to enrollment growth. They must: the big schools can't hire faculty fast enough, and most small schools can't hire new faculty at all.

  • The number of students graduating with CS degrees drops because we limit enrollments. Students do not stop enrolling because the number of job opportunities goes down, or for any other external reason.

    After the dot-com bust, there was a lot of talk about offshoring and automation, but the effects of that were short-term and rather small. Roberts's data shows that enrollment crashes do not follow crashes in job openings; they follow enrollment caps. Enrollments remain strong wherever they are not strictly limited.

  • When we limit enrollments, the effect is bigger on women and members of underserved communities. These students are more likely to suffer from impostor syndrome, stereotype bias, and other fears, and the increased competitiveness among students for fewer openings combines with those fears to discourage them from continuing.

So the challenge of booming enrollments exacerbates the challenge to increase diversity. The boom might decrease diversity, but when it ends -- and it will, if we limit enrollments -- our diversity rarely recovers. That's the story of the last three booms.

In order to grow capacity, the most immediate solution is to hire more professors. I hope to write more about that soon, but for now I'll mention only that the problem of hiring enough faculty to teach all of our students has at least two facets. The first is that many schools simply don't have the money to hire more faculty right now. The second is that there aren't enough CS PhDs to go around. Roberts reported that, of last year's PhD grads, 83% took positions at R1 schools. That leaves 17% for the rest of us. "Non-R1 schools can expect to hire a CS PhD every 27 years." Everyone laughed, but I could see anxiety on more than a few faces.

The value of knowing this history is that, when we go to our deans and provosts, we can do more than argue for more resources. We can show the effect of not providing the resources needed to teach all the students coming our way. We won't just be putting the brakes on local growth; we may be helping to create the next enrollment crash. At a school like mine, if we teach the people of our state that we can't handle their CS students, then the people of our state will send their students elsewhere.

The problem for any one university, of course, is that it can act only based on its own resources and under local constraints. My dean and provost might care a lot about the global issues of demand for CS grads and need for greater diversity among CS students. But their job is to address local issues with their own (small) pool of money.

I'll have to re-read the papers Roberts has written about this topic. His remarks certainly gave us plenty to think about, and he was as engaging as ever.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 02, 2017 12:16 PM

The Challenge Facing CS Education

Today and tomorrow, I am at a CS Education Summit in Pittsburgh. I've only been to Pittsburgh once before, for ICFP 2002 (the International Conference on Functional Programming) and am glad to be back. It's a neat city.

The welcome address for the summit was given by Dr. Farnam Jahanian, the interim president at Carnegie Mellon University. Jahanian is a computer scientist, with a background in distributed computing and network security. His resume includes a stint as chair of the CS department at the University of Michigan and a stint at the NSF.

Welcome addresses for conferences and workshops vary in quality. Jahanian gave quite a good talk, putting the work of the summit into historical and cultural context. The current boom in CS enrollments is happening at a time when computing, broadly defined, is having an effect in seemingly all disciplines and all sectors of the economy. What does that mean for how we respond to the growth? Will we see that the current boom presages a change to the historical cycle of enrollments in coming years?

Jahanian made three statements in particular that for me capture the challenge facing CS departments everywhere and serve as a backdrop for the summit:

  • "We have to figure out how to teach all of these students."

    Unlike many past enrollment booms, "all of these students" this time comprises two very different subsets: CS majors and non-majors. We have plenty of experience teaching CS majors, but how do you structure your curriculum and classes when you have three times as many majors? When numbers go up far enough fast enough, many schools have a qualitatively different problem.

    Most departments have far less experience teaching computer science (not "literacy") to non-majors. How do you teach all of these students, with different backgrounds and expectations and needs? What do you teach them?

  • "This is an enormous responsibility."

    Today's graduates will have careers for 45 years or more. That's a long time, especially in a world that is changing ever more rapidly, in large part due to our own discipline. How different are the long-term needs of CS majors and non-majors? Both groups will be working and living for a long time after they graduate. If computing remains a central feature of the world in the future, how we respond to enrollment growth now will have an outsized effect on every graduate. An enormous responsibility, indeed.

  • "We in CS have to think about impending cultural changes..."

    ... which means that we computer science folks will need to have education, knowledge, and interests much broader than just CS. People talk all the time about the value of the humanities in undergraduate education. This is a great example of why. One bit of good news: as near as I can tell, most of the CS faculty in this room, at this summit, do have interests and education bigger than just computer science (*). But we have to find ways to work these issues into our classrooms, with both majors and non-majors.

Thus the idea of a CS education summit. I'm glad to be here.

(*) In my experience, it is much more likely to find a person with a CS or math PhD and significant educational background in the humanities than to find a person with a humanities PhD and significant educational background in CS or math (or any other science, for that matter). One of my hopes for the current trend of increasing interest in CS among non-CS majors is that we can close this gap. All of the departments on our campuses, and thus all of our university graduates, will be better for it.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

September 26, 2017 3:58 PM

Learn Exceptions Later

Yesterday, I mentioned rewriting the rules for computing FIRST and FOLLOW sets using only "plain English". As I was refactoring my descriptions, I realized that one of the reasons students have difficulty with many textbook treatments of the algorithms is that the books give complete and correct definitions of the sets upfront. The presence of X := ε rules complicates the construction of both sets, but they are unnecessary to understanding the commonsense ideas that motivate the sets. Trying to deal with ε too soon can interfere with the students learning what they need to learn in order to eventually understand ε!

When I left the ε rules out of my descriptions, I ended up with what I thought was an approachable set of rules:

  • The FIRST set of a terminal contains only the terminal itself.

  • To compute FIRST for a non-terminal X, find all of the grammar rules that have X on the lefthand side. Add to FIRST(X) all of the items in the FIRST set of the first symbol of each righthand side.

  • The FOLLOW set of the start symbol contains the end-of-stream marker.

  • To compute FOLLOW for a non-terminal X, find all of the grammar rules that have X on the righthand side. If X is followed by a symbol in the rule, add to FOLLOW(X) all of the items in the FIRST set of that symbol. If X is the last symbol in the rule, add to FOLLOW(X) all of the items in the FOLLOW set of the symbol on the rule's lefthand side.

These rules are incomplete, but they have offsetting benefits. Each of these cases is easy to grok with a simple example or two. They also account for a big chunk of the work students need to do in constructing the sets for a typical grammar. As a result, they can get some practice building sets before diving into the gnarlier details of ε, which affect both of the main rules above in a couple of ways.
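The simplified FIRST rules translate almost directly into code. Here is a sketch in Python, using a toy grammar of my own for illustration; it iterates to a fixed point because a nonterminal's FIRST set can depend on another nonterminal's:

```python
# Rules are (lefthand side, righthand side) pairs. Nonterminals are the
# symbols that appear on a lefthand side; everything else is a terminal.
grammar = [
    ("E", ["T", "plus", "T"]),
    ("T", ["int"]),
    ("T", ["lparen", "E", "rparen"]),
]

def first_sets(rules):
    nonterminals = {lhs for lhs, _ in rules}
    first = {nt: set() for nt in nonterminals}
    changed = True
    while changed:                      # iterate until the sets stabilize
        changed = False
        for lhs, rhs in rules:
            head = rhs[0]
            # the FIRST set of a terminal contains only the terminal itself
            additions = first[head] if head in nonterminals else {head}
            if not additions <= first[lhs]:
                first[lhs] |= additions
                changed = True
    return first
```

Running it on the toy grammar yields FIRST(T) = FIRST(E) = {int, lparen}, just as hand-simulation of the rules predicts.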

This seems like a two-fold application of the Concrete, Then Abstract pattern. The first is the standard form: we get to see and work with accessible concrete examples before formalizing the rules in mathematical notation. The second involves the nature of the problem itself. The rules above are the concrete manifestation of FIRST and FOLLOW sets; students can master them before considering the more abstract ε cases. The abstract cases are the ones that benefit most from using formal notation.

I think this is an example of another pattern that works well when teaching. We might call it "Learn Exceptions Later", "Handle Exceptions Later", "Save Exceptions For Later", or even "Treat Exceptions as Exceptions". (Naming things is hard.) It is often possible to learn a substantial portion of an idea without considering exceptions at all, and doing so prepares students for learning the exceptions anyway.

I guess I now have at least one idea for my next PLoP paper.

Ironically, writing this post brings to mind a programming pattern that puts exceptions up top, which I learned during the summer Smalltalk taught me OOP. Instead of writing code like this:

    if normal_case(x) then
       // a bunch
       // of lines
       // of code
       // processing x
    else
       throw_an_error
you can write:

    if abnormal_case(x) then
       throw_an_error

    // a bunch
    // of lines
    // of code
    // processing x

This idiom brings the exceptional case to the top of the function and dispenses with it immediately. It also makes the normal case the main focus of the function, unindented and clear to the eye. It may look like this idiom violates the "Save Exceptions For Later" pattern, but code of this sort can be a natural outgrowth of following the pattern. First, we implement the function to do its normal business and make sure that it handles all of the usual cases. Only then do we concern ourselves with the exceptional case, and we build it into the function with minimal disruption to the code.
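A concrete, runnable rendering of the guard-clause idiom in Python (the function is a made-up example, not code from any course):

```python
def reciprocal(x):
    # exceptional case up top, dispensed with immediately
    if x == 0:
        raise ValueError("x must be nonzero")
    # the normal case is the unindented focus of the function
    return 1 / x
```

Compare this with the inverted form, where the normal case hides inside the `then` branch: the behavior is identical, but the reader no longer has to carry the error handling in mind while reading the main logic.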

This pattern has served me well over the years, far beyond Smalltalk.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

September 25, 2017 3:01 PM

A Few Thoughts on My Compilers Course

I've been meaning to blog about my compilers course for more than a month, but life -- including my compilers course -- has kept me busy. Here are three quick notes to prime the pump.

  • I recently came across Lindsey Kuper's My First Fifteen Compilers and thought again about this unusual approach to a compiler course: one compiler a week, growing last week's compiler with a new feature or capability, until you have a complete system. Long, long-time readers of this blog may remember me writing about this idea once over a decade ago.

    The approach still intrigues me. Kuper says that it was "hugely motivating" to have a working compiler at the end of each week. In the end I always shy away from the approach because (1) I'm not yet willing to adopt for my course the Scheme-enabled micro-transformation model for building a compiler and (2) I haven't figured out how to make it work for a more traditional compiler.

    I'm sure I'll remain intrigued and consider it again in the future. Your suggestions are welcome!

  • Last week, I mentioned on Twitter that I was trying to explain how to compute FIRST and FOLLOW sets using only "plain English". It was hard. Writing a textual description of the process made me appreciate the value of using and understanding mathematical notation. It is so expressive and so concise. The problem for students is that it is also quite imposing until they get it. Before then, the notation can be a roadblock on the way to understanding something at an intuitive level.

    My usual approach in class to FIRST and FOLLOW sets, as for most topics, is to start with an example, reason about it in commonsense terms, and only then to formalize. The commonsense reasoning often helps students understand the formal expression, thus removing some of its bite. It's a variant of the "Concrete, Then Abstract" pattern.

    Mathematical definitions such as these can motivate some students to develop their formal reasoning skills. Many people prefer to let students develop their "mathematical maturity" in math courses, but this is really just an avoidance mechanism. "Let the Math department fail them" may solve a practical problem, but sometimes we CS profs have to bite the bullet and help our students get better when they need it.

  • I have been trying to write more code for the course this semester, both for my enjoyment (and sanity) and for use in class. Earlier, I wrote a couple of toy programs such as a Fizzbuzz compiler. This weekend I took a deeper dive and began to implement my students' compiler project in full detail. It was a lot of fun to be deep in the mire of a real program again. I have already learned and re-learned a few things about Python, git, and bash, and I'm only a quarter of the way in! Now I just have to make time to do the rest as the semester moves forward.

In her post, Kuper said that her first compiler course was "a lot of hard work" but "the most fun I'd ever had writing code". I always tell my students that this course will be just like that for them. They are more likely to believe the first claim than the second. Diving in, I'm remembering those feelings firsthand. I think my students will be glad that I dove in. I'm reliving some of the challenges of doing everything that I ask them to do. This is already generating a new source of empathy for my students, which will probably be good for them come grading time.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

August 26, 2017 12:02 PM

"You Are Here"

a graph relating understanding, writing, and time

This graph illustrates one of the problems that afflicts me as a writer. Too often, I don't have the confidence (or gumption) to start writing until I reach the X. By that time in the learning cycle, downhill momentum is significant. It's easier not to write, either because I figure what I have to say is old news or because my mind has moved on to another topic.

I am thankful that other people share their learning at the top of the curve.

~~~~

Sarah Perry created the above image for one of her many fine essays. I came upon it in David Chapman's Ignorant, Irrelevant, and Inscrutable. The blue X is mine.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

August 14, 2017 1:42 PM

Papert 1: Mathophobia, Affect, Physicality, and Love

I have finally started reading Mindstorms. I hope to write short reflections as I complete every few pages or anytime I come across something I feel compelled to write about in the moment. This is the first entry in the imagined series.

In the introduction, Papert says:

We shall see again and again that the consequences of mathophobia go far beyond obstructing the learning of mathematics and science. They interact with other endemic "cultural toxins", for example, with popular theories of aptitudes, to contaminate people's images of themselves as learners. Difficulty with school math is often the first step of an invasive intellectual process that leads us all to define ourselves as bundles of aptitudes and ineptitudes, as being "mathematical" or "not mathematical", "artistic" or "not artistic", "musical" or "not musical", "profound" or "superficial", "intelligent" or "dumb". Thus deficiency becomes identity, and learning is transformed from the early child's free exploration of the world to a chore beset by insecurities and self-imposed restrictions.

This invasive intellectual process has often deeply affected potential computer science students long before they reach the university. I would love to see Papert's dream made real early enough that young people can imagine being a computer scientist earlier. It's hard to throw off the shackles after they take hold.

~~~~

The thing that sticks out as I read the first few pages of Mindstorms is its focus on the power of affect in learning. I don't recall conscious attention to my affect having much of a role in my education; it seems I was in a continual state of "cool, I get to learn something". I didn't realize at the time just what good fortune it was to have that as a default orientation.

I'm also struck by Papert's focus on the role of physicality in learning, how we often learn best when the knowledge has a concrete manifestation in our world. I'll have to think about this more... Looking back now, abstraction always seemed natural to me.

Papert's talk of love -- falling in love with the thing we learn about, but also with the thing we use to learn it -- doesn't surprise me. I know these feelings well, even from the earliest experiences I had in kindergarten.

An outside connection that I will revisit: Frank Oppenheimer's Exploratorium, an aspiration I learned about from Alan Kay. What would a computational exploratorium look like?


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

August 11, 2017 9:08 AM

Don't Say "Straightforward" So Often, Obviously

This bullet point from @jessitron's Hyperproductive Development really connected with me:

As the host symbiont who lives and breathes the system: strike the words "just", "easy", "obvious", "simple", and "straightforward" from your vocabulary. These words are contextual, and no other human shares your context.

My first experience coming to grips with my use of these words was not in software development, but in the classroom. "Obvious" has never been a big part of my vocabulary, but I started to notice a few years ago how often I said "just", "easy", and "simple" in class and wrote them in my lecture notes. Since then, I have worked hard to cut back sharply on my uses of these minimizers in both spoken and written interactions with my students. I am not always successful, of course, but I am more aware now and frequently catch myself before speaking, or in the act of writing.

I find that I still use "straightforward" quite often these days. Often, I use it to express contrast explicitly, saying something to the effect of, "This won't necessarily be easy, but at least it's straightforward." By this I mean that some problem or task may require hard work, but at least the steps they need to perform should be clear. I wonder now, though, whether students always take it this way, even when expressed explicitly. Maybe they hear me minimizing the task ahead, not putting the challenge they face into context.

Used habitually, even with good intentions, a word like "straightforward" can become a crutch, a substitute minimizer. It lets me be lazy when I try to summarize a process or to encourage students when things get difficult. I'm going to try this fall to be more sensitive to my use of "straightforward" and see if I can't find a better way in most situations.

As for the blog post that prompted this reflection, Hyperproductive Development summarizes as effectively as anything I've read the truth behind the idea that some programmers are so much more effective than others: "it isn't the developers, so much as the situation". It's a good piece, well worth a quick read.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

July 28, 2017 2:02 PM

The Need for Apprenticeship in Software Engineering Education

In his conversation with Tyler Cowen, Ben Sasse talks a bit about how students learn in our schools of public policy, business, and law:

We haven't figured out in most professional schools how to create apprenticeship models where you cycle through different aspects of what doing this kind of work will actually look like. There are ways that there are tighter feedback loops at a med school than there are going to be at a policy school. There are things that I don't think we've thought nearly enough about ways that professional school models should diverge from traditional, theoretical, academic disciplines or humanities, for example.

We see a similar continuum in what works best, and what is needed, for learning computer science and learning software engineering. Computer science education can benefit from the tighter feedback loops and such that apprenticeship provides, but it also has a substantial theoretical component that is suitable for classroom instruction. Learning to be a software engineer requires a shift to the other end of the continuum: we can learn important things in the classroom, but much of the important learning happens in the trenches, making things and getting feedback.

A few universities have made big moves in how they structure software engineering instruction, but most have taken only halting steps. They are often held back by an institutional loyalty to the traditional academic model, or out of sheer curricular habit.

The one place you see apprenticeship models in CS is, of course, graduate school. Students who enter research work in the lab under the mentorship of faculty advisors and more senior grad students. It took me a year or so in graduate school to figure out that I needed to begin to place more focus on my research ideas than on my classes. (I hadn't committed to a lab or an advisor yet.)

In lieu of a changed academic model, internships of the sort I mentioned recently can be really helpful for undergrad CS students looking to go into software development. Internships create a weird tension for faculty... Most students come back from the workplace with a new appreciation for the academic knowledge they learn in the classroom, which is good, but they also come back wondering why more of their schoolwork can't have the character of learning in the trenches. They have learned to want more!

Project-based courses are a way for us to bring the value of apprenticeship to the undergraduate classroom. I am looking forward to building compilers with ten hardy students this fall.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

July 27, 2017 1:36 PM

How can we help students overcome "naturalness bias"?

In Leadership as a Performing Art, Ed Batista discusses, among other things, a "naturalness bias" that humans have when evaluating one another. Naturalness is "a preference for abilities and talents that we perceive as innate over those that appear to derive from effort and experience". Even when people express a preference for hard work and experience, they tend to judge more positively people who seem to be operating on natural skill and talent. As Batista notes, this bias affects not only how we evaluate others but also how we evaluate ourselves.

As I read this article, I could not help but think about how students who are new to programming and to computer science often react to their own struggles in an introductory CS course. These thoughts reached a crescendo when I came to these words:

One commonly-held perspective is that our authentic self is something that exists fully formed within us, and we discover its nature through experiences that feel more (or less) natural to us. We equate authenticity with comfort, and so if something makes us feel uncomfortable or self-conscious, then it is de facto inauthentic, which means we need not persist at it (or are relieved of our responsibility to try). But an alternative view is that our authentic self is something that we create over time, and we play an active role in its development through experiences that may feel uncomfortable or unnatural, particularly at first. As INSEAD professor of organizational behavior Herminia Ibarra wrote in The Authenticity Paradox in 2015,

Because going against our natural inclinations can make us feel like impostors, we tend to latch on to authenticity as an excuse for sticking with what's comfortable... By viewing ourselves as works-in-progress and evolving our professional identities through trial and error, we can develop a personal style that feels right to us and suits our organizations' changing needs. That takes courage, because learning, by definition, starts with unnatural and often superficial behaviors that can make us feel calculating instead of genuine and spontaneous. But the only way to avoid being pigeonholed and ultimately become better leaders is to do the things that a rigidly authentic sense of self would keep us from doing.

So many CS students and even computing professionals report suffering from impostor syndrome, sometimes precisely because they compare their internal struggles to learn with what appears to be the natural ability of their colleagues. But, as Ibarra says, learning, by definition, starts with the unnatural. To be uncomfortable is, in one sense, to be in a position to learn.

How might we teachers of computer science help our students overcome the naturalness bias they unwittingly apply when evaluating their own work and progress? We need strategies to help students see that CS is something we do, not something we are. You can feel uncomfortable and still be authentic.

This distinction is at the foundation of Batista's advice to leaders and, I think, at the foundation of good advice to students. When students can distinguish between their behavior and their identity, they are able to manage more effectively the expectations they have of their own work.

I hope to put what I learned in this article to good use both for my students and myself. It might help me be more honest -- and generous -- to myself when evaluating my performance as a teacher and an administrator, and more deliberate in how I try to get better.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading, Teaching and Learning

July 21, 2017 3:47 PM

Some Thoughts from a Corporate Visit: Agility and Curriculum

Last Thursday, I spent a day visiting a major IT employer in our state. Their summer interns, at least three of whom are students in my department, were presenting projects they had developed during a three-day codejam. The company invited educators from local universities to come in for the presentations, preceded by a tour of the corporate campus, a meeting with devs who had gone through the internship program in recent years, and a conversation about how the schools and company might collaborate more effectively. Here are a few of my impressions from the visit.

I saw and heard the word "agile" everywhere. The biggest effects of agility company-wide seemed to be in setting priorities and in providing transparency. The vocabulary consisted mostly of terms from Scrum and kanban. I started to wonder how much the programming practices of XP or other agile methodologies had affected software development practices there. Eventually I heard about the importance of pair programming and unit testing and was happy to know that the developers hadn't been forgotten in the move to agile methods.

Several ideas came to mind during the visit about things we might incorporate into our programs or emphasize more. We do a pretty good job right now, I think. We currently introduce students to agile development extensively in our software engineering course, and we have a dedicated course on software verification and validation. I have even taught a dedicated course on agile software development several times before, most recently in 2014 and 2010. Things we might do better include:

  • having students work on virtual teams. Our students rarely, if ever, work on virtual teams in class, yet this is standard operating procedure even within individual companies these days.

  • having students connect their application programs to front and back ends. Our students often solve interesting problems with programs, but they don't always have to connect their solutions to front ends that engage real users or to back ends that ultimately provide source data. There is a lot to learn in having to address these details.

  • encouraging students to be more comfortable with failure on projects. Schools tend to produce graduates who are risk-averse, because failure on a project in the context of a semester-long course might mean failure in the course. But the simple fact is that some projects fail. Graduates need to be able to learn from failure and create successes out of it. They also need to be willing to take risks; projects with risk are also where big wins come from, not to mention new knowledge.

Over the course of the day, I heard about many of the attributes this company likes to see in candidates for internships and full-time positions, among them:

  • comfort speaking in public
  • ability to handle, accept, and learn from failure
  • curiosity
  • initiative
  • a willingness to work in a wide variety of roles: development, testing, management, etc.

Curiosity was easily the most-mentioned desirable attribute. On the matter of working in a wide variety of roles, even the people with "developer" in their job title reported spending only 30% of their time writing code. One sharp programmer said, "If you're spending 50% of your time writing code, you're doing something wrong."

The codejam presentations themselves were quite impressive. Teams of three to six college students can do some amazing things in three days when they are engaged and when they have the right tools available to them. One theme of the codejam was "platform as a service", and students used a slew of platforms, tools, and libraries to build their apps. Ones that stood out because they were new to me included IBM BlueMix (à la AWS and Azure), Twilio ("a cloud platform for building SMS, voice and messaging apps"), and Flask ("a micro web framework written in Python"). I also saw a lot of node.js and lots and lots of NoSQL. There was perhaps a bias toward NoSQL in the tools that the interns wanted to learn, but I wonder if students are losing appreciation for relational DBs and their value.

Each team gave itself a name. This was probably my favorite:

   int erns;

I am a programmer.

All told, the interns used too many different tools for me to take note of them all. That was an important reminder from the day: there are so many technologies to learn and know how to use effectively. Our courses can't possibly include them all. We need to help students learn how to approach a new library or framework and become effective users as quickly as possible. And we need to have them using source control all the time, as an ingrained habit.

One last note, if only because it made me smile. Our conversation with some of the company's developers was really interesting. At the end of the session, one of the devs handed out his business card, in case we ever wanted to ask him questions after leaving. I looked down at the card and saw...

Alan Kay's business card, redacted

... Alan Kay. Who knew that Alan was moonlighting as an application developer for a major financial services company in the Midwest? I'm not sure whether sharing a name with a titan of computer science is a blessing or a curse, but for the first time in a long while I enjoyed tucking a business card into my pocket.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

July 11, 2017 3:17 PM

Blogging as "Loud Thinking"

This morning, I tweeted a quote from Sherry Turkle's Remembering Seymour Papert that struck a chord with a few people: "Seymour Papert saw that the computer would make it easier for thinking itself to become an object of thought." Here is another passage that struck a chord with me:

At the time of the juggling lesson, Seymour was deep in his experiments into what he called 'loud thinking'. It was what he was asking my grandfather to do. What are you trying? What are you feeling? What does it remind you of? If you want to think about thinking and the real process of learning, try to catch yourself in the act of learning. Say what comes to mind. And don't censor yourself. If this sounds like free association in psychoanalysis, it is. (When I met Seymour, he was in analysis with Greta Bibring.) And if it sounds like it could get you into personal, uncharted, maybe scary terrain, it could. But anxiety and ambivalence are part of learning as well. If not voiced, they block learning.

It occurred to me that I blog as a form of "loud thinking". I don't write many formal essays or finished pieces for my blog these days. Mostly I share thoughts as they happen and think out loud about them in writing. Usually, it's just me trying to make sense of ideas that cross my path and see where they fit in with the other things I'm learning. I find that helpful, and readers sometimes help me by sharing their own thoughts and ideas.

When I first read the phrase "loud thinking", it felt awkward, but it's already growing on me. Maybe I'll try to get my compiler students to do some loud thinking this fall.

By the way, Turkle's entire piece is touching and insightful. I really liked the way she evoked Papert's belief that we "love the objects we think with" and "think with the objects we love". (And not just because I'm an old Smalltalk programmer!) I'll let you read the rest of the piece yourself to appreciate both the notion and Turkle's storytelling.

Now, for a closing confession: I have never read Mindstorms. I've read so much about Papert and his ideas over the years, but the book has never made it to the top of my stack. I pledge to correct this egregious personal shortcoming and read it as soon as I finish the novel on my nightstand. Maybe I'll think out loud about it here soon.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

July 09, 2017 9:04 AM

What It's Like To Be A Scholar

I just finished reading Tyler Cowen's recent interview with historian Jill Lepore. When Cowen asks Lepore about E.B. White's classic Stuart Little, Lepore launches into a story that illustrates quite nicely what it's like to be a scholar.

First, she notes that she was writing a review of a history of children's literature and kept coming across throwaway lines of the sort "Stuart Little, published in 1945, was of course banned." This triggered the scholar's impulse:

And there's no footnote, no explanation, no nothing.

At the time, one of my kids was six, and he was reading Stuart Little, we were reading at night together, and I was like, "Wait, the story about the mouse who drives the little car and rides a sailboat across the pond in Central Park, that was a banned book? What do I not know about 1945 or this book? What am I missing?"

These last two sentences embody the scholar's orientation. "What don't I know about these two things I think I know well?"

And I was shocked. I really was shocked. And I was staggered that these histories of children's literature couldn't even identify the story. I got really interested in that question, and I did what I do when I get a little too curious about something, is I become obsessive about finding out everything that could possibly be found out.

Next comes obsession. Lepore then tells a short version of the story that became her investigative article for The New Yorker, which she wrote because, as she put it, sometimes "I just fall into a hole in the ground, and I can't get out until I have gotten to the very, very bottom of it."

Finally, three transcript pages later, Lepore says:

It was one of the most fun research benders I've ever been on.

It ends in fun.

You may be a scholar if you have this pattern. To me, one of the biggest downsides of becoming department head is having less time to fall down some unexpected hole and follow its questions until I reach the bottom. I miss that freedom.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Personal, Teaching and Learning

June 29, 2017 4:03 PM

A New Demo Compiler for My Course

a simple FizzBuzz program

Over the last couple of weeks, I have spent a few minutes most afternoons writing a little code. It's been the best part of my work day. The project was a little compiler.

One of the first things we do in my compiler course is to study a small compiler for a couple of days. This is a nice way to introduce the stages of a compiler and to raise some of the questions that we'll be answering over the course of the semester. It also gives students a chance to see the insides of a working compiler before they write their own. I hope that this demystifies the process a little: "Hey, a compiler really is just a program. Maybe I can write one myself."

For the last decade or so, I have used a compiler called acdc for this demo, based on Chapter 2 of Crafting A Compiler by Fischer, Cytron, and LeBlanc. ac is a small arithmetic language with two types of numbers, sequences of assignment statements, and print statements. dc is a stack-based desk calculator that comes as part of many Unix installations. I whipped up an acdc compiler in Java about a decade ago and have used it ever since. Both languages have enough features to be useful as a demo but not enough to overwhelm. My hacked-up compiler is also open to improvements as we learn techniques throughout the course, giving us a chance to use them in the small before students apply them to their own projects.
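For readers who have never seen ac, here is the flavor of the textbook's running example (quoted from memory, so the details may differ slightly):

```
f b
i a
a = 5
b = a + 3.2
p b
```

A compiler can translate this into a dc program along the lines of `5 sa la 3.2 + sb lb p`: push 5 and store it in register a, load a and add 3.2, store the sum in b, then load b and print it.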

I've been growing dissatisfied with this demo for a while now. My Java program feels heavy, with too many small pieces to be simple enough for a quick read. It requires two full class sessions to really understand it well, and I've been hoping to shorten the intro to my course. ac is good, but it doesn't have any flow control other than sequencing, which means that it does not give us a way to look at assembly language generation with jumps and backpatching. On top of all that, I was bored with acdc; ten years is a long time to spend with one program.

This spring I stumbled on a possible replacement in The Fastest FizzBuzz in the West. It defines a simple source language for writing FizzBuzz programs declaratively. For example:

   1...150
   fizz=3
   buzz=5
   woof=7

produces the output of a much larger program in other languages. Imagine being able to pull out this language during your next interview for a software dev position!

This language is really simple, which means that a compiler for it can be written in relatively few lines of code. However, it also requires generating code with a loop and if-statements, which requires thinking about branching patterns in assembly language.

The "Fastest FizzBuzz" article uses a Python parser generator to create its compiler. For my course, I want something that my students can read with only the knowledge they bring into the course, and I want the program to be transparent enough that they can see directly how each stage works and how it interacts with the other parts of the compiler.

I was also itching to write a program, so I did.

I wrote my compiler in Python. It performs a simple scan of the source program, taking as much advantage of the small set of simple tokens as possible. The parser works by recursive descent, which also takes advantage of the language's simple structure. The type checker makes sure the numbers all make sense and that the words are unique. Finally, to make things even simpler, the code generator produces an executable Python 3.4 program.
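My classroom compiler is longer than this, but the overall shape is easy to sketch. The toy version below is my own sketch, not the course code: it compresses scanning and parsing into regular expressions, performs the two simple "type checks" described above, and emits an executable Python program.

```python
import re

def compile_fizzbuzz(src: str) -> str:
    """Translate a FizzBuzz-language program into Python source."""
    lines = [ln.strip() for ln in src.strip().splitlines() if ln.strip()]
    # "scan" and "parse" the range line, e.g. "1...150"
    lo, hi = map(int, re.fullmatch(r"(\d+)\.\.\.(\d+)", lines[0]).groups())
    # parse the word rules, e.g. "fizz=3", in source order
    rules = []
    for ln in lines[1:]:
        word, div = re.fullmatch(r"(\w+)=(\d+)", ln).groups()
        rules.append((word, int(div)))
    # "type checking": the numbers make sense and the words are unique
    assert all(d > 0 for _, d in rules), "divisor must be positive"
    assert len({w for w, _ in rules}) == len(rules), "words must be unique"
    # code generation: emit an executable Python program
    out = [f"for n in range({lo}, {hi} + 1):", "    s = ''"]
    for word, div in rules:
        out.append(f"    if n % {div} == 0: s += {word!r}")
    out.append("    print(s or n)")
    return "\n".join(out)

# compile and run a three-line source program
exec(compile_fizzbuzz("1...15\nfizz=3\nbuzz=5"))
```

This leaves out real scanning and error reporting, of course; the point is only that every stage of the pipeline fits in a few lines for a language this small.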

I'm quite hopeful about this compiler's use as a class demo. It is simple enough to read in one sitting, even by students who enter the course with weaker programming skills. Even so, the language can also be used to demonstrate the more sophisticated techniques we learn throughout the course. Consider:

  • Adding comments to the source language overwhelms the ad hoc approach I use in the scanner, motivating the use of a state machine.
  • While the parser is easily handled by recursive descent, the language is quite amenable to a simple table-driven approach, too. The table-driven parser will be simple enough that students can get the hang of the technique with few unnecessary details.
  • The type checker demonstrates walking an abstract syntax tree without worrying about too many type rules. We can focus our attention on type systems when dealing with the more interesting source language of their project.
  • The code generator has to deal with flow of control, which enables us to learn assembly language generation on a smaller scale without fully implementing code to handle function calls.

So this compiler can be a demo in the first week of the course and also serve as a running example throughout.

We'll see how well this plays in class in a couple of months. In any case, I had great fun ending my days the last two weeks by firing up emacs or IDLE and writing code. As a bonus, I used this exercise to improve my git skills, taking them beyond the small set of commands I have used on my academic projects in the past. (git rebase -i is almost my friend now.) I also wrote more pyunit tests than I have written in a long, long time, which reminded me of some of the challenges students face when trying to test their code. That should serve me well in the fall, too.

I do like writing code.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

June 12, 2017 2:15 PM

Learn from the Bees

In The Sweet Bees [paywalled], Sue Hubbell writes:

Beekeepers are an opinionated lot, each sure that his methods, and his methods alone, are the proper ones. When I first began keeping bees, the diversity of passionately held opinion bewildered me, but now that I have hives in locations scattered over a thousand-square-mile area I think I understand it.... Frosts come earlier in some places than in others. Spring comes later. Rainfall is not the same. The soils, and the flowering plants they support, are unlike. Through the years, I have learned that as a result of all these variations I must keep the bees variously. Most people who keep bees have only a few hives, and have them all in one place. They find it difficult to understand why practices that have proved successful for them do not work for others. But I have learned that I must treat the bees in one yard quite differently from the way I do those even thirty miles away. The thing to do, I have discovered, is to learn from the bees themselves.

Even though I've taught at only two universities, I've learned this lesson over the years in many ways that don't require physical distance. Teaching novices in an intro course is different from teaching seniors. Teaching a programming course is different from teaching discrete structures or theory of computation. Teaching AI is different from teaching operating systems. I have learned that I must teach differently in different kinds of courses.

In an instructional setting, even more important are the bees themselves. I've been teaching Programming Languages every spring for the last several years, and each group of students has been a little different. The course goes better when I have -- and take -- the time to make adjustments according to what I learn over the course of the semester about the particular set of students I have. This spring, I did not recognize the need to adapt quickly enough, and I feel like I let some of the students down.

You sometimes hear faculty talk about students "back in the old days". One thing is certain: the students we had then probably were different from the students we have now. But they were also different from the students that came before them. Each group is new, made up of individuals with their own backgrounds and their own goals.

It's nice when students are similar enough to what we expect that we can take advantage of what worked well last time. We just can't count on that happening all that often. Our job is to teach the students in class right now.

(I first encountered Hubbell's article in To Teach, by William Ayers. I gave a short review of it in yesterday's post.)


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 11, 2017 9:40 AM

Two True Sentences about Teaching Math

Math phobia is a common affliction of [K-12] teachers, and something that must be resisted if strong and able students are our goal.

Our own confusion about math can be an important aid in our teaching, if we take it seriously.

That second sentence applies to more than math, and more than K-12.

These come from To Teach, by William Ayers. This book is about teaching in K-12 schools, especially younger grades, with no particular focus on math or any other subject. These sentences come from a chapter on "liberating the curriculum", in which Ayers talks about specific issues in teaching reading, math, science, and social studies. Otherwise, the book is about ways of thinking about learning that respect the individual and break down artificial boundaries between areas of knowledge.

Teaching at a university is different, of course, because we are working with older, more mature students. They are more capable of directing their own learning and are, at least in my discipline, usually taking courses as part of a major they themselves have chosen. You would think that engagement and motivation would take on much different forms in the college setting.

However, I think that most of what Ayers says applies quite well to teaching college. This is especially true when students come to us hamstrung by a K-12 education such that they cannot, or at least do not as a matter of habit, take control of their learning. But I think his advice is true of good teaching anywhere, at least in spirit:

  • Get to know each student.
  • Create an environment that encourages and supports learning.
  • Build bridges to ideas across the discipline and to ideas in other disciplines.
  • Engage constantly with the question of what experiences will most help students on the path to wherever they are going.

One of my favorite lines from the book is "You can learn everything from anything." Start with any topic in the CS curriculum, or any project that someone wants to build, and you will eventually touch every part of the field. I think we could do some interesting things with the CS curriculum by focusing courses on projects instead of topic areas. I love how Ayers suggests bringing this mindset even to kindergarten students.

Ayers's book is a thin volume, the sort I like, with good stories and thought-provoking claims about how we approach students and schools. Eugene sez: two thumbs up.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 08, 2017 12:10 PM

We Need a Course on Mundane Data Types

Earlier this month, James Iry tweeted:

Every CS degree covers fancy data structures. But what trips up more programmers? Times. Dates. Floats. Non-English text. Currencies.

I would like to add names to Iry's list. As a person who goes by his middle name and whose full legal name includes a suffix, I've seen my name mangled over the years in ways you might not imagine -- even by a couple of computing-related organizations that shall remain nameless. (Ha!) And my name presents only a scant few of the challenges available when we consider all the different naming conventions around the world.

This topic would make a great course for undergrads. We could call it "Humble Data Types" or "Mundane Data Types". My friends who program for a living know that these are practical data types, the ones that show up in almost all software and which consume an inordinate amount of time. That's why we see pages on the web about "falsehoods programmers believe" about time, names, and addresses -- another one for our list!
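These humble types reward hands-on demonstration. A few lines of Python, for instance, are enough to show why currencies make Iry's list (a standard float-vs-decimal illustration, not tied to any particular codebase):

```python
from decimal import Decimal

# Ten dimes should make a dollar, but binary floating point drifts:
print(sum([0.10] * 10))                     # close to, but not exactly, 1.0
assert sum([0.10] * 10) != 1.00

# Exact decimal arithmetic behaves the way accountants expect:
assert sum([Decimal("0.10")] * 10, Decimal("0")) == Decimal("1.00")
```

Dates, time zones, and names hide similar surprises, few of which fit in two lines.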

It might be hard to sell this course to faculty. They are notoriously reluctant to add new courses to the curriculum. (What would it displace?) Such conservatism is well-founded in a discipline that moves quickly through ideas, but this is a topic that has been vexing programmers for decades.

It would also be hard to sell the course to students, because it looks a little, well, mundane. I do recall a May term class a few years ago in which a couple of programmers spent days fighting with dates and times in Ruby while building a small accounting system. That certainly created an itch, but I'm not sure most students have enough experience with such practical problems before they graduate.

Maybe we could offer the course as continuing education for programmers out in the field. They are the ones who would appreciate it the most.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

May 31, 2017 2:28 PM

Porting Programs, Refactoring, and Language Translation

In his commonplace book A Certain World, W.H. Auden quotes C.S. Lewis on the controversial nature of translation:

[T]ranslation, by its very nature, is a continuous implicit commentary. It can become less tendentious only by becoming less of a translation.

Lewis was merely acknowledging a truth about language: Translators must have a point of view, and often that point of view will be controversial.

I once saw Kurt Vonnegut speak with a foreign language class here many years ago. One of the students asked him what he thought about the quality of the translations done for his book. Vonnegut laughed and said that his books were so peculiar and so steeped in Americana that translating one was akin to writing a new book. He said that his translators deserved all the royalties from the books they created by translating him. They had to write brand new works.

These memories came to mind again recently while I was reading Tyler Cowen's conversation with Jhumpa Lahiri, especially when Lahiri said this:

At one point I was talking about this idea, in antiquity: in Latin, the word for "translator" is "interpreter". I teach translation now, and I talk a lot to my students about translation being the most intimate form of reading and how there was the time when translating and interpreting and analyzing were all one thing.

As my mind usually does, it began to think about computer programs.

Like many programmers, I often find myself porting a program from one language to another. This is clearly translation but, as Vonnegut and Lahiri tell us, it is also a form of interpretation. To port a piece of code, I have to understand its meaning and express that meaning in a new language. That language has its own constructs, idioms, patterns, and set of community practices and expectations. To port a program, one must have a point of view, so the process can be, to use Lewis's word, tendentious.

I often refactor code, too, both my own programs and programs written by others. This, too, is a form of translation, even though it leaves the new code written in the same language as the original. Refactoring is necessarily an opinionated act, and thus tendentious.

Occasionally, I refactor a program in order to learn what it does and how it does it. In those cases, I'm not judging the original code as anything but ill-suited to my current state of knowledge. Even so, when I get done, I usually like my version better, if only a little bit. It expresses what I learned in the process of rewriting the code.

It has always been hard for me to port a program without refactoring it, and now I understand why. Both activities are a kind of translation, and translation is by its nature an activity that requires a point of view.

This fall, I will again teach our "Translation of Programming Languages" course. Writing a compiler requires one to become intimate not only with specific programs, the behavior of which the compiler must preserve, but also the language itself. At the end of the project, my students know the grammar, syntax, and semantics of our source language in a close, almost personal way. The target language, too. I don't mind if my students develop a strong point of view, even a controversial one, along the way. (I'm actually disappointed if the stronger students do not!) That's a part of writing new software, too.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns, Software Development, Teaching and Learning

May 30, 2017 4:28 PM

Learning by Guided Struggle

A few years ago, someone asked MathOverflow, "Why do so many textbooks have so much technical detail and so little enlightenment?" Some CS textbooks suffer from this problem, too, though these days the more likely failing is the opposite: books try to provide so much motivating context that they drown the reader in unhelpful words.

The top answer on MathOverflow (via Brian Marick) points out that the real problem does not usually lie in the lack of motivation or context provided by textbooks. The real goal is to learn how to do math, not to "know" it. That is even more true of software development. A textbook can't really teach you to write programs; most of that is learned by doing. Perhaps the best purpose the text can serve, says the answer, is to show readers what they need to learn. From there, the reader must go off and practice.

How does learning occur from there?

Based on my own experience as both a student and a teacher, I have come to the conclusion that the best way to learn is through "guided struggle". You have to do the work yourself, but you need someone else there to either help you over obstacles you can't get around despite a lot of effort or provide you with some critical knowledge (usually the right perspective but sometimes a clever trick) you are missing. Without the prior effort by the student, the knowledge supplied by a teacher has much less impact.

Some college CS students seem to understand this, or perhaps they simply get lucky because they are motivated to program for some other reason. They go off and try to do something using the knowledge they have. Then they come to class, or to the prof's office hours, to ask questions about what does not go smoothly. Students who skip the practice and hope that lectures will soak into them like a magic balm generally find that they don't know much when they attempt problems after class.

The MathOverflow answer matches up pretty well with my experience. Teachers and written material can have strong positive effect on learning, but they are most effective once the student has engaged with the material by trying to solve problems or write code. The teacher's job then has two parts: First, create conditions in which students can work productively. Second, pay close attention to what students are doing, diagnose misconceptions and mistakes, and help students get back onto a productive path by pointing out missing practices or bits of knowledge.

All of this reminds me of some of my more effective class sessions teaching design patterns to novice programmers. A typical session looks something like this:

  • I give the students a problem to solve.
  • Students work on a solution, using techniques that have worked in the past.
  • They encounter problems, because the context around the problem has shifted in ways that they can't see given only what they know.
  • We discuss the forces at play and tease out the underlying problem.
  • I demonstrate the pattern's solution.
  • Ahhhh.

This is a guided struggle in the small. Students then go off to write a larger program that lets them struggle a bit more, and we discuss whatever gets in their way.

A final note... One of the comments on the answer points out that a good lecture can "do" math (or CS), rather than "tell", and that such lectures can be quite effective. I agree, but in my experience this is one of the hardest skills for a professor to develop. Once I have solved a problem, it is quite difficult for me to make it look to my students as if I am solving it anew in class. The more ingrained the skill, the harder it is for me to lecture about it in a way that is more helpful than telling a story. Such stories are an essential tool in the teacher's toolbox, but their value lies more in motivating students than in teaching them how to write programs. Students still have to go off and do the hard work themselves. The teacher's job is to guide them through their struggles.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

May 04, 2017 4:07 PM

Tweak

In an essay in The Guardian, writer George Saunders reflects on having written his first novel after many years writing shorter fiction. To a first approximation, he found the two experiences to be quite similar. In particular,

What does an artist do, mostly? She tweaks that which she's already done.

I read this on a day when I had just graded thirty-plus programming assignments from my junior/senior-level Programming Languages courses, and this made me think of student programmers. My first thought was snarky, and only partly tongue-in-cheek: Students write and then submit. Who has the time or interest to tweak?

My conscience quickly got the better of me, and I admitted that this was unfair. In a weak moment at the end of a long day, it's easy to be dismissive and not think about students as people who face all sorts of pressures both in and out of the classroom. Never forget Hanlon's Razor, my favorite formulation of which is:

Never attribute to malice or stupidity that which can be explained by moderately rational individuals following incentives in a complex system of interactions.

Even allowing the snark, my first thought was inaccurate. The code students submit is often the end result of laborious tweaking. The thing is, most students tweak only while the code gives incorrect answers. In the worst case, some students tweak and tweak, employing an age-old but highly inefficient software development methodology: Make another change and see if it works.

This realization brought to mind Kent Beck's Rules of Simple Design:

  1. passes the tests
  2. reveals intention
  3. has no duplication
  4. has the fewest elements possible

Most students are under time pressures that make artistry a luxury good; they are happy to find time to make their code work at all. If the code passes the tests, it's probably good enough for now.

But there is more to the student's willingness to stop tinkering so soon than just external pressures. It takes a lot of programming experience and a fair amount of time to come to even appreciate Rules 2 through 4. Why does it matter if code reveals the programmer's intention, in terms of either art or engineering? What's the big deal about a little duplication? The fewest elements? -- making that happen takes time that could be spent on something much more interesting.
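To make Rules 2 and 3 concrete, here is a tiny, contrived Python sketch; the functions and names are hypothetical, not drawn from any student's code. Both versions pass the same tests, so both satisfy Rule 1. Only the second reveals intention and removes duplication:

```python
# Passes its tests (Rule 1), but the name reveals nothing (Rule 2)
# and the even-check is written out twice (Rule 3).
def f(xs, ys):
    t = 0
    for x in xs:
        if x % 2 == 0:
            t += x
    for y in ys:
        if y % 2 == 0:
            t += y
    return t

# Same behavior, rewritten to reveal intention and remove duplication.
def is_even(n):
    return n % 2 == 0

def sum_of_evens(numbers):
    return sum(n for n in numbers if is_even(n))

def total_even_score(first_round, second_round):
    return sum_of_evens(first_round) + sum_of_evens(second_round)

# Both versions agree: 2 + 4 + 6 = 12.
assert f([1, 2, 3], [4, 5, 6]) == total_even_score([1, 2, 3], [4, 5, 6]) == 12
```

Note that the refactored version takes more lines, which is part of the point: moving beyond Rule 1 costs time that students under deadline pressure rarely feel they have.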

I am coming to think of Kent's rules as a sort of Bloom's taxonomy for the development of programming expertise. Students start at Level 1, happy to write code that achieves its stated purpose. As they grow, programmers move through the rules, mastering deeper levels of understanding of design, simplicity, and, yes, artistry. They don't move through the stages in a purely linear fashion, but they do tend to master the rules in roughly the order listed above.

Today is a day of empathy for my novice programmers. As I write this, they are writing the final exam in my course. I hope that in a few weeks, after the blur of a busy semester settles in their minds, they reflect a bit and see that they have made progress as programmers -- and that they can now ask better questions about the programming languages they use than they could at the beginning of the course.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

April 07, 2017 1:33 PM

Two Very Different Kinds of Student

The last sentence of each of these passages reminds me of some of the students over the years.

First this, from Paul Callaghan's The Word Chain Kata:

One common way to split problems is via the "generate and test" pattern, where one component suggests results and the other one discards invalid ones. (In my experience, undergrad programmers are experts in this pattern, although they tend to use it as a [software development] methodology--but that's another story.)

When some students learn to program for the first time, they start out by producing code that looks like something they have seen before, trying it out, and then tinkering with it until it works or until they become exasperated. (I always hope that they act on their exasperation by asking me for help, rather than by giving up.) These students usually understand little bits of the code locally, but they don't really understand the program or function as a whole. Yet, somehow, they find a way to make it work.

It's surprising how far some students can get in a course of study by programming this way. (That's why Callaghan calling the approach a methodology made me smile.) It's unfortunate, too, because eventually the approach hits a wall when problems and domains become more challenging. Or when they run into a course where they program in Racket, in which one misplaced parenthesis can cause an otherwise perfect piece of code to implode. Lisp-like languages do not provide a supportive environment for this kind of "programming by wandering around".

And then there's this, from Andy Hertzfeld's fun little story about the writing of the classic manual Inside Macintosh:

Pretty soon, I figured out that if Caroline had trouble understanding something, it probably meant that the design was flawed. On a number of occasions, I told her to come back tomorrow after she asked a penetrating question, and revised the API to fix the flaw that she had pointed out. I began to imagine her questions when I was coding something new, which made me work harder to get things clearer before I went over them with her.

In this story, Caroline is not a student, but a young new writer assigned to the Mac documentation team. Still, she reminds me of students who are as delightful to work with as generate-and-test programmers can be frustrating. These students pay attention. They ask good questions, ones that often challenge the unstated assumptions underlying what we have explained before. At first, this can seem frustrating to us teachers, because we have to formulate answers for things that should be obvious. But that's the point: they aren't obvious, at least not to everyone, and our thinking that they are obvious inhibits our teaching.

Last semester, I had one team of students in my compilers class that embodied this attitude. They asked questions no one had ever bothered to ask me before. At first, I thought, "How can these guys not understand such basic material?" Like Hertzfeld, though, pretty soon I figured out that their questions were exposing holes or weaknesses in my lectures, examples, and sample code. I began to anticipate their questions as I prepared for class. Their questions helped me see ways to make my course better.

As with so many other dimensions, part of the challenge in teaching CS is the wide variation in the way students study, learn, and approach their courses. It is also a part of the great fun of teaching, especially when I encounter the Carolines and Averys who push me to get better.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

April 02, 2017 12:02 PM

Reading an Interview with John McPhee Again, for the First Time

"This weekend I enjoyed Peter Hessler's interview of McPhee in The Paris Review, John McPhee, The Art of Nonfiction No. 3."

That's a direct quote from this blog. Don't remember it? I don't blame you; neither do I. I do remember blogging about McPhee back when, but as I read the same Paris Review piece again last Sunday and this, I had no recollection of reading it before, no sense of déjà vu at all.

Sometimes having a memory like mine is a blessing: I occasionally get to read something for the first time again. If you read my blog, then you get to read my first impressions for a second time.

I like this story that McPhee told about Bob Bingham, his editor at The New Yorker:

Bingham had been a writer-reporter at The Reporter magazine. So he comes to work at The New Yorker, to be a fact editor. Within the first two years there, he goes out to lunch with his old high-school friend Gore Vidal. And Gore says, What are you doing as an editor, Bobby? What happened to Bob Bingham the writer? And Bingham says, Well, I decided that I would rather be a first-rate editor than a second-rate writer. And Gore Vidal draws himself up and says, And what is wrong with a second-rate writer?

I can just hear the faux indignation in Vidal's voice.

McPhee talked a bit about his struggle over several years to write a series of books on geology, which had grown out of an idea for a one-shot "Talk of the Town" entry. The interviewer asked him if he ever thought about abandoning the topic and moving on to something he might enjoy more. McPhee said:

The funny thing is that you get to a certain point and you can't quit. Because I always worried: if you quit, you'll quit again. The only way out was to go forward, to learn your way and write your way out of it.

I know that feeling. Sometimes, I really do need to quit something and move on, but I always wonder whether quitting this time will make it easier to do next time. Because sometimes, I need to stick it out and, as McPhee says, learn my way out of the difficulty. I have no easy answers for knowing when quitting is the right thing to do.

Toward the end of the interview, the conversation turned to the course McPhee teaches at Princeton, once called "the literature of fact". The university first asked him to teach on short notice, over the Christmas break in 1974, and he accepted immediately. Not everyone thought it was a good idea:

One of my dear friends, an English teacher at Deerfield, told me: Do not do this. He said, Teachers are a dime a dozen -- writers aren't. But my guess is that I've been more productive as a writer since I started teaching than I would have been if I hadn't taught. In the overall crop rotation, it's a complementary job: I'm looking at other people's writing, and the pressure's not on me to do it myself. But then I go back quite fresh.

I know a lot of academics who feel this way. Then again, it's a lot easier to stay fresh in one's creative work if one has McPhee's teaching schedule, rather than a full load of courses:

My schedule is that I teach six months out of thirty-six, and good Lord, that leaves a lot of time for writing, right?

Indeed it does. Indeed it does.

On this reading of the interview, I marked only two passages that I wrote about last time. One came soon after the above response, on how interacting with students is its own reward. The other was a great line about the difference between mastering technique and having something to say: You demonstrated you know how to saddle a horse. Now go find the horse.

That said, I unconsciously channeled this line from McPhee just yesterday:

Writing teaches writing.

We had a recruitment event on campus, and I was meeting with a dozen or so prospective students and their entourages. We were talking about our curriculum, and I said a few words about our senior project courses. Students generally like these courses, even though they find them difficult. The students have never had to write a big program over the course of several months, and it's harder than it looks. The people who hire our graduates like these courses, too, because they know that these courses are places where students really begin to learn to program.

In the course of my remarks, I said something to the effect, "You can learn a lot about programming in classes where you study languages and techniques and theory, but ultimately you learn to write software by writing software. That's what the project courses are all about." There were a couple of experienced programmers in the audience, and they were all nodding their heads. They know McPhee is right.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

March 29, 2017 4:16 PM

Working Through A Problem Manually

This week, I have been enjoying Eli Bendersky's two-article series "Adventures in JIT Compilation".

Next I'll follow his suggestion and read the shorter How to JIT - An Introduction.

Bendersky is a good teacher, at least in the written form, and I am picking up a lot of ideas for my courses in programming languages and compilers. I recommend his articles and his code highly.

In Part 2, Bendersky says something that made me think of my students:

One of my guiding principles through the field of programming is that before diving into the possible solutions for a problem (for example, some library for doing X) it's worth working through the problem manually first (doing X by hand, without libraries). Grinding your teeth over issues for a while is the best way to appreciate what the shrinkwrapped solution/library does for you.

The presence or absence of this attitude is one of the crucial separators among CS students. Some students come into the program with this mindset already in place, and they are often the ones who advance most quickly in the early courses. Other students don't have this mindset, either by interest or by temperament. They prefer to solve problems quickly using canned libraries and simple patterns. These students are often quite productive, but they sometimes soon hit a wall in their learning.

When a student rides along the surface of what they are told in class, never digging deeper, they tend to have a shallow knowledge of how things work in their own programs. Again, this can lead to a high level of productivity, but it also produces brittle knowledge. When something changes, or the material gets more difficult, they begin to struggle. A few of these students eventually develop new habits and move nicely into the group of students who like to grind. The ones who don't make the transition continue to struggle and begin to enjoy their courses less.

There is a rather wide variation among undergrad CS students, both in their goals and in their preferred styles of working and learning. This variation is one of the challenges facing profs who hope to reach the full spectrum of students in their classes. And helping students to develop new attitudes toward learning and doing is always a challenge.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 10, 2017 2:51 PM

Reading Is A Profoundly Creative Act

This comes from Laura Miller, a book reviewer and essayist for Slate, in a Poets & Writers interview:

I also believe that reading is a profoundly creative act, that every act of reading is a collaboration between author and reader. I don't understand why more people aren't interested in this alchemy. It's such an act of grace to give someone else ten or fifteen hours out of your own irreplaceable life, and allow their voice, thoughts, and imaginings into your head.

I think this is true of all reading, whether fiction or nonfiction, literary or technical. I often hear CS profs tell their students to read "actively" by trying code out in an interpreter, asking continually what the author means, and otherwise engaging with the material. Students who do get the chance to experience what Miller describes: they turn over a few hours of their irreplaceable lives to someone who understands a topic well, allow that person's voice, thoughts, and imaginings into their heads, and come out on the other end of the experience with new thoughts -- and maybe even a new mind.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

March 05, 2017 10:53 AM

"Flying by Instruments" and Learning a New Programming Style

Yesterday afternoon I listened to a story told by someone who had recently been a passenger in a small plane flown by a colleague. As they climbed to their cruising altitude, which was clear and beautiful, the plane passed through a couple thousand feet of heavy clouds. The pilot flew confidently, having both training and experience in instrument flight.

The passenger telling the story, however, was disoriented and a little scared by being in the clouds and not knowing where they were heading. The pilot kept him calm by explaining the process of "flying by instruments" and how he had learned it. Sometimes, learning something new can give us confidence. Other times, just listening to a story can distract us enough to get us through a period of fear.

This story reminded me of a session early in my programming languages course, when students are learning Racket and functional programming style. Racket is quite different from any other language they have learned. "Don't dip your toes in the water," I tell them. "Get wet."

For students who prefer their learning analogies not to involve potential drowning -- that is, after all, the sensation many of them report feeling as they learn to cope with all of Racket's parentheses for the first time -- I relate an Alan Kay story about learning to fly an airplane after already knowing how to drive a car. Imagine what the world would be like if everyone refused to learn how to fly a plane because driving was so much more comfortable and didn't force them to bend their minds a bit? Sure, cars are great and serve us well, but planes completely change the world by bringing us all closer together.

I have lost track of where I had heard or read Kay telling that story, so when I wrote up the class notes, I went looking for a URL to cite. I never found one, but while searching I ran across a different use of airplanes in an analogy that I have since worked into my class. Here's the paragraph I use in my class notes, the paragraph I thought of while listening to the flying-by-instruments story yesterday:

The truth is that bicycles and motorcycles operate quite differently than wheeled vehicles that keep three or more wheels on the ground. For one thing, you steer by leaning, not with the handlebars or steering wheel. Learning to fly an airplane gives even stronger examples of having to learn that your instincts are wrong, and that you have to train yourself to "instinctively" know not only that you turn by banking rather than with the rudder, but that you control altitude primarily with the throttle, not the elevators, speed primarily with the elevators not the throttle, and so forth.

Learning to program in a new style, whether object-oriented, functional, or concatenative, usually requires us to overcome deeply-ingrained design instincts and to develop new instincts that feel uncomfortable while we are learning. Developing new instincts takes some getting used to, but it's worth the effort, even if we choose not to program much in the new style after we learn it.

Now I find myself thinking about what it means to "fly by instruments" when we program. Is our unit testing framework one of the instruments we come to rely on? What about a code smell detector such as Reek? If you have thoughts on this idea, or pointers to what others have already written, I would love to hear from you.

Postscript.   I originally found the passage quoted above in a long-ish article about Ruby and Smalltalk, but that link has been dead for a while. I see that the article was reposted in a Ruby Buzz Forum message called On Ceremony and Training Wheels.

Oh, and if you know where I can find the Alan Kay story I went looking for online, I will greatly appreciate any pointers you can offer!


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 28, 2017 3:47 PM

Comments on Comments

Over the last couple of days, a thread on the SIGCSE mailing list has been revisiting the well-tilled ground of comments in code. As I told my students at the beginning of class this semester, some of my colleagues consider me a heretic for not preaching the Gospel of Comments in Code. Every context seems to have its own need for, and expectations of, comments. Wherever students end up working, both while in school and after graduation, their employers will set a standard and expect them to follow. They will figure it out.

In most of my courses, I define a few minimal standards, try to set a good example in the code I give my students, and otherwise don't worry much about comments. Part of my example is that different files I give them are commented differently, depending on the context. A demo of an idea, a library to be reused in the course, and an application are different enough that they call for different kinds of comments. In a course such as compiler development, I require more documentation, both in and out of the code. Students live with that code for a semester and come to value some of their own comments many weeks later.

Anyway, the SIGCSE thread included two ideas that I liked, though they came from competing sides of the argument. One asserted that comments are harmful, because:

They're something the human reader can see but the computer can't, and therefore are a source of misunderstanding.

I love the idea of thinking in terms of misunderstandings between humans and the computer.

The other responded to another poster's suggestion that students be encouraged to write comments with themselves in mind: What would you like to know if you open this code six months from now? The respondent pointed out that this is unreasonable: Answering that question requires...

... a skill that is at least on par with good programming skills. Certainly new CS students are unable to make this kind of decision.

The thread has been a lot of fun to read. I remain mostly of the view that:

  • It's better to write code that says what it means and thus needs as few comments as possible. This is a skill students can and should work on all the time.

  • If the code is likely to conflict with the expectations of the people most likely to read your code, then add a comment. This part depends a lot on context and experience. Students are just now earning their experience, and the context in which they work changes from course to course and job to job.
  • Students who care about programming, or who come to care about it over time, will care (or come to care) about comments, too.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 23, 2017 4:08 PM

Surrounding Yourself with Beginners

This comment at the close of a recent Dan Meyer post struck close to home:

I haven't found a way to generate these kinds of insights about math without surrounding myself with people learning math for the first time.

I've learned a lot about programming from teaching college students. Insights can come at all levels, from working with seniors who are writing compilers as their first big project, through midstream students who are learning OOP or functional programming as a second or third style, right down to beginners who are seeing variables and loops and functions for the first time.

Sometimes an insight comes when a student asks a new question, or an old question at the right time. I had a couple of students in my compiler course last fall who occasionally asked the most basic questions, especially about code generation. Listening to their questions and creating new examples to illustrate my answers helped me think differently about the run-time system.

Other times, they come while listening to students talk among themselves. One student's answer to another student's question can trigger an entirely new way for me to think about a concept I think I understand pretty well. I don't have any recent personal examples in mind, but this sort of experience seems to be part of what triggered Meyer's post.

People are always telling us to "be the least experienced person in the room", to surround ourselves with "smarter" or more experienced people and learn from them. But there is a lot to be learned from programmers who are just starting out. College profs have that opportunity all the time, if they are willing to listen and learn.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 10, 2017 3:55 PM

Follow-Up on Learning by Doing and Ubiquitous Information

A few quick notes on my previous post about the effect of ubiquitous information on knowing and doing.

~~~~

The post reminded a reader of something that Guy Steele said at DanFest, a 2004 festschrift in honor of Daniel Friedman's 60th birthday. As part of his keynote address, Steele read from an email message he wrote in 1978:

Sussman did me a very big favor yesterday -- he let me flounder around trying to build a certain LISP interpreter, and when I had made and fixed a critical bug he then told me that he had made (and fixed) the same mistake in Conniver. I learned a lot from that bug.

Isn't that marvelous? "I learned a lot from that bug."

Thanks to this reader for pointing me to a video of Steele's DanFest talk. You can watch this specific passage at the 12:08 mark, but really: You now have a link to an hour-long talk by Guy Steele that is titled "Dan Friedman--Cool Ideas". Watch the entire thing!

~~~~

If all you care about is doing -- getting something done -- then ubiquitous information is an amazing asset. I use Google and StackOverflow answers quite a bit myself, mostly to navigate the edges of languages that I don't use all the time. Without these resources, I would be less productive.

~~~~

Long-time readers may have read the story about how I almost named this blog something else. ("The Euphio Question" still sets my heart aflutter.) Ultimately I chose a title that emphasized the two sides of what I do as both a programmer and a teacher. The intersection of knowing and doing is where learning takes place. Separating knowing from doing creates problems.

In a post late last year, I riffed on some ideas I had as I read Learn by Painting, a New Yorker article about an experiment in university education in which everyone made art as a part of their studies.

That article included a line that expressed an interesting take on my blog's title: "Knowing and doing are two sides of the same activity, which is adapting to our environment."

That's a cool thought, but a rather pedestrian sentence. The article includes another, more poetic line that fits in nicely with the theme of the last couple of days:

Knowing is better than not knowing, but knowing without doing is as good as not knowing.

If I ever adopt a new tagline for my blog, it may well be this sentence. It is not strictly true, at least in a universal sense, but it's solid advice nonetheless.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Teaching and Learning

February 09, 2017 4:25 PM

Knowing, Doing, and Ubiquitous Information

I was recently reading an old bit-player entry on computing number factoids when I ran across a paragraph that expresses an all-too-common phenomenon of the modern world:

If I had persisted in my wrestling match, would I have ultimately prevailed? I'll never know, because in this era of Google and MathOverflow and StackExchange, a spoiler lurks around every cybercorner. Before I could make any further progress, I stumbled upon pointers to the work of Ira Gessel of Brandeis, who neatly settled the matter ... more than 40 years ago, when he was an undergraduate at Harvard.

The matter in this case was recognizing whether an arbitrary n is a Fibonacci number or not, but it could have been just about anything. If you need an answer to almost any question these days, it's already out there, right at your fingertips.
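For the curious: Gessel's result gives a delightfully simple test. A nonnegative integer n is a Fibonacci number exactly when 5n² + 4 or 5n² − 4 is a perfect square. A quick sketch in Python (the function names are mine):

```python
import math

def is_fibonacci(n: int) -> bool:
    """Gessel's test: n >= 0 is a Fibonacci number iff
    5*n*n + 4 or 5*n*n - 4 is a perfect square."""
    def is_square(m: int) -> bool:
        if m < 0:
            return False
        r = math.isqrt(m)
        return r * r == m
    return is_square(5 * n * n + 4) or is_square(5 * n * n - 4)

# The Fibonacci numbers below 100 pass; everything else fails.
print([k for k in range(100) if is_fibonacci(k)])
# → [0, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```

Of course, looking this up is exactly the spoiler Hayes describes; the fun was in the wrestling match he never got to finish.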

Google and StackExchange and MathOverflow are a boon for knowing, but not so much for doing. Unfortunately, doing often leads to a better kind of knowing. Jumping directly to the solution can rob us of some important learning. As Hayes reminds us in his article, it can also deprive us of a lot of fun.

You can still learn by doing and have a lot of fun doing it today -- if you can resist the temptation to search. After you struggle for a while and need some help, having answers at your fingertips becomes a truly magnificent resource, one that can help you get over humps you could never have gotten past so quickly in even the not-so-distant past.

The new world puts a premium on curiosity, the desire to find answers for ourselves. It also values self-denial, the ability to delay gratification while working hard to find answers that we might be able to look up. I fear that this creates a new gap for us to worry about in our education systems. Students who are curious and capable of self-denial are a new kind of "haves". They have always had a leg up in schools, but ubiquitous information magnifies the gap.

Being curious, asking questions, and wanting to create (not just look up) answers have never been more important to learning.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 28, 2017 8:10 AM

Curiosity on the Chessboard

I found a great story from Lubomir Kavalek in his recent column, Chess Champions and Their Queens. Many years ago, Kavalek was talking with Berry Withuis, a Dutch journalist, about Rashid Nezhmetdinov, who had played two brilliant queen sacrifices in the span of five years. The conversation reminded Withuis of a question he once asked of grandmaster David Bronstein:

"Is the Queen stronger than two light pieces?"

(The bishop and knight are minor, or "light", pieces.)

The former challenger for the world title took the question seriously. "I don't know," he said. "But I will tell you later."

That evening Bronstein played a simultaneous exhibition in Amsterdam and whenever he could, he sacrificed his Queen for two minor pieces. "Now I know," he told Withuis afterwards. "The Queen is stronger."

How is that for an empirical mind? Most chessplayers would have immediately answered "yes" to Withuis's question. But Bronstein -- one of the greatest players never to be world champion and author of perhaps the best book of tournament analysis in history -- didn't know for sure. So he ran an experiment!

We should all be so curious. And humble.

I wondered for a while if Bronstein could have improved his experiment by channeling Kent Beck's Three Bears pattern. (I'm a big fan of this learning technique and mention it occasionally here, most recently last summer.) This would require him to play many games from the other side of the sacrifice as well, with a queen against his opponents' two minor pieces. Then I realized that he would have a hard time convincing any of his opponents to sacrifice their queens so readily! This may be the sort of experiment that you can only conduct from one side, though in the era of chess computers we could perhaps find, or configure, willing collaborators.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns, Teaching and Learning

January 22, 2017 9:26 AM

To Teach is to Counsel Possibility and Patience

As I settle into a new semester of teaching students functional programming and programming languages, I find myself again in the role of grader of, and commenter on, code. This passage from a Tobias Wolff interview in the Paris Review serves as a guide for me:

Now, did [Pound] teach Eliot to write? No. But he did help him see that there were more notes to be played than he was playing. That is the kind of thing I hope to do. And to counsel patience -- the beauty of patience, which is not a virtue of the young.

Students often think that learning to program is all about the correctness of their code. Correctness matters, but there's a lot more. Knowing what is possible and learning to be patient as they learn often matter more than mere correctness. For some students, it seems, those lessons must begin before more technical habits can take hold.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

January 20, 2017 11:38 AM

Build the Bridge

On the racket-users mailing list yesterday, Matthias Felleisen issued "a research challenge that is common in the Racket world":

If you are here and you see the blueprints for paradise over there, don't just build paradise. Also build the bridge from here to there.

This is one of the things I love about Racket. And I don't use even 1% of the goodness that is Racket and its ecosystem.

Over the last couple of years, I have been migrating my Programming Languages course from a Scheme subset of Racket to Racket itself. Sometimes, this is simply a matter of talking about Racket, not Scheme. Other times, it means using some of the data structures, functions, and tools Racket provides rather than doing without or building our own. Occasionally, this shift requires changing something I do in class, because Racket is fussier than Scheme in some regards. That's usually a good thing, because the change makes Racket a better language for engineering big programs. In general, though, the shift goes smoothly.

Occasionally, the only challenge is a personal one. For example, I decided to use first and rest this semester when working with lists, instead of car and cdr. This should make some students' lives better. Learning a new language and a new style and new IDE all at once can be tough for students with only a couple of semesters' programming experience, and using words that mean what they say eliminates one unnecessary barrier. But, as I tweeted, I don't feel whole or entirely clean when I do so. As my college humanities prof taught me through Greek tragedies, old habits die hard, if at all.

One of my goals for the course this semester is to have the course serve as a better introduction to Racket for students who might be inclined to take advantage of its utility and power in later courses, or who simply want to enjoy working in a beautiful language. I always seem to have a few who do, but it might be nice if even more left the course thinking of Racket as a real alternative for their project work. We'll see how it goes.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

January 11, 2017 2:22 PM

An Undergraduate Research Project for Spring

Coming into the semester, I didn't have any students doing their undergraduate research under my supervision. That frees up some time each week, which is nice, but leaves my semester a bit poorer. Working with students one-on-one is one of the best parts of this job, even more so in relief against administrative duties. Working on these projects makes my weeks better, even when I don't have as much time to devote to them as I'd like.

Yesterday, a student walked in with a project that makes my semester a little busier -- and much more interesting. Last summer, he implemented some ideas on extensible effects in Haskell and has some ideas for ways to make the system more efficient.

This student knows a lot more about extensible effects and Haskell than I do, so I have some work to do just to get ready to help. I'll start with Extensible Effects: An Alternative to Monad Transformers, the paper by Oleg Kiselyov and his colleagues that introduced the idea to the wider computing community. This paper builds on work by Cartwright and Felleisen, published over twenty years ago, which I'll probably look at, too. The student has a couple of other things for me to read, which will appear in his more formal proposal this week. I expect that these papers will make my brain hurt, in the good way, and am looking forward to diving in.

In the big picture, most undergrad projects in my department are pretty minor as research goes. They are typically more D than R, with students developing something that goes beyond what they learn in any course and doing a little empirical analysis. The extensible effects project is much more ambitious. It builds on serious academic research. It works on a significant problem and proposes something new. That makes the project much more exciting for me as the supervisor.

I hope to report more later, as the semester goes on.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 08, 2017 9:48 AM

Finding a Balance Between Teaching and Doing

In the Paris Review's The Art of Fiction No. 183, the interviewer asks Tobias Wolff how he balances writing with university teaching. Wolff figures that teaching is a pretty good deal:

When I think about the kinds of jobs I've had and the ways I've lived, and still managed to get work done--my God, teaching in a university looks like easy street. I like talking about books, and I like encountering other smart, passionate readers, and feeling the friction of their thoughts against mine. Teaching forces me to articulate what otherwise would remain inchoate in my thoughts about what I read. I find that valuable, to bring things to a boil.

That reflects how I feel, too, as someone who loves to do computer science and write programs. As a teacher, I get to talk about cool ideas every day with my students, to share what I learn as I write software, and to learn from them as they ask the questions I've stopped asking myself. And they pay me. It's a great deal, perhaps the optimal point in the sort of balance that Derek Sivers recommends.

Wolff immediately followed those sentences with a caution that also strikes close to home:

But if I teach too much it begins to weigh on me--I lose my work. I can't afford to do that anymore, so I keep a fairly light teaching schedule.

One has to balance creative work with the other parts of life that feed the work. Professors at research universities, such as Wolff at Stanford, have different points of equilibrium available to them than profs at teaching universities, where course loads are heavier and usually harder to reduce.

I only teach one course a semester, which really does help me to focus creative energies around a smaller set of ideas than a heavier load does. Of course, I also have the administrative duties of a department head. They suffocate time and energy in a much less productive way than teaching does. (That's the subject of another post.)

Why can't Wolff afford to teach too many courses anymore? I suspect the answer is time. When you reach a certain age, you realize that time is no longer an ally. There are only so many years left, and Wolff probably feels the need to write more urgently. This sensation has been seeping into my mind lately, too, though I fear perhaps a bit too slowly.

~~~~

(I previously quoted Wolff from the same interview in a recent entry about writers who give advice that reminds us that there is no right way to write all programs. A lot of readers seemed to like that one.)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Teaching and Learning

December 30, 2016 12:42 PM

Looking Closer at My Course is Hard -- and Helpful

The CS faculty has decided to create common syllabi for all courses in the department. The motivation to do this came from several directions, including a need to meet university requirements regarding "outcomes assessment" and a desire to make sections taught by different instructors more consistent within and across semesters. But it will also help instructors improve their courses. Faculty teaching upper-division courses will have a better sense of what students coming into their courses should already know, and faculty teaching lower-division courses will have a better sense of what their students need to be able to do in their upcoming courses.

Our first pass at this is for each faculty member to use a common format to describe one of his or her courses. The common format requires us to define in some detail the purpose, goals, outcomes, and content of the course. Of course, we all have syllabi for our courses that cover some or all of these things, but now we are expected to make all the elements concrete enough that the syllabus can be used by other faculty to teach the course.

For my first attempt, I decided to write an extended syllabus for my Programming Languages and Paradigms course, which I am teaching again in the spring. I have been teaching this course for years, have detailed lecture notes for every session (plus many more), and already give students a syllabus that tries to explain in a useful way the purpose, goals, outcomes, and content of the course. That should give me a running start, right?

I've been working on putting my current syllabus's content into the extended syllabus format for several hours now. At this point, I have concluded that the process is three things: instructive, likely to be very helpful in the long run, and very hard to do.

Defining detailed goals and outcomes is instructive because it causes me to think about the course both at the highest level of detail (the goal of the course both in our curriculum and in our students' CS education) and at the lowest (what we want our students to know and be able to do when they leave the course). After teaching the course for many years, I tend to think big-picture thoughts about the course only at the beginning of the semester and only in an effort to make general modifications to the direction it takes. Then I think about finer details on a unit-by-unit and session-by-session basis, slowly evolving the course content in reaction to specific stimuli. Looking at the course across multiple levels of abstraction at the same time is teaching me a lot about what my course is and does in a way that I don't usually see when I'm planning only at the high level or executing only at the day-to-day level.

One specific lesson I've learned is really a stark reminder of something I've always known: some of my sessions are chock-full of stuff: ideas, techniques, code, examples, .... That is great for exposing students to a wide swath of the programming languages world, but it is also a recipe for cognitive overload.

This process is helpful because it causes me to think about concrete ways I can make the course better. I am finding holes in my coverage of certain topics and leaps from one concept to another that are intuitive in my mind but not documented anywhere.

I've been taking notes as I go along, detailing specific changes I can make this spring:

  • to session materials, so that they give more examples of new concepts before I ask students to use the concepts in their work
  • to homework assignments, so that they emphasize specific goals of the course more clearly and cover goals that seem to have lost coverage over time
  • to exams, so that they assess the outcomes we hope to achieve in the course

Designing class sessions, homework, and exams in terms of goals and outcomes creates a virtuous cycle in which the different elements of the course build on and reinforce one another. This is perhaps obvious to all you teachers out there, as it is to me, but it's easy to lose sight of over time.

But my most salient conclusion at this moment is that this is hard. It is difficult to explain the course in enough detail that faculty outside the area can grok the course as a part of our curriculum. It's difficult to explain the course in enough detail that other faculty could, at least in principle, teach it as designed. It's difficult to design a course carefully enough to be justifiably confident that it meets your goals for the course. That sounds a little like programming.

But I'm glad I'm doing it. It's worth the effort to design a course this carefully, and to re-visit the design periodically. That sounds a little like programming, too.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 26, 2016 8:38 AM

Learn By Programming

The latest edition of my compiler course has wrapped, with grades submitted and now a few days distance between us and the work. The course was successful in many ways, even though not all of the teams were able to implement the entire compiler. That mutes the students' sense of accomplishment sometimes, but it's not unusual for at least some of the teams to have trouble implementing a complete code generator. A compiler is a big project. Fifteen weeks is not a lot of time. In that time, students learn a lot about compilers, and also about how to work as a team to build a big program using some of the tools of modern software development. In general, I was quite proud of the students' efforts and progress. I hope they were proud of themselves.

One of the meta-lessons students tend to learn in this course is one of the big lessons of any project-centered course:

... making something is a different learning experience from remembering something.

I think that a course like this one also helps most of them learn something else even more personal:

... the discipline in art-making is exercised from within rather than without. You quickly realize that it's your own laziness, ignorance, and sloppiness, not somebody else's bad advice, that are getting in your way. No one can write your [program] for you. You have to figure out a way to write it yourself. You have to make a something where there was a nothing.

"Laziness", "ignorance", and "sloppiness" seem like harsh words, but really they aren't. They are simply labels for weaknesses that almost all of us face when we first learn to create things on our own. Anyone who has written a big program has probably encountered them in some form.

I learned these lessons as a senior, too, in my university's two-term project course. It's never fun to come up short of our hopes or expectations. But most of us do it occasionally, and never more reliably than we are first learning how to make something significant. It is good for us to realize early on our own responsibility for how we work and what we make. It empowers us to take charge of our behavior.

Black Mountain College's Lake Eden campus

The quoted passages are, with the exception of the word "program", taken from Learn by Painting, a New Yorker article about "Leap Before You Look: Black Mountain College, 1933-1957", an exhibit at the Institute of Contemporary Art in Boston. Black Mountain was a liberal arts college with a curriculum built on top of an unusual foundation: making art. Though the college lasted less than a quarter century, its effects were felt across most art disciplines in the twentieth century. But its mission was bigger: to educate citizens, not artists, through the making of art. Making something is a different learning experience from remembering something, and BMC wanted all of its graduates to have this experience.

The article was a good read throughout. It closes with a comment on Black Mountain's vision that touches on computer science and reflects my own thinking about programming. This final paragraph begins with a slight indignity to us in CS but turns quickly into admiration:

People who teach in the traditional liberal-arts fields today are sometimes aghast at the avidity with which undergraduates flock to courses in tech fields, like computer science. Maybe those students see dollar signs in coding. Why shouldn't they? Right now, tech is where value is being created, as they say. But maybe students are also excited to take courses in which knowing and making are part of the same learning process. Those tech courses are hands-on, collaborative, materials-based (well, virtual materials), and experimental -- a digital Black Mountain curriculum.

When I meet with prospective students and their parents, I stress that, while computer science is technical, it is not vocational. It's more. Many high school students sense this already. What attracts them to the major is a desire to make things: games and apps and websites and .... Earning potential appeals to some of them, of course, but students and parents alike seem more interested in something else that CS offers them: the ability to make things that matter in the modern world. They want to create.

The good news suggested in "Learn by Painting", drawing on the Black Mountain College experiment, is that learning by making things is more than just that. It is a different and, in most ways, more meaningful way to learn about the world. It also teaches you a lot about yourself.

I hope that at least a few of my students got that out of their project course with me, in addition to whatever they learned about compilers.

~~~~

IMAGE. The main building of the former Black Mountain College, on the grounds of Camp Rockmont, a summer camp for boys. Courtesy of Wikipedia. Public domain.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 21, 2016 2:31 PM

Retaining a Sense of Wonder

A friend of mine recently shared a link to Radio Garden on a mailing list (remember those?), and in the ensuing conversation, another friend wrote:

I remember when I was a kid playing with my Dad's shortwave radio and just being flabbergasted when late one night I tuned in a station from Peru. Today you can get on your computer and communicate instantly with any spot on the globe, and that engenders no sense of wonder at all.

Such is the nature of advancing technology. Everyone becomes acclimated to amazing new things, and pretty soon they aren't even things any more.

Teachers face a particularly troublesome version of this phenomenon. Teach a subject for a few years, and pretty soon it loses its magic for you. It's all new to your students, though, and if you can let them help you see it through their eyes, you can stay fresh. The danger, though, is that it starts to look pretty ordinary to you, even boring, and you have a hard time helping them feel the magic.

If you read this blog much, you know that I'm pretty easy to amuse and pretty easy to make happy. Even so, I have to guard against taking life and computer science for granted.

Earlier this week, I was reading one of the newer tutorials in Matthew Butterick's Beautiful Racket, Imagine a language: wires. In it, he builds a DSL to solve one of the problems in the 2015 edition of Advent of Code, Some Assembly Required. The problem is fun, specifying a circuit in terms of a small set of operations for wires and gates. Butterick's approach to solving it is fun, too: creating a DSL that treats the specification of a circuit as a program to interpret.

This is no big deal to a jaded old computer scientist, but remember -- or imagine -- what this solution must seem like to a non-computer scientist or to a CS student encountering the study of programming languages for the first time. With a suitable interpreter, every dataset is a program. If that isn't amazing enough, some wires datasets introduce sequencing problems, because the inputs to a gate are defined in the program after the gate. Butterick uses a simple little trick: define wires and gates as functions, not data. This simple little trick is really a big idea in disguise: Functions defer computation. Now circuit programs can be written in any order and executed on demand.
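To make the trick concrete, here is a minimal sketch in Python (Butterick builds this as a proper Racket DSL; the names and representation here are my own illustration):

```python
# Represent each wire as a zero-argument function (a thunk), so its
# value is computed only on demand. A gate can then be defined before
# the wires that feed it, solving the sequencing problem.
wires = {}

def define_wire(name, fn):
    wires[name] = fn          # store a function, not a value

def wire(name):
    return wires[name]()      # evaluate only when the value is needed

# The gate "a" references wires defined later in the "program".
define_wire("a", lambda: wire("x") & wire("y"))   # an AND gate
define_wire("x", lambda: 123)
define_wire("y", lambda: 456)

print(wire("a"))   # prints 72, i.e., 123 & 456
```

A real Advent of Code input would also want each wire's function memoized, so that shared wires are evaluated only once. But the essential idea is the deferred computation.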

Even after all these years, computing's most mundane ideas can still astonish me sometimes. I am trying to keep my sense of wonder high and to renew it whenever it starts to flag. This is good for me, and good for my students.

~~~~

P.S. As always, I recommend Beautiful Racket, and Matthew Butterick's work more generally, quite highly. He has a nice way of teaching useful ideas in a way that appreciates their beauty.

P.P.S. The working title of this entry was "Paging Louis C.K., Paging Louis C.K." That reference may be a bit dated by now, but still it made me smile.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 16, 2016 2:14 PM

Language and Thinking

Earlier this week, Rands tweeted:

Tinkering is a deceptively high value activity.

... to which I followed up:

Which is why a language that enables tinkering is a deceptively high value tool.

I thought about these ideas a couple of days later when I read The Running Conversation in Your Head and came across this paragraph:

The idea is not that you need language for thinking but that when language comes along, it sure is useful. It changes the way you think, it allows you to operate in different ways because you can use the words as tools.

This is how I think about programming in general and about new, and better, programming languages in particular. A programmer can think quite well in just about any language. Many of us cut our teeth in BASIC, and simply learning how to think computationally allowed us to think differently than we did before. But then we learn a radically different or more powerful language, and suddenly we are able to think new thoughts, thoughts we didn't even conceive of in quite the same way before.

It's not that we need the new language in order to think, but when it comes along, it allows us to operate in different ways. New concepts become new tools.

I am looking forward to introducing Racket and functional programming to a new group of students this spring semester. First-class functions and higher-order functions can change how students think about the most basic computations, such as loops, and about higher-level techniques such as OOP. I hope to do a better job this time around helping them see the ways in which it really is different.
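One small illustration of the shift (my own example, in Python for convenience, though the course itself uses Racket):

```python
prices = [10.0, 24.5, 3.75]

# Loop-centered thinking: accumulate state step by step.
total = 0.0
for p in prices:
    if p > 5.0:
        total += p * 1.07

# Function-centered thinking: name the pieces and compose them.
with_tax = lambda p: p * 1.07
over_five = lambda p: p > 5.0
total2 = sum(map(with_tax, filter(over_five, prices)))

assert abs(total - total2) < 1e-9
```

Same computation, but in the second version the loop disappears into functions that can be named, passed around, and reused.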

To echo the Running Conversation article again, when we learn a new programming style or language, "Something really special is created. And the thing that is created might well be unique in the universe."


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

December 12, 2016 3:15 PM

Computer Science Is Not That Special

I'm reminded of a student I met with once who told me that he planned to go to law school, and then a few minutes later, when going over a draft of a lab report, said "Yeah... Grammar isn't really my thing." Explaining why I busted up laughing took a while.

When I ask prospective students why they decided not to pursue a CS degree, they often say things to the effect of "Computer science seemed cool, but I heard getting a degree in CS was a lot of work." or "A buddy of mine told me that programming is tedious." Sometimes, I meet these students as they return to the university to get a second degree -- in computer science. Their reasons for returning vary from the economic (a desire for better career opportunities) to personal (a desire to do something that they have always wanted to do, or to pursue a newfound creative interest).

After you've been in the working world a while, a little hard work and some occasional tedium don't seem like deal breakers any more.

Such conversations were on my mind as I read physicist Chad Orzel's recent Science Is Not THAT Special. In this article, Orzel responds to the conventional wisdom that becoming a scientist and doing science involve a lot of hard work that is unlike the exciting stuff that draws kids to science in the first place. Then, when kids encounter the drudgery and hard work, they turn away from science as a potential career.

Orzel's takedown of this idea is spot on. (The quoted passage above is one of the article's lighter moments in confronting the stereotype.) Sure, doing science involves a lot of tedium, but this problem is not unique to science. Getting good at anything requires a lot of hard work and tedious attention to detail. Every job, every area of expertise, has its moments of drudgery. Even the rare few who become professional athletes and artists, with careers generally thought of as dreams that enable people to earn a living doing the thing they love, spend endless hours engaged in the drudgery of practicing technique and automatizing physical actions that become their professional vocabulary.

Why do we act as if science is any different, or should be?

Computer science gets this rap, too. What could be worse than fighting with a compiler to accept a program while you are learning to code? Or plowing through reams of poorly documented API descriptions to plug your code into someone's e-commerce system?

Personally, I can think of lots of things that are worse. I am under no illusion, however, that other professionals are somehow shielded from such negative experiences. I just prefer my pains to theirs.

Maybe some people don't like certain kinds of drudgery. That's fair. Sometimes we gravitate toward the things whose drudgery we don't mind, and sometimes we come to accept the drudgery of the things we love to do. I'm not sure which explains my fascination with programming. I certainly enjoy the drudgery of computer science more than that of most other activities -- or at least I suffer it more gladly.

I'm with Orzel. Let's be honest with ourselves and our students that getting good at anything takes a lot of hard work and, once you master something, you'll occasionally face some tedium in the trenches. Science, and computer science in particular, is not that much different from anything else.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

December 09, 2016 1:54 PM

Two Quick Hits with a Mathematical Flavor

I've been wanting to write a blog entry or two lately about my compiler course and about papers I've read recently, but I've not managed to free up much time as the semester winds down. That's one of the problems with having Big Ideas to write about: they seem daunting and, at the very least, take time to work through.

So instead here are two brief notes about articles that crossed my newsfeed recently and planted themselves in my mind. Perhaps you will enjoy them even without much commentary from me.

A Student's Unusual Proof Might Be A Better Proof

I asked a student to show that between any two rationals is a rational.

She did the following: if x < y are rational then take δ << y-x and rational and use x+δ.
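Spelled out side by side (my rendering, not the article's notation), the textbook argument and the student's argument are:

```latex
% Textbook argument: the midpoint of two rationals is rational.
x, y \in \mathbb{Q},\ x < y
  \;\Rightarrow\; x < \tfrac{x+y}{2} < y,
  \quad \tfrac{x+y}{2} \in \mathbb{Q}.

% Student's argument: step a small rational distance past x.
\text{pick } \delta \in \mathbb{Q} \text{ with } 0 < \delta < y - x
  \;\Rightarrow\; x < x + \delta < y,
  \quad x + \delta \in \mathbb{Q}.
```

The student's version generalizes more readily: it works for any rational step small enough, not just the midpoint.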

I love the student's two proofs in the article! Student programmers are similarly creative. Their unusual solutions often expose biases in my thinking and give me new ways to think about a problem. If nothing else, they help me understand better how students think about ideas that I take for granted.

Numberless Word Problems

Some girls entered a school art competition. Fewer boys than girls entered the competition.

She projected her screen and asked, "What math do you see in this problem?"

Pregnant pause.

"There isn't any math. There aren't any numbers."

I am fascinated by the possibility of adapting this idea to teaching students to think like a programmer. In an intro course, for example, students struggle with computational ideas such as loops and functions even though they have a lot of experience with these ideas embodied in their daily lives. Perhaps the language we use gets in the way of them developing their own algorithmic skills. Maybe I could use computationless word problems to get them started?

I'm giving serious thought to ways I might use this approach to help students learn functional programming in my Programming Languages course this spring. The author describes how to write numberless word problems, and I'm wondering how I might bring the philosophy to computer science. If you have any ideas, please share!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

December 05, 2016 2:42 PM

Copying Interfaces, Copying Code

Khoi Vinh wrote a short blog entry called The Underestimated Merits of Copying Someone Else's Work that reminds us how valuable copying others' work, a standard practice in the arts, can be. The most basic form is copying at the textual level. Sometimes, the value is mental or mechanical:

Hunter S. Thompson famously re-typed, word for word, F. Scott Fitzgerald's "The Great Gatsby" just to learn how it was done.

This made me think back to the days when people typed up code they found in Byte magazine and other periodicals. Of course, typing a program gave you more than practice typing or a sense of what it was like to type that much; it also gave you a working program that you could use and tinker with. I don't know if anyone would ever copy a short story or novel by hand so that they could morph it into something new, but we can do that meaningfully with code.

I missed the "copy code from Byte" phase of computing. My family never had a home computer, and by the time I got to college and changed my major to CS, I had plenty of new programs to write. I pulled ideas about chess-playing programs and other programs I wanted to write from books and magazines, but I never typed up an entire program's source code. (I mention one of my first personal projects in an old OOPSLA workshop report.)

I don't hear much these days about people copying code keystroke for keystroke. Zed Shaw has championed this idea in a series of introductory programming books such as Learn Python The Hard Way. There is probably something to be learned by copying code Hunter Thompson-style, feeling the rhythm of syntax and format by repetition, and soaking up semantics along the way.

Vinh has a more interesting sort of copying in mind, though: copying the interface of a software product:

It's odd then to realize that copying product interfaces is such an uncommon learning technique in design. ... it's even easier to re-create designs than it is to re-create other forms of art. With a painting or sculpture, it's often difficult to get access to the historically accurate tools and materials that were used to create the original. With today's product design, the tools are readily available; most of us already own the exact same software employed to create any of the most prominent product designs you could name.

This idea generalizes beyond interfaces to any program for which we don't have source code. We often talk about reverse engineering a program, but in my experience this usually refers to creating a program that behaves "just like" the original. Copying an interface pixel by pixel, like copying a program or novel character by character, requires the artist to attend to the smallest details -- to create an exact replica, not a similar work.

We cannot reverse engineer a program and arrive at identical source code, of course, but we can try to replicate behavior and interface exactly. Doing so might help a person appreciate the details of code more. Such a practice might even help a programmer learn the craft of programming in a different way.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

November 27, 2016 10:26 AM

Good Teaching Is Grounded In Generosity Of Spirit

In My Writing Education: A Time Line, George Saunders recounts stories of his interactions with writing teachers over the years, first in the creative writing program at Syracuse and later as a writer and teacher himself. Along the way, he shows us some of the ways that our best teachers move us.

Here, the teacher gets a bad review:

Doug gets an unkind review. We are worried. Will one of us dopily bring it up in workshop? We don't. Doug does. Right off the bat. He wants to talk about it, because he feels there might be something in it for us. The talk he gives us is beautiful, honest, courageous, totally generous. He shows us where the reviewer was wrong -- but also where the reviewer might have gotten it right. Doug talks about the importance of being able to extract the useful bits from even a hurtful review: this is important, because it will make the next book better. He talks about the fact that it was hard for him to get up this morning after that review and write, but that he did it anyway. He's in it for the long haul, we can see.

I know some faculty who basically ignore student assessments of their teaching. They paid attention for a while at the beginning of their careers, but it hurt too much, so they stopped. Most of the good teachers I know, though, approach their student assessments the way that Doug approaches his bad review: they look for the truths in the reviews, take those truths seriously, and use them to get better. Yes, a bad set of assessments hurts. But if you are in it for the long haul, you get back to work.

Here, the teacher gives a bad review:

What Doug does for me in this meeting is respect me, by declining to hyperbolize my crap thesis. I don't remember what he said about it, but what he did not say was, you know: "Amazing, you did a great job, this is publishable, you rocked our world with this! Loved the elephant." There's this theory that self-esteem has to do with getting confirmation from the outside world that our perceptions are fundamentally accurate. What Doug does at this meeting is increase my self-esteem by confirming that my perception of the work I'd been doing is fundamentally accurate. The work I've been doing is bad. Or, worse: it's blah. This is uplifting -- liberating, even -- to have my unspoken opinion of my work confirmed. I don't have to pretend bad is good. This frees me to leave it behind and move on and try to do something better. The main thing I feel: respected.

Sometimes, students make their best effort but come up short. They deserve the respect of an honest review. Honest doesn't have to be harsh; there is a difference between being honest and being a jerk. Sometimes, students don't make their best effort, and they deserve the respect of an honest review, too. Again, being honest doesn't mean being harsh. In my experience, most students appreciate an honest, objective review of their work. They almost always know when they are coming up short, or when they aren't working hard enough. When a teacher confirms that knowledge, they are freed -- or motivated in a new way -- to move forward.

Here, the teacher reads student work:

I am teaching at Syracuse myself now. Toby, Arthur Flowers, and I are reading that year's admissions materials. Toby reads every page of every story in every application, even the ones we are almost certainly rejecting, and never fails to find a nice moment, even when it occurs on the last page of the last story of a doomed application. "Remember that beautiful description of a sailboat on around page 29 of the third piece?" he'll say. And Arthur and I will say: "Uh, yeah ... that was ... a really cool sailboat." Toby has a kind of photographic memory re stories, and such a love for the form that goodness, no matter where it's found or what it's surrounded by, seems to excite his enthusiasm. Again, that same lesson: good teaching is grounded in generosity of spirit.

It has taken me a long time as a teacher to learn to have Toby's mindset when reading student work, and I'm still learning. Over the last few years, I've noticed myself trying more deliberately to find the nice moments in students' programs, even the bad ones, and to tell students about them. That doesn't mean being dishonest about the quality of the overall program. But nice moments are worth celebrating, wherever they are found. Sometimes, those are precisely the elements students need to hear about, because they are the building blocks for getting better.

Finally, here is the teacher talking about his own craft:

During the Q&A someone asks what Toby would do if he couldn't be a writer.
A long, perplexed pause.
"I would be very sad", he finally says.

I like teaching computer science, but what has enabled me to stay in the classroom for so many years and given me the stamina to get better at teaching is that I like doing computer science. I like to program. I like to solve problems. I like to find abstractions and look for ways to solve other problems. There are many things I could do if I were not a computer scientist, but knowing what I know now, I would be a little sad.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

November 20, 2016 10:17 AM

Celebrating a Friend's Success

Wade Arnold accepting the Young Alumni Award from UNI President Jim Wohlpart

Last week, I read a blog entry by Ben Thompson that said Influence lives at intersections. Thompson was echoing a comment about Daniel Kahneman's career: "Intellectual influence is the ability to dissolve disciplinary boundaries." These were timely references for my week.

On Friday night, I had the pleasure of attending the Heritage Honours Awards, an annual awards dinner hosted by my university's alumni association. One of our alumni, Wade Arnold, received the Young Alumni Award for demonstrated success early in a career. I mentioned Wade in a blog entry several years ago, when he and I spoke together at a seminar on interactive digital technologies. That day, Wade talked about intersections:

It is difficult to be the best at any one thing, but if you are very good at two or three or five, then you can be the best in a particular market niche. The power of the intersection.

Wade built his company, Banno, by becoming very good at several things, including functional programming, computing infrastructure, web development, mobile development, and financial technology. He was foresightful and lucky enough to develop this combination of strengths before most other people did. Most important, though, he worked really hard to build his company: a company that people wanted to work with, and a company that people wanted to work for. As a result, he was able to grow a successful start-up in a small university town in the middle of the country.

It's been a delight for me to know Wade all these years and watch him do his thing. I'll bet he has some interesting ideas in store for the future.

The dinner also provided me with some unexpected feelings. Several times over the course of the evening, someone said, "Dr. Wallingford -- I feel like I know you." I had the pleasure of meeting Wade's parents, who said kind things about my influence on their son. Even his nine-year-old son said, "My dad was talking about you in the car on the drive over." No one was confused about whom we were there to honor Friday night, about who had done the considerable work to build himself into an admirable man and founder. That was all Wade. But my experience that night is a small reminder to all you teachers out there: you do have an effect on people. It was certainly a welcome reminder for me at the end of a trying semester.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

October 22, 2016 2:00 PM

Competence and Creating Conditions that Minimize Mistakes

I enjoyed this interview with Atul Gawande by Ezra Klein. When talking about making mistakes, Gawande notes that humans have enough knowledge to cut way down on errors in many disciplines, but we do not always use that knowledge effectively. Mistakes come naturally from the environments in which we work:

We're all set up for failure under the conditions of complexity.

Mistakes are often more a matter of discipline and attention to detail than a matter of knowledge or understanding. Klein captures the essence of Gawande's lesson in one of his questions:

We have this idea that competence is not making mistakes and getting everything right. [But really...] Competence is knowing you will make mistakes and setting up a context that will help reduce the possibility of error but also help deal with the aftermath of error.

In my experience, this is a hard lesson for computer science students to grok. It's okay to make mistakes, but create conditions where you make as few as possible and in which you can recognize and deal with the mistakes as quickly as possible. High-discipline practices such as test-first and pair programming, version control, and automated builds make a lot more sense when you see them from this perspective.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Software Development, Teaching and Learning

October 17, 2016 4:10 PM

Using Programming to Learn Math, Even at the University

There is an open thread on the SIGCSE mailing list called "Forget the language wars, try the math wars". Faculty are discussing how to justify math requirements on a CS major, especially for students who "just want to be programmers". Some people argue that math requirements, in particular calculus, are a barrier to recruiting students who can succeed in computer science.

Somewhere along the line, Cay Horstmann wrote a couple of things I agree with. First, he said that he didn't want to defend the calculus requirement because most calculus courses do not teach students how to think "but how to follow recipes". I have long had this complaint about calculus, especially as it's taught in most US high schools and universities. Then he wrote something more positive:

What I would like to see is teaching math alongside with programming. None of my students were able to tell me what sine and cosine were good for, but when they had to program a dial [in a user interface], they said "oh".

Couldn't that "oh" have come earlier in their lives? Why don't students do programming in middle school math? I am not talking large programs--just a few lines, so that they can build models and intuition.

I agree wholeheartedly. And even if students do not have such experiences in their K-12 math classes, the least we could do is help them have that "oh" experience earlier in their university studies.
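Horstmann's dial is a nice case in point. A few lines of code are enough to make sine and cosine earn their keep. Here is a minimal sketch in Python; the particular gauge geometry (a 270-degree sweep) is my own invented example, not from Horstmann's discussion:

```python
import math

def dial_needle(value, lo, hi, radius=100):
    """Map a value in [lo, hi] to the (x, y) tip of a dial needle.

    The dial sweeps from 225 degrees (lower left) through straight up
    to -45 degrees (lower right), a typical 270-degree gauge.
    """
    fraction = (value - lo) / (hi - lo)
    angle = math.radians(225 - 270 * fraction)
    return (radius * math.cos(angle), radius * math.sin(angle))

# A needle at the midpoint of the range points straight up: (0, 100).
x, y = dial_needle(50, 0, 100)
```

A student who has memorized "cosine gives x, sine gives y" without any context can see here what the functions are for: turning an angle into a position on the screen.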

My colleagues and I have been discussing our Discrete Structures course now for a few weeks, including expected outcomes, its role as a prerequisite to other courses, and how we teach it. I have suggested that one of the best ways to learn discrete math is to connect it with programs. At our university, students have taken at least one semester of programming (currently, in Python) before they take Discrete. We should use that to our advantage!

A program can help make an abstract idea concrete. When learning about set operations, why do only paper-and-pencil exercises when you can use simple Python expressions in the REPL? Yes, adding programming to the mix creates new issues to deal with, but if designed well, such instruction could both improve students' understanding of discrete structures -- as Horstmann says, helping them build models and intuition -- and give students more practice writing simple programs. An ancillary benefit might be to help students see that computer scientists can use computation to help them learn new things, thus forming habits that can extend to wider settings.
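For instance, the standard set operations from a Discrete Structures course map almost one-to-one onto Python's set operators. A quick sketch (the course rosters here are invented for illustration):

```python
# Two invented course rosters, as sets of student names.
a = {"ann", "bob", "carl", "dee"}
b = {"bob", "dee", "eve"}

union        = a | b   # students in either course
intersection = a & b   # students in both courses
difference   = a - b   # in a but not in b
symmetric    = a ^ b   # in exactly one of the two

# Subset and membership tests read almost like the math notation.
both_in_a = {"ann", "carl"} <= a   # True: {ann, carl} is a subset of a
```

Typing these into the REPL and predicting each result before hitting return is exactly the kind of model-building exercise a paper worksheet aims at, with immediate feedback thrown in for free.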

Unfortunately, the most popular Discrete Structures textbooks don't help much. They do try to use CS-centric examples, but they don't seem otherwise to use the fact that students are CS majors. I don't really blame them. They are writing for a market in which students study many different languages in CS 1, so they can't (and shouldn't) assume any particular programming language background. Even worse, the Discrete Structures course appears at different places throughout the CS curriculum, which means that textbooks can't assume even any particular non-language CS experience.

Returning to Horstmann's suggestion to augment math instruction with programming in K-12, there is, of course, a strong movement nationally to teach computer science in high school. My state has been disappointingly slow to get on board, but we are finally seeing action. But most of the focus in this nationwide movement is on teaching CS qua CS, with less emphasis on integrating CS into math and other core courses.

For this reason, let us again take a moment to thank the people behind the Bootstrap project for leading the charge in this regard, helping teachers use programming in Racket to teach algebra and other core topics. They are even evaluating the efficacy of the work and showing that the curriculum works. This may not surprise us in CS, but empirical evidence of success is essential if we hope to get teacher prep programs and state boards of education to take the idea seriously.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

September 25, 2016 9:40 AM

There Is Only One Culture, But Two Languages

W.H. Auden, in A Certain World, on the idea of The Two Cultures:

Of course, there is only one. Of course, the natural sciences are just as "humane" as letters. There are, however, two languages, the spoken verbal language of literature, and the written sign language of mathematics, which is the language of science. This puts the scientist at a great advantage, for, since like all of us he has learned to read and write, he can understand a poem or a novel, whereas there are very few men of letters who can understand a scientific paper once they come to the mathematical parts.

When I was a boy, we were taught the literary languages, like Latin and Greek, extremely well, but mathematics atrociously badly. Beginning with the multiplication table, we learned a series of operations by rote which, if remembered correctly, gave the "right" answer, but about any basic principles, like the concept of number, we were told nothing. Typical of the teaching methods then in vogue is the mnemonic which I had to learn.
Minus times Minus equals Plus:
The reason for this we need not discuss.

Sadly, we still teach young people that it's okay if math and science are too hard to master. They grow into adults who feel a chasm between "arts and letters" and "math and science". But as Auden notes rightly, there is no chasm; there is mostly just another language to learn and appreciate.

(It may be some consolation to Auden that we've reached a point where most scientists have to work to understand papers written by scientists in other disciplines. They are written in highly specialized languages.)

In my experience, it is more acceptable for a humanities person to say "I'm not a science person" or "I don't like math" than for a scientist to say something similar about literature, art, or music. The latter person is thought, silently, to be a Philistine; the former, an educated person with a specialty.

I've often wondered if this experience suffers from observation bias or association bias. It may well. I certainly know artists and writers who have mastered both languages and who remain intensely curious about questions that span the supposed chasm between their specialties and mine. I'm interested in those questions, too.

Even with this asymmetry, the presumed chasm between cultures creates low expectations for us scientists. Whenever my friends in the humanities find out that I've read all of Kafka's novels and short stories; that Rosencrantz and Guildenstern Are Dead is my favorite play, or that I even have a favorite play; that I really enjoyed the work of choreographer Merce Cunningham; that my office bookshelf includes the complete works of William Shakespeare and a volume of William Blake's poetry -- I love the romantics! -- most seem genuinely surprised. "You're a computer scientist, right?" (Yes, I like Asimov, Heinlein, Clarke, and Bradbury, too.)

Auden attributes his illiteracy in the language of mathematics and science to bad education. The good news is that we can reduce, if not eliminate, the language gap by teaching both languages well. This is a challenge for both parents and schools and will take time. Change is hard, especially when it involves the ways we talk about the world.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Teaching and Learning

September 18, 2016 3:49 PM

Talking Shop

a photo of the Blueridge Orchard, visited by cyclists on the Cedar Valley Farm Ride

I agree with W.H. Auden:

Who on earth invented the silly convention that it is boring or impolite to talk shop? Nothing is more interesting to listen to, especially if the shop is not one's own.

My wife went on a forty-mile bike ride this morning, a fundraiser for the Cedar Valley Bicycle Collective, which visited three local farms. At those stops, I had the great fortune to listen to folks on all three farms talk shop. We learned about making ice cream and woodcarving at Three Pines Farm. We learned about selecting, growing, and picking apples -- and the damage hail and bugs can do -- at Blueridge Orchard. And the owner of the Fitkin Popcorn Farm talked about the popcorn business. He showed us the machines they use to sort the corn out of the field, first by size and then by density. He also talked about planting fields, harvesting the corn, and selling the product nationally. I even learned that we can pop the corn while it's still on the ears! (This will happen in my house very soon.)

I love to listen to people talk shop. In unguarded moments, they speak honestly about something they love and know deeply. They let us in on what it is like for them to work in their corner of the world. However big I try to make my world, there is so much more out there to learn.

The Auden passage is from his book A Certain World, a collage of poems, quotes, and short pieces from other writers with occasional comments of his own. Auden would have been an eclectic blogger! This book feels like a Tumblr blog, without all the pictures and 'likes'. Some of the passages are out of date, but they let us peek in on the mind of an accomplished poet. A little like good shop talk.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Teaching and Learning

September 02, 2016 4:12 PM

Be Open to What Students Think -- For Real

On the first day of my compiler class this fall, I had my students fill out a short survey to help me set the stage for the course. After I asked them to list the three or four programming languages they know best, I asked them:

  • Based on your experience using these languages, list them in order from easiest to use to hardest to use.
  • Given what you know about them, list these languages in order from easiest to compile to hardest to compile.

We then used their answers to unpack what "easy" and "hard" mean in these contexts, and what it would even mean to answer these questions.

While they were completing the survey, one student raised his hand and asked, "When you say easy or hard to compile, do you mean for the programmer or the compiler?" I laughed almost immediately.

Fortunately, I've had all these students in class before, and they know that I'm not a mean-spirited person. Even so, I just as quickly apologized for laughing and let him know that I wasn't laughing at the question so much as laughing at my own surprise: It had never occurred to me that someone might interpret the question in that way!

I realized, though, that from a student's perspective, getting a Python program to the point of runnability is a very different thing from getting, say, a Java or Ada program to the point of runnability. For a beginner, to get his or her first few Ada programs to compile is indeed a chore. I remember feeling the same way when I learned Haskell as a relatively experienced professor and programmer, many years after I had last been a student in a classroom.

This story came to mind as I read Required Reading for Math Teachers this morning. It's actually pretty good reading for teachers of any subject. Toward the end of the post, the author reminds us that it helps for teachers to be legitimately open to students' thought processes, whether or not they think what we think they should be thinking. In fact, those are precisely the moments when we want to be most open to what they are thinking. These are the moments that help us to diagnose errors in their thinking -- and in ours.

This passage resonated with my experience:

I have throughout my career been repeatedly surprised by the discovery that nearly every time a student offers an idea authentically (i.e. not as just a random guess), it makes some sort of sense. Maybe not complete sense, and maybe it's not at all where I was headed. But if I can curb my initial reaction of "this kid is totally confused" long enough to actually take in the train of thought, there is almost uniformly some worthwhile reasoning inside it. Then even if I need to say "we're going to stick to the topic", I can do so after acknowledging the reasoning.

Acknowledging students' ideas and thinking is a matter of basic respect, but it also plays a big role in the environment we create in our classes. I hope that I have been respectful and open enough with these students in the past that my question-asker could trust that I wasn't mocking him and that I was genuinely surprised. We all learned something that day.

As that blog post goes on to say, we have to make sure that the questions we ask students are legitimate questions. We communicate this intention by acknowledging people when they treat our questions as legitimate. We teachers need to treat our students' questions the same way.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

August 30, 2016 3:36 PM

Things People Should Know About Being a Teacher

In 7 things I wish people understood about being a teacher, Andrew Simmons captures a few of the things that make teaching so rewarding and so challenging. If you don't understand these truths, you may not understand why anyone would want to teach. You also run the risk of misunderstanding the problems with our education system and thus supporting changes that are unlikely to fix them. Check it out.

Even though Simmons writes of teaching high school, most of what he says applies just as well to university professors. I especially liked this comment, on what software developers call sustainable pace:

... would-be superteachers are smart, sometimes masochistic 23-year-olds working 18-hour days to pump up test scores for a few years before moving on to administrative positions, law school, or nervous breakdowns. They embrace an unsustainable load.

I used to wonder why so many good teachers ended up leaving the classroom. One reason is burn-out. Universities burn out researchers and teachers alike by pushing them onto a treadmill, or by letting them get on and stay there of their own volition. Good teaching can happen every year, if we learn how to maintain balance.

My favorite line of the article, though, is this gem:

Everything I learn is filtered through the possibility that it might be taught.

When I read that line, I nodded my head silently. This guy really is a teacher.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

August 21, 2016 10:23 AM

Parnas and Software Patterns

Earlier this week, I tweeted a paraphrase of David Parnas that a few people liked:

Parnas observes: "Professional Engineers rarely use 'engineer' as a verb." And it's not because they are busy 'architecting' systems.

The paraphrase comes from Parnas's On ICSE's "Most Influential" Papers, which appears in the July 1995 issue of ACM SIGSOFT's Software Engineering Notes. He wrote that paper in conjunction with his acceptance speech on receiving the Most Influential Paper award at ICSE 17. It's a good read on how the software engineering research community's influence was, at least at that time, limited to other researchers. Parnas asserts that researchers should strive to influence practitioners, the people who are actually writing software.

Why doesn't software engineering research influence practitioners? It's simple:

Computer professionals do not read our literature because it does not help them to do their job better.

In a section called "We are talking to ourselves", Parnas explains why the software engineering literature fails to connect with people who write software:

Most of our papers are written in the tradition of science, not engineering. We write up the results of our research for readers who are also researchers, not for practitioners. We base our papers on previous work done by other researchers, and often ignore current practical problems. In many other engineering fields, the results of research are reported in "How to" papers. Our papers are "This is what I did" papers. We describe our work, how our work differs from other people's work, what we will do next, etc. This is quite appropriate for papers directed to other researchers, but it is not what is needed in papers that are intended to influence practitioners. Practitioners are far more interested in being told the basic assumptions behind the work, than in knowing how this work differs from the work by the author's cohorts in the research game.

Around the time Parnas wrote this article and gave his acceptance speech at ICSE 17, the Pattern Languages of Programs conferences were taking off, with a similar motivation: to create a literature by and for software practitioners. Patterns describe how to create programs in practical terms. They describe techniques, but also the context in which they work, the forces that make them more and less applicable, and the patterns you can use to address the issues that arise after you apply the technique. The community encouraged writing in a straightforward style, using the vocabulary of professional developers.

At the early PLoP conferences, you could feel the tension between practitioners and academics, some of which grew out of the academic style of writing and the traditions of the scientific literature. I had to learn a lot about how to write for an audience of software developers. Fortunately, the people in the PLoP community took the time to help me get better. I have fond memories of receiving feedback from Frank Buschmann, Peter Sommerlad, Ralph Johnson, Ward Cunningham, Kyle Brown, Ken Auer, and many others. The feedback wasn't always easy to internalize -- it's hard to change! -- but it was necessary.

I'm not sure that an appreciably larger number of academics in software engineering and computer science more broadly write for the wider community of software practitioners these days than when Parnas made his remarks. There are certainly more venues available to us: patterns-related conferences, separate tracks at conferences like SPLASH, and blogs. Unfortunately, the academic reward structure isn't always friendly to this kind of writing, especially early in one's career. Some universities have begun to open up their definition of scholarship, though, which creates new opportunities.

At their best, software patterns are exactly what Parnas calls for: creating a how-to literature aimed at practitioners. Researchers and practitioners can both participate in this endeavor.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

August 14, 2016 10:55 AM

Hemingway on Teachers, While Teaching

Ernest Hemingway sitting on a dock next to his boat, Pilar, in the 1930s

Early in Arnold Samuelson's With Hemingway: A Year in Key West and Cuba, Papa is giving an impromptu lecture about writing to two aspiring young writers. (He does that a lot in the book, whenever the men are out sailing and fishing.) This particular lecture was prompted by what he thought was bad advice in a book by a successful pulp fiction author on how to get started as a writer. An earlier session had focused on the shortcomings of going to college to learn how to become a writer.

Toward the end of his discourse, Hemingway tells the young writers to do daily writing exercises and generously offers to read their work, giving feedback on how to get better. This offer elicits a few more remarks about the idea of college writing professors:

"They ought to have me teach some of those college classes. I could teach them something. Most professors of English composition can tell the students what's wrong with their pieces but they don't know how to make them good, because, if they knew that, they'd be writers themselves and they wouldn't have to teach."

"What do you think of the life of a professor?"

"All right for a person who is vain and likes to have the adulation of his students. Well, now, do you fellows think you can remember everything Professor Hemingway has told you? Are you prepared for a written examination on the lecture?"

Teaching computer science must be different from teaching fiction writing. I have been teaching for quite a few years now and have never received any adulation. Then again, though, I've never experienced much derision either. My students seem to develop a narrower set of emotions. Some seem to like me quite a bit and look for chances to take another course with me. Other students are... indifferent. To them, I'm just the guy standing in the way of them getting to somewhere else they want to be.

Hemingway's "have to teach" dig is cliché. Perhaps the Ernest Hemingways and Scott Fitzgeralds of the world should be devoting all of their time to writing, but there have been any number of excellent authors who have supplemented their incomes and filled the down time between creative bursts by helping other writers find a path for themselves. Samuelson's book itself is a testament to how much Papa loved to share his wisdom and to help newcomers find their footing in a tough business. During all those hours at sea, Hemingway was teaching.

Still, I understand what Hemingway means when he speaks of the difference between knowing that something is bad and knowing how to make something good. One of the biggest challenges I faced in my early years as a professor was figuring out how to go beyond pointing out errors and weaknesses in my students' code to giving them concrete advice on how to design and write good programs. I'm still learning how to do that.

I'm lucky that I like to write programs myself. Writing code and learning new styles and languages is the only way to stay sharp. Perhaps if I were really good, I'd leave academia and build systems for Google or some hot start-up, as Hemingway would have it. I'm certainly under no illusion that I can simulate that kind of experience working at a university. But I do think a person can both do and teach, and that the best teachers are ones who take both seriously. In computer science, it is a constant challenge to keep up with students who are pushing ahead into a world that keeps changing.

~~~~

The photo above comes from the John F. Kennedy Presidential Library and Museum. It shows Hemingway sitting on a dock next to his boat, Pilar, sometime in the 1930s. The conversation quoted above took place on the Pilar in 1934.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

August 07, 2016 10:36 AM

Some Advice for How To Think, and Some Personal Memories

I've been reading a bunch of the essays on David Chapman's Meaningness website lately, after seeing a link to one on Twitter. (Thanks, @kaledic.) This morning I read How To Think Real Good, about one of Chapman's abandoned projects: a book of advice for how to think and solve problems. He may never write this book as he once imagined it, but I'm glad he wrote this essay about the idea.

First of all, it was a fun read, at least for me. Chapman is a former AI researcher, and some of the stories he tells remind me of things I experienced when I was in AI. We were even in school at about the same time, though in different parts of the country and different kinds of program. His work was much more important than mine, but I think at some fundamental level most people in AI share common dreams and goals. It was fun to listen as Chapman reminisced about knowledge and AI.

He also introduced me to the dandy portmanteau anvilicious. I keep learning new words! There are so many good ones, and people make up the ones that don't exist already.

My enjoyment was heightened by the fact that the essay stimulated the parts of my brain that like to think about thinking. Chapman includes a few of the heuristics that he intended to include in his book, along with anecdotes that illustrate or motivate them. Here are three:

All problem formulations are "false", because they abstract away details of reality.

Solve a simplified version of the problem first. If you can't do even that, you're in trouble.

Probability theory is sometimes an excellent way of dealing with uncertainty, but it's not the only way, and sometimes it's a terrible way.

He elaborates on the last of these, pointing out that probability theory tends to collapse many different kinds of uncertainty into a single value. This does not work all that well in practice, because different kinds of uncertainty often need to be handled in very different ways.

Chapman has a lot to say about probability. This essay was prompted by what he sees as an over-reliance of the rationalist community on a pop version of Bayesianism as its foundation for reasoning. But as an old AI researcher, he knows that an idea can sound good and fail in practice for all sorts of reasons. He has also seen how a computer program can make clear exactly what does and doesn't work.

Artificial intelligence has always played a useful role as a reality check on ideas about mind, knowledge, reasoning, and thought. More generally, anyone who writes computer programs knows this, too. You can make ambiguous claims with English sentences, but to write a program you really have to have a precise idea. When you don't have a precise idea, your program itself is a precise formulation of something. Figuring out what that is can be a way of figuring out what you were really thinking about in the first place.

This is one of the most important lessons college students learn from their intro CS courses. It's an experience that can benefit all students, not just CS majors.

Chapman also includes a few heuristics for approaching the problem of thinking, basically ways to put yourself in a position to become a better thinker. Two of my favorites are:

Try to figure out how people smarter than you think.

Find a teacher who is willing to go meta and explain how a field works, instead of lecturing you on its subject matter.

This really is good advice. Subject matter is much easier to come by than deep understanding of how a discipline works, especially in these days of the web.

The word meta appears frequently throughout this essay. (I love that the essay is posted on the metablog/ portion of his site!) Chapman's project is thinking about thinking, a step up the ladder of abstraction from "simply" thinking. An AI program must reason; an AI researcher must reason about how to reason.

This is the great siren of artificial intelligence, the source of its power and also its weaknesses: Anything you can do, I can do meta.

I think this gets at why I enjoyed this essay so much. AI is ultimately the discipline of applied epistemology, and most of us who are lured into AI's arms share an interest in what it means to speak of knowledge. If we really understand knowledge, then we ought to be able to write a computer program that implements that understanding. And if we do, how can we say that our computer program isn't doing essentially the same thing that makes us humans intelligent?

As much as I love computer science and programming, my favorite course in graduate school was an epistemology course I took with Prof. Rich Hall. It drove straight to the core curiosity that impelled me to study AI in the first place. In the first week of the course, Prof. Hall laid out the notion of justified true belief, and from there I was hooked.

A lot of AI starts with a naive feeling of this sort, whether explicitly stated or not. Doing AI research brings that feeling into contact with reality. Then things get serious. It's all enormously stimulating.

Ultimately Chapman left the field, disillusioned by what he saw as a fundamental limitation that AI's bag of tricks could never resolve. Even so, the questions that led him to AI still motivate him and his current work, which is good for all of us, I think.

This essay brought back a lot of pleasant memories for me. Even though I, too, am no longer in AI, the questions that led me to the field still motivate me and my interests in program design, programming languages, software development, and CS education. It is hard to escape the questions of what it means to think and how we can do it better. These remain central problems of what it means to be human.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Teaching and Learning

July 28, 2016 2:37 PM

Functional Programming, Inlined Code, and a Programming Challenge

an example of the cover art for the Commander Keen series of games

I recently came across an old entry on Jonathan Blow's blog called John Carmack on Inlined Code. The bulk of the entry consists of an even older email message that Carmack, lead programmer on video games such as Doom and Quake, sent to a mailing list, encouraging developers to consider inlining function calls as a matter of style. This email message is the earliest explanation I've seen of Carmack's drift toward functional programming, seeking as many of its benefits as possible even in the harshly real-time environment of game programming.

The article is a great read, with much advice borne in the trenches of writing and testing large programs whose run-time performance is key to their success. Some of the ideas involve programming language:

It would be kind of nice if C had a "functional" keyword to enforce no global references.

... while others are more about design style:

The function that is least likely to cause a problem is one that doesn't exist, which is the benefit of inlining it.

... and still others remind us to rely on good tools to help avoid inevitable human error:

I now strongly encourage explicit loops for everything, and hope the compiler unrolls it properly.

(This one may come in handy as I prepare to teach my compiler course again this fall.)

This message-within-a-blog-entry itself quotes another email message, by Henry Spencer, which contains the seeds of a programming challenge. Spencer described a piece of flight software written in a particularly limiting style:

It disallowed both subroutine calls and backward branches, except for the one at the bottom of the main loop. Control flow went forward only. Sometimes one piece of code had to leave a note for a later piece telling it what to do, but this worked out well for testing: all data was allocated statically, and monitoring those variables gave a clear picture of most everything the software was doing.

Wow: one big loop, within which all control flows forward. To me, this sounds like a devilish challenge to take on when writing even a moderately complex program like a scanner or parser, which generally contain many loops within loops. In this regard, it reminds me of the Polymorphism Challenge's prohibition of if-statements and other forms of selection in code. The goal of that challenge was to help programmers really grok how the use of substitutable objects can lead to an entirely different style of program than we tend to create with traditional procedural programming.

Even though Carmack knew that "a great deal of stuff that goes on in the aerospace industry should not be emulated by anyone, and is often self destructive", he thought that this idea might have practical value, so he tried it out. The experience helped him evolve his programming style in a promising direction. This is a great example of the power of the pedagogical pattern known as Three Bears: take an idea to its extreme in order to learn the boundaries of its utility. Sometimes, you will find that those boundaries lie beyond what you originally thought.

Carmack's whole article is worth a read. Thanks to Jonathan Blow for preserving it for us.

~~~~

The image above is an example of the cover art for the "Commander Keen" series of video games, courtesy of Wikipedia. John Carmack was also the lead programmer for this series. What a remarkable oeuvre he has produced.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

July 27, 2016 10:58 AM

Teaching Programming Versus Teaching Art

Like many people, I am fond of analogies between software development and arts like writing and painting. It's easy to be seduced by similarities even when the daily experiences of programmers and artists are often so different.

For that reason, I was glad that this statement by sculptor Jacques Lipschutz stood out in such great relief from the text around it:

Teaching is death. If he teaches, the sculptor has to open up and reveal things that should be closed and sacred.

For me, teaching computer science has been just the opposite. Teaching forces me to open up my thinking processes. It encourages me to talk with professional developers about how they do what they do and what they think about along the way. Through these discussions, we do reveal things that sometimes feel almost sacred, but I think we all benefit from the examination. It not only helps me to teach novice developers more effectively; it also helps me to be a better programmer myself.

~~~~

(The Lipschutz passage comes from Conversations with Artists, which I quoted for the first time last week.)


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

July 22, 2016 11:02 AM

What Happens When We Read To Kids

A great analogy from Frank Cottrell:

Think of it, he says, the sun pours down its energy onto the surface of the planet for millennia. The leaves soak up the energy. The trees fall and turn to coal. Coal is solid sunlight, the stored memory of millions of uninhabited summers. Then one day, in Coalbrookdale, someone opens a hole in the ground and all that stored energy comes pouring out and is consumed in furnaces, engines, motors.

When we -- teachers, parents, carers, friends -- read to our children, I believe that's what we're doing. Laying down strata of fuel, fuel studded with fossils and treasures. If we ask for anything back, we burn it off too soon.

My wife and I surely did a lot of things wrong as we raised our daughters, but I think we did at least two things right: we read to them all the time, and we talked to them like we talk to everyone else. Their ability to speak and reason and imagine grew out of those simple, respectful acts.

Teaching at a university creates an upside-down dynamic by comparison, especially in a discipline many think of as being about jobs. It is the students and parents who are more likely to focus on the utility of knowledge. Students sometimes ask, "When will we use this in industry?" With the cost of tuition and the uncertainty of the times, I understand their concern. Even so, there are times I would like to say "I don't know" or, in my weaker moments, the seemingly disrespectful "I don't care". Something more important should be happening here. We are creating fuel for a lifetime.

(The Cottrell article takes an unusual route to an interesting idea. It was worth a read.)


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

July 19, 2016 10:32 AM

What Is The Function Of School Today?

Painter Adolph Gottlieb was dismissive of art school in the 1950s:

I'd have done what I'm doing now twenty years ago if I hadn't had to go through that crap. What is the function of the art school today? To confuse the student. To make a living for the teacher. The painter can learn from museums -- probably it is the only way he can learn. All artists have to solve their problems in the context of their own civilization, painting what their time permits them to paint, extending the boundaries a little further.

It isn't much of a stretch to apply this to computer programming in today's world. We can learn so much these days from programs freely available on GitHub and elsewhere on the web. A good teacher can help, but in general is there a better way to learn how to make things than to study existing works and to make our own?

Most importantly, today's programmers-to-be have to solve their problems in the context of their own civilization: today's computing, whether that's mobile or web or Minecraft. University courses have a hard time keeping up with constant change in the programming milieu. Instead, they often rely on general principles that apply across most environments but which seem lifeless in their abstraction.

I hope that, as a teacher, I add some value for the living I receive. Students with interests and questions and goals help keep me and my courses alive. At least I can set a lower bound of not confusing my students any more than necessary.

~~~~

(The passage above is quoted from Conversations with Artists, Selden Rodman's delightful excursion through the art world of the 1950s, chatting with many of the great artists of the era. It's not an academic treatise, but rather more an educated friend chatting with creative friends. I would thank the person who recommended this, but I have forgotten whose tweet or article did.)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

July 13, 2016 11:19 AM

A Student Asks About Pursuing Research Projects

Faculty in my department are seeking students to work on upcoming research projects. I've sent a couple of messages to our student mailing list this week with project details. One of my advisees, a bright guy with a good mind and several interests, sent me a question about applying. His question got to the heart of a concern many students have, so I responded to the entire student list. I thought I'd share the exchange as an open letter to all students out there who are hesitant about pursuing an opportunity.

The student wrote something close to this:

Both professors' projects seem like great opportunities, but I don't feel even remotely qualified for either of them. I imagine many students feel like this. The projects both seem like they'd entail a really advanced set of skills -- especially needing mathematics -- but they also require students with at least two semesters left of school. Should I bother contacting them? I don't want to jump the gun and rule myself out.

Many students "self-select out" -- choose not to pursue an opportunity -- because they don't feel qualified. That's too bad. You would be surprised how often the profs would be able to find a way to include a student who are interested in their work. Sometimes, they work around a skill the student doesn't have by finding a piece of the project he or she can contribute to. More often, though, they help the student begin to learn the skill they need. We learn many things best by doing them.

Time constraints can be a real issue. One semester is not enough time to contribute much to some projects. A grant may run for a year and thus work best with a student who will be around for two or more semesters. Even so, the prof may be able to find a way to include you. They like what they do and like to work with other people who do, too.

My advice is to take a chance. Contact the professor. Stop in to talk with him or her about your interest, your skills, and your constraints. The worst case scenario is that you get to know the professor a little better while finding out that this project is not a good fit for you. Another possible outcome, though, is that you find a connection that leads to something fruitful. You may be surprised!

~~~~

Postcript. One student has stopped in already this morning to thank me for the encouragement and to say that he is going to contact one of the profs. Don't let a little uncertainty stand in the way of pursuing something you like.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

June 23, 2016 2:29 PM

The Most Important Thing About the "10,000-Hour Rule"

This:

But we see the core message as something else altogether: In pretty much any area of human endeavor, people have a tremendous capacity to improve their performance, as long as they train in the right way. If you practice something for a few hundred hours, you will almost certainly see great improvement ... but you have only scratched the surface. You can keep going and going and going, getting better and better and better. How much you improve is up to you.

... courtesy of Anders Ericsson himself, in a Salon piece adapted from his new book, Peak (written with Robert Pool). Ericsson is the author of the oft-cited paper at the source of the rule, which was made famous by Malcolm Gladwell.

I've seen this dynamic play out over the years for many students. They claimed not to be any good at math or programming, but then they decided to do the work. And they kept getting better. Some ended up with graduate degrees, and most ended up with surprising success in industry.

Looked at from one perspective, the so-called 10,000 Hour Rule is daunting. "Look how far I am from being really good at this..." Many people shrink in the face of this mountain, willing to settle for limits placed on them by their supposed talents. But, as my friend Richard Gabriel often says, talent doesn't determine how good you get, only how fast you get good. As I quoted from Art and Fear long ago, "talent is rarely distinguishable, over the long run, from perseverance and lots of hard work".

That's the most important lesson behind Ericsson's research. If we practice right, we will get better and, with even more of the right kind of training, we will keep getting better. Our limits usually lie much farther away than we think.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 20, 2016 3:45 PM

Louis C.K. on Teaching

Louis C.K. on stage

When I read this interview with Louis C.K. last week, the following response spoke to me as a college professor, not as a stand-up comedian:

Can you explain the difference, practically, between the stand-up you were doing at your peak and what you're doing now?

I think I'm a better comedian overall than I was back then, but back then I was better at performing. When you're that greased up onstage, you just have a higher comedy IQ. It's the ability to go on any stage in the country and be perfectly present and able to maneuver the set and have great timing. Some of it is being in physical shape. When you're under pressure or strain, you get dumb, you know? It's why I started working out in boxing gyms, because you watch a guy who's fighting, he's in a terribly arduous moment and he's making intelligent choices. So to me that's when you're 55 minutes deep into your sixth show of the week, in your fifth city of the week. You have to be able to be great right in that moment. You have to be, "You're not going to believe what I'm going to do next." The audience is tired, and you have to have more energy than anyone in the room. You have to be able to control the pace. At my show last night, I was talking to myself a little bit while my mouth was moving delivering material. I was thinking, You're going too fast. Cool it. You have plenty of time and loads ... to say.

It's funny how so many of us, doing so many different things, experience so many of the same ups and downs in our professions. With a few changes to the surface of this story, it sounds like something a college instructor might say ten years on. I've even reached a point where I can talk to myself in an analytical way during a class session or a presentation. I was never as good in 2004 as Louis was, but I feel the same evolution in how I feel about my work in the classroom.

One thing I haven't tried is boxing. (Perhaps that is one of my more intelligent choices.) I have had to make some tough decisions under grueling conditions while running marathons, but those tend to unfold at a slower pace than in the ring. "Everybody has a plan until they get punched in the face."

Like Louis, I'm always trying to get better at teaching in first gear. He is probably more natural than I am in an amped-up state, too. Both are a challenge for me.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 17, 2016 2:55 PM

Education as a Way to Create Better Adults

In this Dr. Dobbs interview, Alan Kay reveals how he became interested in education as a life's work: as a teenager, he read the issue of Life magazine that introduced Margaret Bourke-White's photos from Buchenwald. Kay says:

That probably was the turning point that changed my entire attitude toward life. It was responsible for getting me interested in education. My interest in education is unglamorous. I don't have an enormous desire to help children, but I have an enormous desire to create better adults.

This desire has caused Kay to explore how children think and learn more deeply than most people do. Our greatest desires sometimes lead us down paths we would not otherwise go.

For some reason, Kay's comments on his enduring involvement in education made me think of this passage from a profile of Ludwig Wittgenstein in the Paris Review:

We all struggle to form a self. Great teaching, Wittgenstein reminds us, involves taking this struggle and engaging in it with others; his whole life was one great such struggle. In working with poor children, he wanted to transform himself, and them.

Wittgenstein wanted to create a better adult of himself and so engaged for six years in "the struggle to form a self" with elementary school students. Let's hope that the students in his charge grew into better adults as well. As Kay says later in the same interview, "Education is a double-edged sword. You have to start where people are, but if you stay there, you're not educating."


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 10, 2016 3:09 PM

Critical Thinking Skills Don't Transfer; They Overlap

A few years ago, I was giving a short presentation about one of our new programs to people from the community and campus. One of the prompts for the talk was how this program would contribute to teaching "critical thinking skills" across the curriculum. I made a mild joke about the idea, wondering aloud which programs on campus taught uncritical thinking skills. Only later did I learn that our new provost, who was in the audience, had just announced a major focus on critical thinking. Fortunately, our new provost had a sense of humor.

I don't believe that we can teach critical thinking in any useful way outside the context of a particular discipline. I do believe, though, that we can teach it as a part of any discipline -- not just in the humanities or liberal arts, which in too many people's minds don't include the sciences. Studies show that these skills don't tend to transfer when we move to a different discipline, but I am convinced that people who learn to think deeply in one discipline are better prepared to learn another discipline than someone who is learning deeply for the first time.

In a recent essay for Inside Higher Ed, John Schlueter offers a new analogy for thinking about critical thinking:

When it comes to thinking skills, it would be much more productive if we stop thinking "transfer" and start thinking "overlap". That is, once thinking skills become more explicitly taught, especially in general education classes, both professors and students will notice how thinking in the context of one domain (say, economics) overlaps with the kind of thinking processes at work in another (biology).

The idea of overlap fits nicely with how I think about these skills. Making thinking skills more explicit in our instruction might enable students to notice intersections and differences across the disciplines they study. That awareness may help them to internalize general strategies that are useful across disciplines, for times when they are in unknown waters, and be aware of possible points of failure in their own thinking.

I'm not sure if this analogy is any easier to operationalize or test than the notion of transfer, but it does give me a different way to think about thinking.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 09, 2016 4:05 PM

If You Can't Teach It, You May Not Understand It. Or Else...

In an interesting article about words and concepts, Elisa Gabbert repeats a familiar sentiment about teaching:

... the physicist Richard Feynman reportedly said, after being asked to prepare a freshman lecture on why spin-1/2 particles obey Fermi-Dirac statistics, "I couldn't reduce it to the freshman level. That means we really don't understand it."

When I read this, my inner Sheldon Cooper thought, "With the data at hand, you really can't draw that conclusion. All you can say with absolute certainty is that you don't understand it."

Actually, I empathize deeply with Feynman's sentiment, which has been attributed to many famous people and stated in one form or other by many people who have tried to teach a challenging topic to others. Most teachers have had the experience of trying to explain an idea they think they know cold, only to find themselves stumbling over concepts or relationships that seem so obvious in their expert mind. I experience this feeling almost every semester. When I was a young teacher, such experiences disconcerted me. I soon learned that they were opportunities to understand the world better.

But I think that, at a logical level, people sometimes draw an invalid conclusion from statements of the sort Feynman reportedly made. It's certainly true that if we don't really understand a complex subject, then we probably won't be able to reduce it to a level appropriate for first-year students. But even if we do understand it really well, we may still have difficulty explaining it to beginners.

Teaching involves two parties: the teacher and the learner. Effective teaching requires being able to communicate new ideas in a way that connects with what the learner knows and can do.

To be effective teachers, we need two kinds of knowledge:

  • an understanding of the content to be communicated
  • an understanding of how to reach our intended audience

This latter understanding comes at two levels. First, we might know a specific individual well and be able to connect to his or her own personal experiences and knowledge. Second, we might understand a group, such as freshman CS students, based on some common background and maturity level.

Teaching individuals one-on-one can be most effective, but it takes a lot of time and doesn't scale well. As a result, we often find ourselves teaching a group of people all at once or writing for a mass audience. Teaching a class means being able to communicate new ideas to a group of students in a way that prepares most or all of them to learn on their own after they leave the classroom and begin to do their individual work.

Most people who try to teach find out that this is a lot harder than it looks. Over time, we begin to learn what a generic freshman CS student knows and is like. We build up a cache of stories for reaching them in different ways. We encounter pedagogical patterns of effective learning and learn ways to implement them in our teaching. We also begin to learn techniques for working with students individually so that, in our office after class, we can drop down from generic teaching to more intimate, one-on-one instruction.

If you want to find out simultaneously how well your students are understanding what you are teaching and how well you understand what you are teaching, let them ask questions. I am often amazed at the questions students ask, and equally amazed at how hard they can be to answer well. On truly glorious days, I surprise myself (and them!) with an answer or story or example that meets their needs perfectly.

However well I understand a topic, it always takes me time to figure out how to communicate effectively with a new audience. Once I understood that this was natural, it allowed me to take some of the pressure to be perfect off myself and get down to the business of learning how to teach and, often, learning computer science at a deeper level.

So, if we can't reduce some topic to the freshman level, it may well mean that we don't really understand it. But it may also mean that we don't yet understand our audience well enough. Figuring out which is true in a given case is yet another challenge that every teacher faces.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 18, 2016 11:27 AM

Confusion is the Sweat of Learning

That's a great line from Rhett Allain in Telling You the Answer Isn't the Answer:

Students are under the impression that when they are stuck and confused, they are doing something wrong. Think of it this way. What if you went to the gym to work out but you didn't get sweaty and you weren't sore or tired? You would probably feel like you really didn't get any exercise. The same is true for learning. Confusion is the sweat of learning.

As I read the article, I was a little concerned that Allain's story comes from a physics course designed for elementary education majors... Shouldn't education majors already know that learning is hard and that the teacher's job isn't simply to give answers? But then I realized how glad I am that these students have a chance to learn this lesson from a teacher who is patient enough to work through both the science and the pedagogy with them.

One of the biggest challenges for a teacher is designing workouts that ride along a thin line: confusing students just enough to stretch them into learning something valuable, without working them so hard that they become disheartened by the confusion and failure. This is hard enough to do when working with students individually. The larger and more diverse a class is, the more the teacher has to start shooting for the median, designing materials that work well enough often enough for most of the students and then doing triage with students on both ends of the curve.

Another is helping students learn to appreciate the confusion, perhaps even relish it. The payoff is worth the work.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 06, 2016 2:25 PM

Important Sentences about Learning Styles

From Learning styles: what does the research say?:

... it seems to me that the important "takeaway" from the research on learning styles is that teachers need to know about learning styles if only to avoid the trap of teaching in the style they believe works best for them. As long as teachers are varying their teaching style, then it is likely that all students will get some experience of being in their comfort zone and some experience of being pushed beyond it. Ultimately, we have to remember that teaching is interesting because our students are so different, but only possible because they are so similar. Of course each of our students is a unique individual, but it is extraordinary how effective well-planned group instruction can be.

Variety is an essential element of great teaching, and one I struggle to design into my courses. Students need both mental comfort and mental challenge. Changing pace every so often allows students to focus in earnest for a while and then consolidate knowledge in slower moments. Finally, changing format occasionally is one way to keep things interesting, and when students are interested, they are more willing to work. And that's where most learning comes from: practice and reflection.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 03, 2016 4:19 PM

Pair Programming is a Skill to be Learned

David Andersen offers a rather thorough list of ways that academics might adapt Google's coding practices to their research. It's a good read; check it out! I did want to comment on one small comment, because it relates to a common belief about pair programming:

But, of course, effective pair programming requires a lot of soft skills and compatible pairs. I'm not going to pretend that this solution works everywhere.

I don't pretend that pair programming works everywhere, either, or that everyone should adopt it, but I often wonder about statements like this one. Andersen seems to think that pair programming is a good thing and that it has helped him and members of his team produce high-quality code in the past. Why downplay the practice in a way he doesn't downplay other practices he recommends?

Throughout, the article encourages the use of new tools and techniques. These tools will alter the practice of his students. Some are complex enough that they will need to be taught, and practiced over some length of time, before they become an effective part of the team's workflow. To me, pair programming is another tool to be learned and practiced. It's certainly no harder than learning git...

Pair programming is a big cultural change for many programmers, and so it does require some coaching and extended practice. This isn't much different than the sort of "onboarding" that Andersen acknowledges will be necessary if he is to adopt some of Google's practices successfully in his lab upon his return. Pair programming takes practice and time, too, like most new skills.

I have seen the benefit of pair programming in an academic setting myself. Back when I used to teach our introductory course to freshmen, I had students pair every week in the closed lab sessions. We had thirty students in each section, but only fifteen computers in our lab. I paired students in a rotating fashion, so that over the course of fifteen weeks each student programmed with fifteen different classmates. We didn't use a full-on "pure" XP-style of pairing, but what we did was consistent with the way XP encourages pair programming.

This was a good first step for students. They got to know each other well and learned from one another. The better students often helped their partners in the way senior developers can help junior developers progress. In almost all cases, students helped each other find and fix errors. Even though later courses in the curriculum did not follow up with more pair programming, I saw benefits in later courses, in the way students interacted in the lab and in class.

I taught intro again a couple of falls ago after a long time away. Our lab has twenty-eight machines now, so I was not forced to use pair programming in my labs. I got lazy and let them work solo, with cross-talk. In the end, I regretted it. The students relied on me a lot more to help them find errors in their code, and they tended to work in the same insulated cliques throughout the semester. I don't think the group progressed as much as programmers, either, even though some individuals turned out fine.

A first-year intro lab is a very different setting than a research lab full of doctoral students. However, if freshmen can learn to pair program, I think grad students can, too.

Pair programming is more of a social change than a technical change, and that creates different issues than, say, automated testing and style checking. But it's not so different from the kind of change that capricious adherence to style guidelines or other kinds of code review impose on our individuality.

Are we computer scientists so maladjusted socially that we can't learn -- or can't be bothered to learn -- the new behaviors we need to program successfully in pairs? In my experience, no.

Like Andersen, I'm not advocating that anyone require pair programming in a lab. But: If you think that the benefits of pair programming exceed the cost, then I encourage you to consider having your research students or even your undergrads use it. Don't shy away because someone else thinks it can't work. Why deprive your students of the benefits?

The bottom line is this. Pair programming is a skill to be learned, like many others we teach our students.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

April 30, 2016 11:02 AM

Thinking About Untapped Extra Credit Opportunities

I don't usually offer extra credit assignments in my courses, but I did this semester. Students submitted their last scheduled homework early in the last week of classes: an interpreter for a small arithmetic language with local variables. There wasn't enough time left to assign another homework, but I had plenty of ideas for the next version of our interpreter. Students had just learned how to implement both functions and mutable data, so they could add functions, true variables, and sequences of expressions to the language. They could extend their REPL to support top-level definitions and state. They could add primitive data objects to the language, or boolean values and operators. If they added booleans, they could add an if expression. If they were willing to learn some new Racket, they could load programs from a text file and evaluate them. I had plenty of ideas for them to try!
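To give a flavor of one of those ideas, here is a minimal sketch of adding boolean values and an if expression to a small interpreter. It is in Python rather than the course's Racket, and the tuple-based representation of expressions is hypothetical, not the one my students used:

```python
# A small evaluator for an arithmetic language, extended with booleans
# and an 'if' expression. Expressions are nested tuples such as
# ("if", <test>, <then>, <else>); this representation is illustrative only.

def evaluate(expr, env):
    """Evaluate a tiny expression language represented as nested tuples."""
    if isinstance(expr, (bool, int)):          # literals evaluate to themselves
        return expr
    if isinstance(expr, str):                  # variable reference
        return env[expr]
    op = expr[0]
    if op == "if":                             # the new special form
        _, test, then_part, else_part = expr
        branch = then_part if evaluate(test, env) else else_part
        return evaluate(branch, env)           # evaluate only the chosen branch
    if op == "+":
        return evaluate(expr[1], env) + evaluate(expr[2], env)
    if op == "<":                              # a boolean-producing primitive
        return evaluate(expr[1], env) < evaluate(expr[2], env)
    raise ValueError(f"unknown operator: {op}")
```

With x bound to 5, the expression `("if", ("<", "x", 10), ("+", "x", 1), 0)` evaluates to 6. The key design point, whatever the representation, is that `if` must evaluate only the branch selected by the test.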

So I asked, "If I write up an optional assignment, how many of you would take a stab at some extra credit?" Approximately twenty of the twenty-five students present raised their hands.

I just downloaded the submission folder. The actual number of attempts was the complement of the number of hands raised: five. And, with only one exception, the students who tried the extra credit problems are the students who need it least. They are likely to get an A or a high B in the course anyway. The students who most need extra credit (and the extra practice) didn't attempt the extra credit.

SMH, as the kids say these days. But the more I thought about it, the more this made sense.

  • College students are the most optimistic people I have ever met. Many students probably raised their hand with every intention of submitting extra credit solutions. Then the reality of the last week of classes hit, and they ran out of time.

  • The students who submitted new code are likely the ones doing well enough in their other courses to be able to spare extra time for this course. They are also probably used to doing well in all of their courses, and a little uncertainty about their grade in this course may have spurred them into action. So, they may have had both more time and more internal motivation to do extra work.

  • To be fair, I don't know that the other students didn't attempt to solve some of the problems. Maybe some tried but did not submit their work. They may not have been satisfied with the quality of their code and didn't have time to seek help from me before the deadline.

  • And, if we are being honest, there is at least one more possibility. A few of the students who find themselves needing extra credit at the end of the semester got themselves into that position by not being very disciplined in their work habits. Those students may be as optimistic as any other students, but they aren't likely to conjure up better work habits on short notice.

These reflections have me thinking... If I want to help the students who most need the help, I need to find ways to reach them sooner. I did try one thing earlier this semester that worked well for many students: the opportunity to rewrite one of their exams at home with access to all the course materials. A couple of students wrote surprisingly candid and insightful assessments of why they had performed below par under test conditions and supplied better work on their second attempt. I hope that experience helps them as they prepare for the final exam.

I've been teaching a long time. I still have much to learn.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

April 29, 2016 3:30 PM

A Personal Pantheon of Programming Books

Michael Fogus, in the latest issue of Read-Eval-Print-λove, writes:

The book in question was Thinking Forth by Leo Brodie (Brodie 1987) and upon reading it I immediately put it into my own "personal pantheon" of influential programming books (along with SICP, AMOP, Object-Oriented Software Construction, Smalltalk Best Practice Patterns, and Programmers Guide to the 1802).

Mr. Fogus has good taste. Programmers Guide to the 1802 is new to me. I guess I need to read it.

The other five books, though, are in my own pantheon of influential programming books. Some readers may be unfamiliar with these books or their acronyms, or unaware that so many of them are available free online. Here are a few links and details:

  • Thinking Forth teaches us how to program in Forth, a concatenative language in which programs run against a global stack. As Fogus writes, though, Brodie teaches us so much more. He teaches a way to think about programs.

  • SICP is Structure and Interpretation of Computer Programs, hailed by many as the greatest book on computer programming ever written. I am sympathetic to this claim.

  • AMOP is The Art of the Metaobject Protocol, a gem of a book that far too few programmers know about. It presents a very different and more general kind of OOP than most people learn, the kind possible in a language like Common Lisp. I don't know of an authorized online version of this book, but there is an HTML copy available.

  • Object-Oriented Software Construction is Bertrand Meyer's opus on OOP. It did not affect me as deeply as the other books on this list, but it presents the most complete, internally consistent software engineering philosophy of OOP that I know of. Again, there seems to be an unauthorized version online.

  • I love Smalltalk Best Practice Patterns and have mentioned it a couple of times over the years [ 1 | 2 ]. Ounce for ounce, it contains more practical wisdom for programming in the trenches than any book I've read. Don't let "Smalltalk" in the title fool you; this book will help you become a better programmer in almost any language and any style. I have a PDF of a pre-production draft of SBPP, and Stephane Ducasse has posted a free online copy, with Kent's blessing.

Paradigms of Artificial Intelligence Programming

There is one book on my own list that Fogus did not mention: Paradigms of Artificial Intelligence Programming, by Peter Norvig. It holds perhaps the top position in my personal pantheon. Subtitled "Case Studies in Common Lisp", this book teaches Common Lisp, AI programming, software engineering, and a host of other topics in a classical case studies fashion. When you finish working through this book, you are not only a better programmer; you also have working versions of a dozen classic AI programs and a couple of language interpreters.

Reading Fogus's paragraph of λove for Thinking Forth brought to mind how I felt when I discovered PAIP as a young assistant professor. I once wrote a short blog entry praising it. May these paragraphs stand as a greater testimony of my affection.

I've learned a lot from other books over the years, both books that would fit well on this list (in particular, A Programming Language by Kenneth Iverson) and others that belong on a different list (say, Gödel, Escher, Bach -- an almost incomparable book). But I treasure certain programming books in a very personal way.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Software Development, Teaching and Learning

April 05, 2016 4:06 PM

Umberto Eco and the Ineffable Power of Books

In What Unread Books Can Teach Us Oliver Burkeman relates this story about novelist and scholar Umberto Eco:

While researching his own PhD, Eco recalls, he got deeply stuck, and one day happened to buy a book by an obscure 19th-century abbot, mainly because he liked the binding. Idly paging through it, he found, in a throwaway line, a stunning idea that led him to a breakthrough. Who'd have predicted it? Except that, years later, when a friend asked to see the passage in question, he climbed a ladder to a high bookshelf, located the book... and the line wasn't there. Stimulated by the abbot's words, it seems, he'd come up with it himself. You never know where good ideas will come from, even when they come from you.

A person can learn something from a book he or she has read, even if the book doesn't contain what the person learned. This is a much steadier path to knowledge than resting in the comfort that all information is available at the touch of a search engine.

A person's anti-library helps to make manifest what one does not yet know. As Eco reminds us, humility is an essential ingredient in this prescription.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

March 31, 2016 2:00 PM

TFW Your Students Get Abstraction

A colleague sent me the following exchange from his class, with the tag line "Best comments of the day." His students were working in groups to design a Java program for Conway's Game of Life.

Student 1: I can't comprehend what you are saying.

Student 2: The board doesn't have to be rectangular, does it?

Instructor: In Conway's design, it was. But abstractly, no.

Student 3: So you could have a board of different shapes, or you could even have a three-dimensional "board". Each cell knows its neighbors even if we can't easily display it to the user.

Instructor: Sure, "neighbor" is an abstract concept that you can implement differently depending on your need.

Student 2: I knew there was a reason I took linear algebra.

Student 1: Ok. So let's only allow rectangular boards.

Maybe Student 1 still can't comprehend what everyone is saying... or perhaps he or she understands perfectly well and is a pragmatist. YAGNI for the win!

It always makes me happy when a student encounters a situation in which linear algebra is useful and recognizes its applicability unprompted.
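The students' insight can be sketched in code: if "neighbor" is an abstract operation on a board, the Life rules never need to know the board's shape. This illustrative Python sketch (the class names and representation are mine, not from the exercise) swaps in a wraparound geometry without touching the rules:

```python
# Two board geometries that answer the same abstract question:
# "who are this cell's neighbors?"

class RectangularBoard:
    def __init__(self, width, height):
        self.width, self.height = width, height

    def neighbors(self, cell):
        # Neighbors that fall off the edge simply don't exist.
        x, y = cell
        return [(x + dx, y + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)
                and 0 <= x + dx < self.width
                and 0 <= y + dy < self.height]

class ToroidalBoard(RectangularBoard):
    def neighbors(self, cell):
        # Same rules, different geometry: the edges wrap around.
        x, y = cell
        return [((x + dx) % self.width, (y + dy) % self.height)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)]

def live_neighbor_count(board, cell, live_cells):
    # The Life rules only ever ask the board for neighbors,
    # so they work unchanged on any board that can answer.
    return sum(1 for n in board.neighbors(cell) if n in live_cells)
```

On a 3x3 rectangular board, the corner cell (0, 0) has three neighbors; on a toroidal board of the same size, it has eight. The rule code is identical in both cases.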

I salute all three of these students, and the instructor who is teaching the class. A good day.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 30, 2016 3:21 PM

Quick Hits at the University

This morning I read three pieces with some connection to universities and learning. Each had one passage that made me smart off silently as I pedaled.

From The Humanities: What's The Big Idea?:

Boyarin describes his own research as not merely interdisciplinary but "deeply post-disciplinary." (He jokes that when he first came to Berkeley, his dream was to be 5 percent in 20 departments.)

Good luck getting tenure that way, dude.

"Deeply post-disciplinary" is a great bit of new academic jargon. Universities are very much organized by discipline. Figuring out how to support scholars who work outside the lines is a perpetual challenge, one that we really should address at scale if we want to enable universities to evolve.

From this article on Bernie Sanders's free college plan:

Big-picture principles are important, but implementation is important, too.

Hey, maybe he just needs a programmer.

Implementing big abstractions is hard enough when the substance is technical. When you throw in social systems and politics, implementing any idea that deviates very far from standard practice becomes almost impossible. Big Ball of Mud, indeed.

From Yours, Isaac Asimov: A Life in Letters:

Being taught is the intellectual analog of being loved.

I'll remind my students of this tomorrow when I give them Exam 3, on syntactic abstraction. "I just called to say 'I love you'."

Asimov is right. When I think back on all my years in school, I feel great affection for so many of my teachers, and I recall feeling their affection for me. Knowledge is not only power, says Asimov; it is happiness. When people help me learn, they offer me new ways to be happy.

(The Foundation Trilogy makes me happy, too.)


Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Teaching and Learning

March 18, 2016 9:58 AM

Thoughts for Programmers from "Stay on the Bus"

Somehow, I recently came across a link to Stay on the Bus, an excerpt from a commencement speech Arno Rafael Minkkinen gave at the New England School of Photography in June 2004. It is also titled "The Helsinki Bus Station Theory: Finding Your Own Vision in Photography". I almost always enjoy reading the thoughts of an artist on his or her own art, and this was no exception. I also usually hear echoes of what I feel about my own arts and avocations. Here are three.

What happens inside your mind can happen inside a camera.

This is one of the great things about any art. What happens inside your mind can happen in a piano, on a canvas, or in a poem. When people find the art that channels their mind best, beautiful things -- and lives -- can happen.

One of the things I like about programming is that it is really a meta-art. Whatever happens in your mind can happen inside a camera, inside a piano, on a canvas, or in a poem. Yet whatever happens inside a camera, inside a piano, or on a canvas can happen inside a computer, in the form of a program. Computing is a new medium for experimenting, simulating, and creating.

Teachers who say, "Oh, it's just student work," should maybe think twice about teaching.

Amen. There is no such thing as student work. It's the work our students are ready to make at a particular moment in time. My students are thinking interesting thoughts and doing their best to make something that reflects those thoughts. Each program, each course in their study is a step along a path.

All work is student work. It's just that some of us students are at a different point along our paths.

Georges Braque has said that out of limited means, new forms emerge. I say, we find out what we will do by knowing what we will not do.

This made me think of an entry I wrote many years ago, Patterns as a Source of Freedom. Artists understand better than programmers sometimes that subordinating to a form does not limit creativity; it unleashes it. I love Minkkinen's way of saying this: we find out what we will do by knowing what we will not do. In programming as in art, it is important to ask oneself, "What will I not do?" This is how we discover what we will do, what we can do, and even what we must do.

Those are a few of the ideas that stood out to me as I read Minkkinen's address. The Helsinki Bus Station Theory is a useful story, too.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

March 11, 2016 3:59 PM

The Irony of "I Didn't Have Time to Use Your Technique..."

All last spring, I planned to blog about my Programming Languages class but never got around to writing more than a couple of passing thoughts. I figured this spring would be different, yet here we are at the end of Week 9 and I've not even mentioned the course. This is how guys like me fail in personal relationships. We think a lot of good thoughts but don't follow through with our time and attention.

I am tempted to say that this has been a seesaw semester, but on reflection things have gone pretty well. I have a good group of students, most of whom are engaged in the topic and willing to interact in class. (Lack of interest and engagement was a bit of a problem last spring.) We've had fun in and out of class.

The reason it sometimes feels like I'm on a seesaw is that the down moments stand out more in my memory than they deserve to. You'd think that, as long as I've been teaching, I would have figured out how to manage this dynamic more effectively. Maybe it's just a part of being a teacher. We want things to go perfectly for our students, and when they don't we have to wonder why not.

One source of concern this semester has been the average exam score for the first two tests. They have been lower than historic averages in the course. Have I changed my expectations? Have I done something differently in the classroom? After taking a hard look back on my previous semesters' notes and assignments, I think not. My job now is to help this class reach the level I think they can reach. What can I do differently going forward? What can they do differently, and how do I help them do it?

I know they are capable of growth. Early last month, PhD Comics ran a strip titled The Five Most Typed Words in Academia. The winner was "Sorry for the late reply". At the time, the five most common words my students had said to me in the young semester were:

I didn't read the instructions.

For example, the student would say, "This problem was hard because I didn't know how to take the remainder properly." Me: "I gave that information in the instructions for the assignment." Student: "Oh, I didn't read the instructions."

Fortunately, we seem to have moved beyond that stage of our relationship, as most students have come to see that the assignment may actually include some clues to help them out. Or maybe they've just stopped telling me that they don't read the instructions. If so, I appreciate them sparing my feelings.

A related problem is a perennial issue in this course: We learn a new technique, and some students choose not to use it. Writing code to process language expressions is a big part of the course, so we study some structural recursion patterns for processing a grammar specified in BNF. Yet a few students insist on whacking away at the problem with nothing more than a cond expression and a brave heart. When I ask them why, they sometimes say:

I didn't have time to use the technique we learned in class, so...

... so they spent twice as long trying to find their way to a working solution. Even when they find one, the code is generally unreadable even to them. Functional programming is hard, they say.

Fortunately, again, many seem to have moved beyond this stage and are now listening to their data structures. The result is beautiful code: short, clear, and expressive. Grading such programs is a pleasure.
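To show what "listening to the grammar" means in practice, here is an illustrative sketch (in Python, with a made-up grammar and tuple representation rather than the course's actual Racket code). Given a BNF rule like

  &lt;exp&gt; ::= &lt;number&gt; | (&lt;exp&gt; op &lt;exp&gt;)

the structural recursion pattern dictates the shape of every processor: one case per alternative in the rule, one recursive call per &lt;exp&gt; on the right-hand side.

```python
# Two processors for the same little grammar. Note that they have the
# same shape: the structure of the code mirrors the structure of the BNF.

def operand_count(exp):
    """Count the numeric operands in an expression."""
    if isinstance(exp, (int, float)):      # case: <exp> ::= <number>
        return 1
    left, _op, right = exp                 # case: <exp> ::= (<exp> op <exp>)
    return operand_count(left) + operand_count(right)

def evaluate(exp):
    """Evaluate an expression: same shape as above, different answers."""
    if isinstance(exp, (int, float)):
        return exp
    left, op, right = exp
    if op == "+":
        return evaluate(left) + evaluate(right)
    return evaluate(left) * evaluate(right)
```

For `((2, "+", 3), "*", 4)`, `operand_count` answers 3 and `evaluate` answers 20. A programmer armed only with a cond expression and a brave heart has to rediscover this shape by trial and error; the grammar hands it to you for free.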

It recently occurred to me, though, that I have been guilty of the "I didn't have time to use your technique..." error myself. While trying to improve the recursive programming unit of the course over the last few years, I seem to have invented my own version of the Design Recipe from How to Design Programs. In the spirit of Greenspun's Tenth Rule, I have probably reinvented an ad hoc, informally-specified, bug-ridden, pedagogically less sound version of the Design Recipe.

As we used to say in Usenet newsgroups, "Pot. Kettle. Black." Before the next offering of the course, I intend to do my homework properly and find a way to integrate HtDP's well-tested technique into my approach.

These things stand out in my mind, yet I think that the course is going pretty well. And just when I begin to wonder whether I've been letting the class down, a few students stop in my office and say that this is one of their favorite CS courses ever. They are looking forward to the compiler course this fall. I'm not sure students realize the effect such words have on their instructors -- at least on this one.

Off to spring break we go. When we get back: lexical addressing and (eventually) a look at stateful programming. Fun and surprise await us all.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

March 02, 2016 4:45 PM

Why Bother With Specialty Languages?

In Sledgehammers vs Nut Crackers, Thomas Guest talks about pulling awk off the shelf to solve a fun little string-processing problem. He then shows a solution in Python, one of his preferred general-purpose programming languages. Why bother with languages like awk when you have Python at the ready? Guest writes:

At the outset of this post I admitted I don't generally bother with awk. Sometimes, though, I encounter the language and need to read and possibly adapt an existing script. So that's one reason to bother. Another reason is that it's elegant and compact. Studying its operation and motivation may help us compose and factor our own programs -- programs far more substantial than the scripts presented here, and in which there will surely be places for mini-languages of our own.

As I watch some of my students struggle this semester to master Racket, recursive programming, and functional style, I offer them hope that learning a new language and a new style will make them better Python and Java programmers, even if they never write another Racket or Lisp program again. The more different ways we know how to think about problems and solutions, the more effective we can be as solvers of problems. Of course, Racket isn't a special purpose language, and a few students find they so like the new style that they stick with the language as their preferred tool.

Experienced programmers understand what Guest is saying, but in the trenches of learning, it can be hard to appreciate the value of knowing different languages and being able to think in different ways. My sledgehammer works fine, my students say; why am I learning to use a nutcracker? I have felt that sensation myself.

I try to keep this feeling in mind as my students work hard to master a new way of thinking. This helps me empathize with their struggle, all the while knowing that Racket will shape how some of them think about every program they write in the future.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 14, 2016 11:28 AM

Be Patient, But Expect Better. Then Make Better.

In Reversing the Tide of Declining Expectations Matthew Butterick exhorts designers to expect more from themselves, as well as from the tools they use. When people expect more, other people sometimes tell them to be patient. There is a problem with being patient:

[P]atience is just another word for "let's make it someone else's problem." ... Expectations count too. If you have patience, and no expectations, you get nothing.

But what if you find the available tools lacking and want something better?

Scientists often face this situation. My physicist friends seem always to be rigging up some new apparatus in order to run the experiments they want to run. For scientists and so many other people these days, though, if they want a new kind of tool, they have to write a computer program.

Butterick tells a story that shows designers can do the same:

Let's talk about type-design tools. If you've been at the conference [TYPO Berlin, 2012], maybe you saw Petr van Blokland and Frederick Berlaen talking about RoboFont yesterday. But that is the endpoint of a process that started about 15 years ago when Erik and Petr van Blokland, and Just van Rossum (later joined by many others) were dissatisfied with the commercial type-design tools. So they started building their own. And now, that's a whole ecosystem of software that includes code libraries, a new font-data format called UFO, and applications. And these are not hobbyist applications. These are serious pieces of software being used by professional type designers.

What makes all of this work so remarkable is that there are no professional software engineers here. There's no corporation behind it all. It's a group of type designers who saw what they needed, so they built it. They didn't rely on patience. They didn't wait for someone else to fix their problems. They relied on their expectations. The available tools weren't good enough. So they made better.

That is fifteen years of patience. But it is also patience and expectation in action.

To my mind, this is the real goal of teaching more people how to program: programmers don't have to settle. Authors and web designers create beautiful, functional works. They shouldn't have to settle for boring or cliché type on the web, in their ebooks, or anywhere else. They can make better. Butterick illustrates this approach to design himself with Pollen, his software for writing and publishing books. Pollen is a testimonial to the power of programming for authors (as well as a public tribute to the expressiveness of a programming language).

Empowering professionals to make better tools is a first step, but it isn't enough. Until programming as a skill becomes part of the culture of a discipline, better tools will not always be used to their full potential. Butterick gives an example:

... I was speaking to a recent design-school graduate. He said, "Hey, I design fonts." And I said, "Cool. What are you doing with RoboFab and UFO and Python?" And he said, "Well, I'm not really into programming." That strikes me as a really bad attitude for a recent graduate. Because if type designers won't use the tools that are out there and available, type design can't make any progress. It's as if we've built this great spaceship, but none of the astronauts want to go to Mars. "Well, Mars is cool, but I don't want to drive a spaceship. I like the helmet, though." Don't be that guy. Go the hell to Mars.

Don't be that person. Go to Mars. While you are at it, help the people you know to see how much fun programming can be and, more importantly, how it can help them make things better. They can expect more.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 24, 2016 10:33 AM

Learn Humility By Teaching

In The Books in My Life, Henry Miller writes about discussing books with an inquisitive friend:

I remember this short period vividly because it was an exercise in humility and self-control on my part. The desire to be absolutely truthful with my friend caused me to realize how very little I knew, how very little I could reveal, though he always maintained that I was a guide and mentor to him. In brief, the result of those communions was that I began to doubt all that I had blithely taken for granted. The more I endeavored to explain my point of view, the more I floundered. He may have thought I acquitted myself well, but not I. Often, on parting from him, I would continue the inner debate interminably.

I am guessing that most anyone who teaches knows the feeling Miller describes. I feel it all the time.

I'm feeling it again this semester while teaching my Programming Languages and Paradigms course. We're learning Racket as a way to learn to talk about programming languages, and also as a vehicle for learning functional programming. One of my goals this semester is to be more honest. Whenever I find a claim in my lecture notes that sounds like dogma that I'm asking students to accept on faith, I'm trying to explain in a way that connects to their experience. Whenever students ask a question about why we do something in a particular way, I'm trying to help them really to see how the new way is an improvement over what they are used to. If I can't, I admit that it's convention and resolve not to be dogmatic about it with them.

This is a challenge for me. I am prone to dogma, and after programming functionally in Scheme for so long, much of what my students experience learning Racket is deeply compiled in my brain. Why do we do that? I've forgotten, if I ever knew. I may have a vague memory that, when I don't do it that way, chaos ensues. Trust me! Unfortunately, that is not a convincing way to teach. Trying to give better answers and more constructive explanations gives rise to the sort of floundering that Miller talks about. After class, the inner debate continues as I try to figure out what I know and why, so that I can do better.

Some natural teachers may find this easy, but for me, learning to answer questions in a way that really helps students has been a decades-long lesson in humility.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 08, 2015 3:55 PM

A Programming Digression: Generating Excellent Numbers

Background

Whenever I teach my compiler course, it seems as if I run across a fun problem or two to implement in our source language. I'm not sure if that's because I'm looking or because I'm just lucky to read interesting blogs and Twitter feeds.

Farey sequences as Ford circles

For example, during a previous offering, I read on John Cook's blog about Farey's algorithm for approximating real numbers with rational numbers. This was a perfect fit for the sort of small language that my students were writing a compiler for, so I took a stab at implementing it. Because our source language, Klein, was akin to an integer assembly language, I had to unravel the algorithm's loops and assignment statements into function calls and if statements. The result was a program that computed an interesting result and that tested my students' compilers in a meaningful way. The fact that I had great fun writing it was a bonus.

This Semester's Problem

Early this semester, I came across the concept of excellent numbers. A number m is "excellent" if, when you split the sequence of its digits into two halves, a and b, b² - a² equals m. 48 is the only two-digit excellent number (8² - 4² = 48), and 3468 is the only four-digit excellent number (68² - 34² = 3468). Working with excellent numbers requires only integers and arithmetic operations, which makes them a perfect domain for our programming language.
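The definition translates directly into a few lines of Python. (This checker is my own sketch, not part of the original programs; it assumes m has an even number of digits.)

```python
def is_excellent(m):
    # Split the digits of m into a front half a and a back half b;
    # m is excellent when b*b - a*a == m.
    s = str(m)
    n = len(s) // 2
    a, b = int(s[:n]), int(s[n:])
    return b * b - a * a == m
```

For example, is_excellent(48) and is_excellent(3468) are both true.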

My first encounter with excellent numbers was Brian Foy's Computing Excellent Numbers, which discusses ways to generate numbers of this form efficiently in Perl. Foy uses some analysis by Mark Jason Dominus, written up in An Ounce of Theory Is Worth a Pound of Search, that drastically reduces the search space for candidate a's and b's. A commenter on the Programming Praxis article uses the same trick to write a short Python program to solve that challenge. Here is an adaptation of that program which prints all of the 10-digit excellent numbers:

    for a in range(10000, 100000):
        b = ((4*a**2 + 400000*a + 1)**0.5 + 1) / 2.0
        if b == int(b):
            print( int(str(a)+str(int(b))) )

I can't rely on strings or real numbers to implement this in Klein, but I could see some alternatives... Challenge accepted!

My Standard Technique

We do not yet have a working Klein compiler in class, so I prefer not to write complex programs directly in the language. It's too hard to get subtle semantic issues correct without being able to execute the code. What I usually do is this:

  • Write a solution in Python.
  • Debug it until it is correct.
  • Slowly refactor the program until it uses only a Klein-like subset of Python.

This produces what I hope is a semantically correct program, using only primitives available in Klein.

Finally, I translate the Python program into Klein and run it through my students' Klein front-ends. This parses the code to ensure that it is syntactically correct and type-checks the code to ensure that it satisfies Klein's type system. (Manifest typing is the one feature Klein has that Python does not.)

As mentioned above, Klein is something like integer assembly language, so converting to a Klein-like subset of Python means giving up a lot of features. For example, I have to linearize each loop into a sequence of one or more function calls, recursing at some point back to the function that kicks off the loop. You can see this at play in my Farey's algorithm code from before.
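For example, a simple counting loop unravels into a pair of functions like this. (This is a hypothetical illustration in Python, not the actual Farey code; the loop variables become parameters of a recursive helper.)

```python
# A Python while loop:
def sum_to(n):
    total = 0
    i = 1
    while i <= n:
        total = total + i
        i = i + 1
    return total

# The Klein-style equivalent: the loop body becomes a recursive
# helper function whose parameters carry the loop variables.
def sum_to_klein(n):
    return sum_loop(1, n, 0)

def sum_loop(i, n, total):
    if i > n:
        return total
    return sum_loop(i + 1, n, total + i)
```

Both versions compute the same value; the second uses only features that map onto Klein's functions and if expressions.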

I also have to eliminate all data types other than booleans and integers. For the program to generate excellent numbers, the most glaring hole is a lack of real numbers. The algorithm shown above depends on taking a square root, getting a real-valued result, and then coercing a real to an integer. What can I do instead?

the iterative step in Newton's method

Not to worry. sqrt is not a primitive operator in Klein, but we have a library function. My students and I implement useful utility functions whenever we encounter the need and add them to a file of definitions that we share. We then copy these utilities into our programs as needed.

sqrt was one of the first complex utilities we implemented, years ago. It uses Newton's method to compute the square root of an integer. For perfect squares, it returns the argument's true square root. For all other integers, it returns the largest integer less than or equal to the true root.
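In Python, an integer-only Newton's method along those lines might look like this (my sketch, not our actual Klein library code):

```python
def int_sqrt(n):
    # Newton's method on integers: returns floor(sqrt(n)).
    # For perfect squares this is the exact root.
    if n < 2:
        return n
    x = n
    y = (x + n // x) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2
    return x
```

So int_sqrt(49) is 7, while int_sqrt(48) is 6, the floor of the true root.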

With this answer in hand, we can change the Python code that checks whether a purported square root b is an integer using type coercion:

    b == int(b)
into Klein code that checks whether the square of a square root equals the original number:
    isSquareRoot(r : integer, n : integer) : boolean
      n = r*r

(Klein is a pure functional language, so the return statement is implicit in the body of every function. Also, without assignment statements, Klein can use = as a boolean operator.)

Generating Excellent Numbers in Klein

I now have all the Klein tools I need to generate excellent numbers of any given length. Next, I needed to generalize the formula at the heart of the Python program to work for lengths other than 10.

For any given desired length, let n = length/2. We can write any excellent number m in two ways:

  • a·10ⁿ + b (which defines it as the concatenation of its front and back halves)
  • b² - a² (which defines it as excellent)

If we set the two m's equal to one another and solve for b, we get:

    b = (1 + √(4a² + 4·10ⁿ·a + 1)) / 2

Now, as in the algorithm above, we loop through all values for a with n digits and find the corresponding value for b. If b is an integer, we check to see if the number m formed by concatenating a and b is excellent.
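Putting the pieces together, a Python sketch of the generator might look like this. (This is my own code, using the standard library's math.isqrt where Klein would use our utility function.)

```python
from math import isqrt

def excellent_numbers(length):
    # All excellent numbers with `length` digits; n = length // 2.
    n = length // 2
    found = []
    for a in range(10**(n - 1), 10**n):
        # b^2 - b - (a*10^n + a^2) = 0, so b = (1 + sqrt(disc)) / 2.
        disc = 4*a*a + 4*(10**n)*a + 1
        r = isqrt(disc)
        if r * r == disc:           # disc is a perfect square...
            b = (r + 1) // 2        # ...and odd, so b is an integer
            if b < 10**n:           # b must fit in the back half
                found.append(a * 10**n + b)
    return found
```

For instance, excellent_numbers(2) yields [48] and excellent_numbers(4) yields [3468].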

The Python loop shown above works plenty fast, but Klein doesn't have loops. So I refactored the program into one that uses recursion. This program is slower, but it works fine for numbers up to length 6:

    > python3.4 generateExcellent.py 6
    140400
    190476
    216513
    300625
    334668
    416768
    484848
    530901

Unfortunately, this version blows out the Python call stack for length 8. I set the recursion limit to 50,000, which helps for a while...

    > python3.4 generateExcellent.py 8
    16604400
    33346668
    59809776
    Segmentation fault: 11

Cool.

Next Step: See Spot Run

The port to an equivalent Klein program was straightforward. My first version had a few small bugs, which my students' parsers and type checkers helped me iron out. Now I await their full compilers, due at the end of the week, to see it run. I wonder how far we will be able to go in the Klein run-time system, which sits on top of a simple virtual machine.

If nothing else, this program will repay any effort my students make to implement the proper handling of tail calls! That will be worth a little extra-credit...

This programming digression has taken me several hours spread out over the last few weeks. It's been great fun! The purpose of Klein is to help my students learn to write a compiler. But the programmer in me has fun working at this level, trying to find ways to implement challenging algorithms and then refactoring them to run deeper or faster. I'll let you know the results soon.

I'm either a programmer or crazy. Probably both.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 20, 2015 6:02 PM

The Scientific Value of Reading Old Texts

In Numbers, Toys and Music, the editors of Plus Magazine interview Manjul Bhargava, who won a 2014 Fields Medal for his work on a problem involving a certain class of square numbers.

Bhargava talked about getting his start on problems of this sort not by studying Gauss's work from the nineteenth century, but by reading the work of the seventh-century mathematician Brahmagupta in the original Sanskrit. He said it was exciting to read original texts and old translations of original texts from at least two perspectives. Historically, you see an idea as it is encountered and discovered. It's an adventure story. Mathematically, you see the idea as it was when it was discovered, before it has been reinterpreted over many years by more modern mathematicians, using newer, fancier, and often more complicated jargon than was available to the original solver of the problem.

He thinks this is an important step in making a problem your own:

So by going back to the original you can bypass the way of thinking that history has somehow decided to take, and by forgetting about that you can then take your own path. Sometimes you get too influenced by the way people have thought about something for 200 years, that if you learn it that way that's the only way you know how to think. If you go back to the beginning, forget all that new stuff that happened, go back to the beginning. Think about it in a totally new way and develop your own path.

Bhargava isn't saying that we can ignore the history of math since ancient times. In his Fields-winning work, he drew heavily on ideas about hyperelliptic curves that were developed over the last century, as well as computational techniques unavailable to his forebears. He was prepared with experience and deep knowledge. But by going back to Brahmagupta's work, he learned to think about the problem in simpler terms, unconstrained by the accumulated expectations of modern mathematics. Starting from a simpler set of ideas, he was able to make the problem his own and find his own way toward a solution.

This is good advice in computing as well. When CS researchers tell us to read the work of McCarthy, Newell and Simon, Sutherland, and Engelbart, they are channeling the same wisdom that helped Manjul Bhargava discover new truths about the structure of square numbers.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 30, 2015 4:35 PM

Taking Courses Broad and Wide

Nearly nine years ago, digital strategist Russell Davies visited the University of Oregon to work with students and faculty in the advertising program and wrote a blog entry about his stint there. Among his reflections on what the students should be doing and learning, he wrote:

We're heading for a multi-disciplinary world and that butts right up against a university business model. If I were preparing myself for my job right now I'd do classes in film editing, poetry, statistics, anthropology, business administration, copyright law, psychology, drama, the history of art, design, coffee appreciation, and a thousand other things. Colleges don't want you doing that, that destroys all their efficiencies, but it's what they're going to have to work out.

I give similar advice to prospective students of computer science: If they intend to take their CS degrees out into the world and make things for people, they will want to know a little bit about many different things. To maximize the possibilities of their careers, they need a strong foundation in CS and an understanding of all the things that shape how software and software-enhanced gadgets are imagined, made, marketed, sold, and used.

Just this morning, a parent of a visiting high school student said, after hearing about all the computer science that students learn in our programs, "So, our son should probably drop his plans to minor in Spanish?" They got a lot more than a "no" out of me. I talked about the opportunities to engage with the growing population of Spanish-speaking Americans, even here in Iowa; the opportunities available to work for companies with international divisions; and how learning a foreign language can help students study and learn programming languages differently. I was even able to throw in a bit about grammars and the role they play in my compiler course this semester.

I think the student will continue with his dream to study Spanish.

I don't think that the omnivorous course of study that Davies outlines is at odds with the "efficiencies" of a university at all. It fits pretty well with a liberal arts education, which even our B.S. students have time for. But it does call for some thinking ahead, planning to take courses from across campus that aren't already on some department's list of requirements. A good advisor can help with that.

I'm guessing that computer science students and "creatives" are not the only ones who will benefit from seeking a multi-disciplinary education these days. Davies is right. All university graduates will live in a multi-disciplinary world. It's okay for them (and their parents) to be thinking about careers when they are in school. But they should prepare for a world in which general knowledge and competencies buoy up their disciplinary knowledge and help them adapt over time.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

September 11, 2015 3:55 PM

Search, Abstractions, and Big Epistemological Questions

Andy Soltis is an American grandmaster who writes a monthly column for Chess Life called "Chess to Enjoy". He has also written several good books, both recreational and educational. In his August 2015 column, Soltis talks about a couple of odd ways in which computers interact with humans in the chess world, ways that raise bigger questions about teaching and the nature of knowledge.

As most people know, computer programs -- even commodity programs one can buy at the store -- now play chess better than the best human players. Less than twenty years ago, Deep Blue first defeated world champion Garry Kasparov in a single game. A year later, Deep Blue defeated Kasparov in a closely contested six-game match. By 2005, computers were crushing Top Ten players with regularity. These days, world champion Magnus Carlsen is no match for his chess computer.

a position in which humans see the win, but computers don't

Yet there are still moments where humans shine through. Soltis opens with a story in which two GMs were playing a game the computers thought Black was winning, when suddenly Black resigned. Surprised journalists asked the winner, GM Vassily Ivanchuk, what had happened. It was easy, he said: it only looked like Black was winning. Well beyond the computers' search limits, it was White that had a textbook win.

How could the human players see this? Were they searching deeper than the computers? No. They understood the position at a higher level, using abstractions such as the king "being in the square" of a passed pawn and split passed pawns forming "pants". (We chessplayers are an odd lot.)

When you can define 'flexibility' in 12 bits,
it will go into the program.

Attempts to program computers to play chess using such abstract ideas did not work all that well. Concepts like king safety and piece activity proved difficult to implement in code, but eventually found their way into the programs. More abstract concepts like "flexibility", "initiative", and "harmony" have proven all but impossible to implement. Chess programs got better -- quickly -- when two things happened: (1) programmers began to focus on search, implementing metrics that could be applied rapidly to millions of positions, and (2) computer chips got much, much faster.

Pawn Structure Chess, by Andy Soltis

The result is that chess programs can beat us by seeing farther down the tree of possibilities than we do. They make moves that surprise us, puzzle us, and even offend our sense of beauty: "Fischer or Tal would have played this move; it is much more elegant." But they win, easily -- except when they don't. Then we explain why, using ideas that express an understanding of the game that even the best chessplaying computers don't seem to have.

This points out one of the odd ways computers relate to us in the world of chess. Chess computers crush us all, including grandmasters, using moves we wouldn't make and many of us do not understand. But good chessplayers do understand why moves are good or bad, once they figure it out. As Soltis says:

And we can put the explanation in words. This is why chess teaching is changing in the computer age. A good coach has to be a good translator. His students can get their machine to tell them the best move in any position, but they need words to make sense of it.

Teaching computer science at the university is affected by a similar phenomenon. My students can find on the web code samples to solve any problem they have, but they don't always understand them. This problem existed in the age of the book, too, but the web makes available so much material, often undifferentiated and unexplained, so, so quickly.

The inverse of computers making good moves we don't understand brings with it another oddity, one that plays to a different side of our egos. When a chess computer loses -- gasp! -- or fails to understand why a human-selected move is better than the moves it recommends, we explain it using words that make sense of the human's move. These are, of course, the same words and concepts that fail us most of the time when we are looking for a move to beat the infernal machine. Confirmation bias lives on.

Soltis doesn't stop here, though. He realizes that this strange split raises a deeper question:

Maybe it's one that only philosophers care about, but I'll ask it anyway:

Are concepts like "flexibility" real? Or are they just artificial constructs, created by and suitable only for feeble, carbon-based minds?

(Philosophers are not the only ones who care. I do. But then, the epistemology course I took in grad school remains one of my two favorite courses ever. The second was cognitive psychology.)

Aristotle

We can implement some of our ideas about chess in programs, and those ideas have helped us create machines we can no longer defeat over the board. But maybe some of our concepts are simply fictions, "just so" stories we tell ourselves when we feel the need to understand something we really don't. I don't think so, but the pragmatist in me keeps pushing for better evidence.

Back when I did research in artificial intelligence, I always chafed at the idea of neural networks. They seemed to be a fine model of how our brains worked at the lowest level, but the results they gave did not satisfy me. I couldn't ask them "why?" and receive an answer at the conceptual level at which we humans seem to live. I could not have a conversation with them in words that helped me understand their solutions, or their failures.

Now we live in a world of "deep learning", in which Google Translate can do a dandy job of translating a foreign phrase for me but never tell me why it is right, or explain the subtleties of choosing one word instead of another. Add more data, and it translates even better. But I still want the sort of explanation that Ivanchuk gave about his win or the sort of story Soltis can tell about why a computer program only drew a game because it saddled itself with inflexible pawn structure.

Perhaps we have reached the limits of my rationality. More likely, though, is that we will keep pushing forward, bringing more human concepts and abstractions within the bounds of what programs can represent, do, and say. Researchers like Douglas Hofstadter continue the search, and I'm glad. There are still plenty of important questions to ask about the nature of knowledge, and computer science is right in the middle of asking and answering them.

~~~~

IMAGE 1. The critical position in Ivanchuk-Jobava, Wijk aan Zee 2015, the game to which Soltis refers in his story. Source: Chess Life, August 2015, Page 17.

IMAGE 2. The cover of Andy Soltis's classic Pawn Structure Chess. Source: the book's page at Amazon.com.

IMAGE 3. A bust of Aristotle, who confronted Plato's ideas about the nature of ideals. Source: Classical Wisdom Weekly.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

August 25, 2015 1:57 PM

The Art of Not Reading

The beginning of a new semester brings with it a crush of new things to read, write, and do, which means it's a good time to remember this advice from Arthur Schopenhauer:

Hence, in regard to our subject, the art of not reading is highly important. This consists in not taking a book into one's hand merely because it is interesting the great public at the time -- such as political or religious pamphlets, novels, poetry, and the like, which make a noise and reach perhaps several editions in their first and last years of existence. Remember rather that the man who writes for fools always finds a large public: and only read for a limited and definite time exclusively the works of great minds, those who surpass other men of all times and countries, and whom the voice of fame points to as such. These alone really educate and instruct.

"The man who writes for fools always finds a large public." You do not have to be part of it. Time is limited. Read something that matters.

The good news for me is that there is a lot of writing about compilers by great minds. This is, of course, also the bad news. Part of my job is to help my students navigate the preponderance of worthwhile readings.

Reading in my role as department head is an altogether different matter...

~~~~

The passage above is from On Books and Reading, which is available via Project Gutenberg, a wonderful source of many great works.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

August 23, 2015 10:12 AM

Science Students Should Learn How to Program, and Do Research

Physicist, science blogger, and pop science author Chad Orzel offered some advice for prospective science students in a post on his Forbes blog last week. Among other things, he suggests that science students learn to program. Orzel is among many physics profs who integrate computer simulations into their introductory courses, using the Matter and Interactions curriculum (which you may recall reading about here in a post from 2007).

I like the way Orzel explains the approach to his students:

When we start doing programming, I tell students that this matters because there are only about a dozen problems in physics that you can readily solve exactly with pencil and paper, and many of them are not that interesting. And that goes double, maybe triple for engineering, where you can't get away with the simplifying spherical-cow approximations we're so fond of in physics. Any really interesting problem in any technical field is going to require some numerical simulation, and the sooner you learn to do that, the better.

This advice complements Astrachan's Law and its variants, which assert that we should not ask students to write a program if they can do the task by hand. Conversely, if they can't solve their problems by hand, then they should get comfortable writing programs that can. (Strictly speaking, that's the inverse of Astrachan, not the converse, but "inversely" doesn't sound as good.) Programming is a medium for scientists, just as math is, and it becomes more important as they try to solve more challenging problems.

Orzel and Astrachan both know that the best way to learn to program is to have a problem you need a computer to solve. Curricula such as Matter and Interactions draw on this motivation and integrate computing directly into science courses. This is good news for us in computer science. Some of the students who learn how to program in their science courses find that they like it and want to learn more. We have just the courses they need to go deeper.

I concur with all five of Orzel's suggestions for prospective science students. They apply as well to computer science students as to those interested in the physical sciences. When I meet with prospective CS students and their families, I emphasize especially that students should get involved in research. Here is Orzel's take:

While you might think you love science based on your experience in classes, classwork is a pale imitation of actual science. One of my colleagues at Williams used a phrase that I love, and quote all the time, saying that "the hardest thing to teach new research students is that this is not a three-hour lab."

CS students can get involved in empirical research, but they also have the ability to write their own programs to explore their own ideas and interests. The world of open source software enables them to engage the discipline in ways that preceding generations could only have dreamed of. By doing empirical CS research with a professor or working on substantial programs that have users other than the creators, students can find out what computer science is really about -- and find out what they want to devote their lives to.

As Orzel points out, this is one of the ways in which small colleges are great for science students: undergrads can more readily become involved in research with their professors. This advantage extends to smaller public universities, too. In the past year, we have had undergrads do some challenging work on bioinformatics algorithms, motion virtual manipulatives, and system security. These students are having a qualitatively different learning experience than students who are only taking courses, and it is an experience that is open to all undergrad students in CS and the other sciences here.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

August 19, 2015 4:07 PM

Working Too Much Means Never Having to Say "No"

Among the reasons David Heinemeier Hansson gives in his advice to Fire the Workaholics is that working too much is a sign of bad judgment:

If all you do is work, your value judgements are unlikely to be sound. Making good calls on "is it worth it?" is absolutely critical to great work. Missing out on life in general to put more hours in at the office screams "misguided values".

I agree, in two ways. First, as DHH says, working too much is itself a general indicator that your judgment is out of whack. Second is the more specific case:

For workaholics, doing more work always looks like a reasonable option. As a result, when you are trying to decide, "Should I make this or not?", you never have to choose not to make the something in question -- even when not making it is the right thing to do. That sort of indifferent decision making can be death in any creative endeavor.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

August 04, 2015 1:00 PM

Concrete, Then Abstract

One of the things that ten years teaching the same topic has taught Daniel Lemire is that students generally learn more effectively when they learn practical skills first and only then confront the underlying theory:

Though I am probably biased, I find that it is a lot harder to take students from a theoretical understanding to a practical one... than to take someone with practical skills and teach him the theory. My instinct is that most people can more easily acquire an in-depth practical knowledge through practice (since the content is relevant) and they then can build on this knowledge to acquire the theory.

He summarizes the lesson he learned as:

A good example, well understood, is worth a hundred theorems.

My years of teaching have taught me similar lessons. I described a related idea in Examples First, Names Last: showing students examples of an idea before giving it a name.

Lemire's experience teaching XML and my experience teaching a number of topics, including the object-oriented programming example in that blog post, are specific examples of a pattern I usually call Concrete, Then Abstract. I have found this to be an effective strategy in my teaching and writing. I may have picked up the name from Ralph Johnson at ChiliPLoP 2003, where we were part of a hot topic group sketching programming patterns for beginning programmers. Ralph is a big proponent of showing concrete examples before introducing abstract ideas. You can see that in just about every pattern, paper, and book he has written.

My favorite example of "Concrete, Then Abstract" this week is in an old blog entry by Scott Vokes, Making Diagrams with Graphviz. I recently came back to an idea I've had on hold for a while: using Graphviz to generate a diagram showing all of my department's courses and prerequisites. Whenever I return to Graphviz after time away, I bypass its documentation for a while and pull up instead a cached link to Scott's short introduction. I immediately scroll down to this sample program written in Graphviz's language, DOT:

an example program in Graphviz's DOT language

... and the corresponding diagram produced by Graphviz:

an example diagram produced by Graphviz

This example makes me happy, and productive quickly. It demonstrates an assortment of the possibilities available in DOT, including several specific attributes, and shows how they are rendered by Graphviz. With this example as a starting point, I can experiment with variations of my own. If I ever want or need more, I dig deeper and review the grammar of DOT in more detail. By that time, I have a pretty good practical understanding of how the language works, which makes remembering how the grammar works easier.
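For a flavor of the language, a small DOT program of my own (not Scott's example) might describe course prerequisites like the ones in my department's diagram:

```dot
digraph prereqs {
    rankdir=LR;                      // lay the graph out left to right
    node [shape=box, style=rounded];
    "CS 1" -> "CS 2";
    "CS 2" -> "Data Structures";
    "Data Structures" -> "Algorithms";
    "Discrete Structures" -> "Algorithms" [style=dashed];
}
```

Running this through the dot tool produces a left-to-right graph with one box per course and an arrow for each prerequisite link.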

Sometimes, the abstract idea to learn, or re-learn, is a context-free grammar. Sometimes, it's a rule for solving a class of problems or a design pattern. And sometimes, it's a theorem or a theory. In all these cases, examples provide hooks that help us learn an abstract idea that is initially hard for us to hold in our heads.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

July 29, 2015 2:10 PM

How Do You Know If It Is Good? You Don't.

In the Paris Review's Garrison Keillor, The Art of Humor No. 2, Keillor thinks back to his decision to become a writer, which left him feeling uncertain about himself:

Someone once asked John Berryman, How do you know if something you've written is good? And John Berryman said, You don't. You never know, and if you need to know then you don't want to be a writer.

This doesn't mean that you don't care about getting better. It means that you aren't doing it to please someone else, or at least that your doing it is not predicated on what someone else thinks. You are doing it because that's what you think about. It means that you keep writing, whether it's good or not. That's how you get better.

It's always fun to watch our students wrestle with this sort of uncertainty and come out on the other side of the darkness. Last fall, I taught first-semester freshmen who were just beginning to find out if they wanted to be programmers or computer scientists, asking questions and learning a lot about themselves. This fall, I'm teaching our senior project course, with students who are nearing the end of their time at the university. Many of them think a lot about programming and programming languages, and they will drive the course with their questions and intensity. As a teacher, I enjoy both ends of the spectrum.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

July 27, 2015 2:23 PM

The Flip Side to "Programming for All"

a thin volume of William Blake

We all hear the common refrain these days that more people should learn to program, not just CS majors. I agree. If you know how to program, you can make things. Even if you don't write many programs yourself, you are better prepared to talk to the programmers who make things for you. And even if you don't need to talk to programmers, you have expanded your mind a bit to a way of thinking that is changing the world we live in.

But there are two sides to this equation, as Chris Crawford laments in his essay, Fundamentals of Interactivity:

Why is it that our entertainment software has such primitive algorithms in it? The answer lies in the people creating them. The majority are programmers. Programmers aren't really idea people; they're technical people. Yes, they use their brains a great deal in their jobs. But they don't live in the world of ideas. Scan a programmer's bookshelf and you'll find mostly technical manuals plus a handful of science fiction novels. That's about the extent of their reading habits. Ask a programmer about Rabelais, Vivaldi, Boethius, Mendel, Voltaire, Churchill, or Van Gogh, and you'll draw a blank. Gene pools? Grimm's Law? Gresham's Law? Negentropy? Fluxions? The mind-body problem? Most programmers cannot be troubled with such trivia. So how can we expect them to have interesting ideas to put into their algorithms? The result is unsurprising: the algorithms in most entertainment products are boring, predictable, uninformed, and pedestrian. They're about as interesting in conversation as the programmers themselves.

We do have some idea people working on interactive entertainment; more of them show up in multimedia than in games. Unfortunately, most of the idea people can't program. They refuse to learn the technology well enough to express themselves in the language of the medium. I don't understand this cruel joke that Fate has played upon the industry: programmers have no ideas and idea people can't program. Arg!

My office bookshelf occasionally elicits a comment or two from first-time visitors, because even here at work I have a complete works of Shakespeare, a thin volume of William Blake (I love me some Blake!), several philosophy books, and "The Britannica Book of Usage". I really should have some Voltaire here, too. I do cover one of Crawford's bases: a recent blog entry made a software analogy to Gresham's Law.

In general, I think you're more likely to find a computer scientist who knows some literature than you are to find a literary professional who knows much CS. That's partly an artifact of our school system and partly a result of the historically wider reach of literature and the humanities. It's fun to run into a colleague from across campus who has read deeply in some area of science or math, but rare.

However, we are all prone to fall into the chasm of our own specialties and miss out on the well-roundedness that makes us better at whatever specialty we practice. That's one reason that, when high school students and their parents ask me what students should take to prepare for a CS major, I tell them: four years of all the major subjects, including English, math, science, social science, and the arts; plus whatever else interests them, because that's often where they will learn the most. All of these topics help students to become better computer scientists, and better people.

And, not surprisingly, better game developers. I agree with Crawford that more programmers should learn enough other stuff to be idea people, too. Even if they don't make games.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

June 29, 2015 1:58 PM

Bridging the Gap Between Learning and Doing

a sketch of bridging the gap

I recently learned about the work of Amelia McNamara via this paper published as Research Memo M-2014-002 by the Viewpoints Research Institute. McNamara is attacking an important problem: the gap between programming tools for beginners and programming tools for practitioners. In Future of Statistical Programming, she writes:

The basic idea is that there's a gap between the tools we use for teaching/learning statistics, and the tools we use for doing statistics. Worse than that, there's no trajectory to make the connection between the tools for learning statistics and the tools for doing statistics. I think that learners of statistics should also be doers of statistics. So, a tool for statistical programming should be able to step learners from learning statistics and statistical programming to truly doing data analysis.

"Learners of statistics should also be doers of statistics." -- yes, indeed. We see the same gap in computer science. People who are learning to program are programmers. They are just working at a different level of abstraction and complexity. It's always a bit awkward, and often misleading, when we give novice programmers a different set of tools than we give professionals. Then we face a new learning barrier when we ask them to move up to professional tools.

That doesn't mean that we should turn students loose unprotected in the wilds of C++, but it does require that we have a pedagogically sound trajectory for making the connection between novice languages and tools and those used by more advanced programmers.

It also doesn't mean that we can simply choose a professional language that is in some ways suitable for beginners, such as Python, and not think any more about the gap. My recent experience reminds me that there is still a lot of complexity to help our students deal with.

McNamara's Ph.D. dissertation explored some of the ways to bridge this gap in the realm of statistics. It starts from the position that the gap should not exist and suggests ways to bridge it, via both better curricula and better tools.

Whenever I experience this gap in my teaching or see researchers trying to make it go away, I think back to Alan Kay's early vision for Smalltalk. One of the central tenets of the Smalltalk agenda was to create a language flexible and rich enough that it could accompany the beginner as he or she grew in knowledge and skill, opening up to a new level each time the learner was ready for something more powerful. Just as a kindergartener learns the same English language used by Shakespeare and Joyce, a beginning programmer might learn the same language as Knuth and Steele, one that opens up to a new level each time the learner is ready.

We in CS haven't done an especially good job at this over the years. Matthias Felleisen and the How to Design Programs crew have made perhaps the most successful effort thus far. (See *SL, Not Racket for a short note on the idea.) But this project has not made a lot of headway yet in CS education. Perhaps projects such as McNamara's can help make inroads for domain-specific programmers. Alan Kay may harbor a similar hope; he served as a member of McNamara's Ph.D. committee.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

June 16, 2015 3:17 PM

Dr. Seuss on Research

Reading about the unusual ideas in TempleOS reminded me of a piece of advice I received from the great philosopher of epistemology, Dr. Seuss:

If you want to get eggs
you can't buy at a store,
You have to do things
never thought of before.

As Peter T. Hooper learned in Scrambled Eggs Super, discovering or creating something new requires that we think unusual, or even outrageous, thoughts.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 09, 2015 2:48 PM

I'm Behind on Blogging About My Courses...

... so much so, that I may never catch up. The last year and a half have been crazy, and I simply have not set aside enough time to blog. A big part of the time crunch was teaching three heavy preps in 2014: algorithms, agile software development, and our intro course. It is fitting, then, that blogging about my courses has suffered most of all -- even though, in the moment, I often have plenty to say. Offhand, I can think of several posts for which I once had big plans and for which I still have drafts or outlines sitting in my ideas/ folder:

  • readers' thoughts on teaching algorithms in 2014, along with changes I made to my course. Short version: The old canon still covers most of the important bases.
  • reflections on teaching agile dev again after four years. Short version: The best learning still happens in the trenches working with the students, who occasionally perplex me and often amaze me.
  • reflections on teaching Python in the intro course for the first time. Short version: On balance, there are many positives, but wow, there is a lot of language there, and way too many resources.
  • a lament on teaching programming languages principles when the students don't seem to connect with the material. Surprise ending: Some students enjoyed the course more than I realized.

Thoughts on teaching Python stand out as especially trenchant even many months later. The intro course is so important, because it creates habits and mindsets in students that often long outlive the course. Teaching a large, powerful, popular programming language to beginners in the era of Google, Bing, and DuckDuckGo is a Sisyphean task. No matter how we try to guide the students' introduction to language features, the Almighty Search Engine sits ever at the ready, delivering size and complexity when they really need simple answers. Maybe we need language levels à la the HtDP folks.

Alas, my backlog is so deep that I doubt I will ever have time to cover much of it. Life goes on, and new ideas pop up every day. Perhaps I can make time for the posts outlined above.

Right now, my excitement comes from the prospect of teaching my compilers course again for the first time in two years. The standard material still provides a solid foundation for students who are heading off into the world of software development. But in the time since I last taught the course, some neat things have happened in the compiler world that will make the course better, if only by putting the old stuff into a more modern context. Consider announcements just this week about Swift, in particular that the source code is being open-sourced and the run-time ported to Linux. The moment these two things happen, the language instantly becomes of greater interest to more of my students. Its openness also makes it more suitable as content for a university course.

So, there will be plenty to blog about, even if I leave my backlog untouched. That's a good thing.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

May 22, 2015 1:58 PM

When It Comes to Learning, Motivation and Study Habits Trump Technology

A lot of people have been talking about Kentaro Toyama's Why Technology Will Never Fix Education, which appeared in the Chronicle of Higher Education earlier this week. Here is the money paragraph:

The real obstacle in education remains student motivation. Especially in an age of informational abundance, getting access to knowledge isn't the bottleneck, mustering the will to master it is. And there, for good or ill, the main carrot of a college education is the certified degree and transcript, and the main stick is social pressure. Most students are seeking credentials that graduate schools and employers will take seriously and an environment in which they're prodded to do the work. But neither of these things is cheaply available online.

My wife just completed the second of two long-term substitute teaching assignments this year in a local elementary school, so we have been discussing the daily challenges that teachers face. The combination of student motivation and support at home accounts for most of the variation in how well students perform and how well any given class operates. I see a similar pattern at the university. By the time students reach me, the long-term effects of strong or weak support at home have crystallized into study habits and skills. The combination of student motivation and study skills accounts for most of the variation I see in whether students succeed or struggle in their university courses.

This all reminds me of a short passage from Tyler Cowen's book, Average Is Over:

The more information that's out there, the greater the returns to just being willing to sit down and apply yourself. Information isn't what's scarce; it's the willingness to do something with it.

The easy availability of information made possible by technology places a higher premium on the ability of students to sit down and work hard, and the willingness to do so. We can fool ourselves into thinking we know more than we do when we look things up quickly, but many students can just as easily access the same information.

We have found ways to use technology to make information easily available, but we haven't found a way to make motivation an abundant resource. Motivation has to come from within. So do the skills needed to use the information. We can at least help students develop study habits and skills through school and family life, though these are best learned early in life. It is hard for students to change 15-20 years of bad habits after they get to college.

The irony for young people is that, while they live in an era of increasingly available information, the onus rests more than ever on what they do. That is both good news and bad.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 13, 2015 12:12 PM

The Big Picture

I just turned in my grades. For the most part, this is boring paperwork. Neither students nor faculty really like grades; they are something we all have to do as a part of the system. A lot of people would like to change the system and eliminate grades, but every alternative has its own weaknesses. So we all muddle along.

But there is a bigger picture, one which Matt Reed expresses eloquently:

Tolstoy once claimed that there are really only two stories, and we keep telling each of them over and over again: a stranger comes to town, and a hero goes on a quest. In higher education, we live those two stories continuously. Every semester, a new crop of strangers come to town. And every semester, we set a new group of heroes off on their respective quests.

It's May, so we see a new group of young people set off on their own quests. In a few months, we will welcome a new group of strangers. In between are the students who are in the middle of their own "stranger comes to town" story, who will return to us in the fall a little different yet much the same.

That's the big picture.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 09, 2015 9:28 AM

A Few Thoughts on Graduation Day

Today is graduation day for the Class of 2015 at my university. CS students head out into the world, most with a job in hand or nearly so, ready to apply their hard-earned knowledge and skills to all variety of problems. It's an exciting time for them.

This week also brought two other events that have me thinking about the world in which my students will live and the ways in which we have prepared them. First, on Thursday, the Technology Association of Iowa organized a #TechTownHall on campus, where the discussion centered on creating and retaining a pool of educated people to participate in, and help grow, the local tech sector. I'm a little concerned that the TAI blog says that "A major topic was curriculum and preparing students to provide immediate value to technology employers upon graduation." That's not what universities do best. But then, that is often what employers want and need.

Second, over the last two mornings, I read James Fallows's classic The Case Against Credentialism, from the archives of The Atlantic. Fallows gives a detailed account of the "professionalization" of many lines of work in the US and the role that credentials, most prominently university degrees, have played in the movement. He concludes that our current approach is biased heavily toward evaluating the "inputs" to the system, such as early success in school and other demonstrations of talent while young, rather than assessing the outputs, namely, how well people actually perform after earning their credentials.

Two passages toward the end stood out for me. In one, Fallows wonders if our professionalized society creates the wrong kind of incentives for young people:

An entrepreneurial society is like a game of draw poker; you take a lot of chances, because you're rarely dealt a pat hand and you never know exactly what you have to beat. A professionalized society is more like blackjack, and getting a degree is like being dealt nineteen. You could try for more, but why?

Keep in mind that this article appeared in 1985. Entrepreneurship has taken a much bigger share of the public conversation since then, especially in the tech world. Still, most students graduating from college these days are likely thinking of ways to convert their nineteens into steady careers, not ways to risk it all on the next Amazon or Uber.

Then this quote from "Steven Ballmer, a twenty-nine-year-old vice-president of Microsoft", on how the company looked for new employees:

We go to colleges not so much because we give a damn about the credential but because it's hard to find other places where you have large concentrations of smart people and somebody will arrange the interviews for you. But we also have a lot of walk-on talent. We're looking for programming talent, and the degree is in no way, shape, or form very important. We ask them to send us a program they've written that they're proud of. One of our superstars here is a guy who literally walked in off the street. We talked him out of going to college and he's been here ever since.

Who would have guessed in 1985 the visibility and impact that Ballmer would have over the next twenty years? Microsoft has since evolved from the entrepreneurial upstart to the staid behemoth, and now is trying to reposition itself as an important player in the new world of start-ups and mobile technology.

Attentive readers of this blog may recall that I fantasize occasionally about throwing off the shackles of the modern university, which grow more restrictive every year as the university takes on more of the attributes of corporate and government bureaucracy. In one of my fantasies, I organize a new kind of preparatory school for prospective software developers, one with a more modern view of learning to program but also an attention to developing the whole person. That might not satisfy corporate America's need for credentials, but it may well prepare students better for a world that needs poker players as much as it needs blackjack players. But where would the students come from?

So, on a cloudy graduation day, I think about Fallows's suggestion that more focused vocational training is what many grads need, about the real value of a liberal university education to both students and society, and about how we can best prepare CS students to participate in the world. It is a world that needs not only their technical skills but also their understanding of what tech can and cannot do. As a society, we need them to take a prominent role in civic and political discourse.

One final note on the Fallows piece. It is long, dragging a bit in the middle like a college research paper, but it opens and closes strongly. With a little skimming through the parts of less interest, it is worth a read. Thanks to Brian Marick for the recommendation.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

April 29, 2015 1:52 PM

Beautiful Sentences: Scientific Data as Program

On the way to making a larger point about the role of software in scientific research, Konrad Hinsen writes these beautiful sentences:

Software is just data that can be interpreted as instructions for a computer. One could conceivably write some interpreter that turns previously generated data into software by executing it.

They express one side of one of the great ideas of computer science, the duality of program and data:

  • Every program is data to some other program, and
  • every set of data is a program to some machine.

This is one of the reasons why it is so important for CS students to study the principles of programming languages, create languages, and build interpreters. These activities help bring this great idea to life and prepare those who understand it to solve problems in ways that are otherwise hard to imagine.
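A tiny sketch makes the duality concrete. This is my own illustration, with an invented two-operation instruction set: the same Python list is inert data to one function and an executable program to another.

```python
def interpret(instructions):
    """Run a tiny stack machine over a list of (op, arg) tuples."""
    stack = []
    for op, arg in instructions:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# The same list is "just data" to Python's built-in functions...
program = [("push", 2), ("push", 3), ("add", None),
           ("push", 4), ("mul", None)]
print(len(program))        # 5 -- data to one observer

# ...and a program to our interpreter: (2 + 3) * 4
print(interpret(program))  # 20 -- a program to another
```

Writing even a toy interpreter like this one is often the moment when the duality stops being a slogan and becomes something a student can use.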

Besides, the duality is a thing of beauty. We don't have to use it as a tool in order to appreciate this sublime truth.

As Hinsen writes, few people outside of computer science (and, sadly, too many within CS) appreciate "the particular status of software as both an information carrier and a tool". The same might be said for our appreciation of data, and the role that language plays in bridging the gap between the two.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 26, 2015 9:55 AM

Yesterday's Questions Can Have Different Answers Today

I wrote on Twitter Thursday [ 1 | 2 ] that I end up modifying my lecture notes every semester, no matter how well done they were the last time I taught the course. From one semester to the next, I find that I am more likely to change the introductions, transitions, and conclusions of a session than the body. The intros, transitions, and conclusions help to situate the material in a given place and time: the context of this semester and this set of students. The content, once refined, tends to stabilize, though occasionally I feel a need to present even it in a different way, to fit the current semester.

Novelist Italo Calvino knew this feeling as well, when he was preparing to be interviewed:

Rarely does an interviewer ask questions you did not expect. I have given a lot of interviews and I have concluded that the questions always look alike. I could always give the same answers. But I believe I have to change my answers because with each interview something has changed either inside myself or in the world. An answer that was right the first time may not be right again the second.

This echoes my experience preparing for lecture. The answer that was right the last time does not seem right again this time. Sometimes, I have changed. With any luck, I have learned new things since the last time I taught the course, and that makes for a better story. Sometimes, the world has changed: a new programming language such as Clojure or Scala has burst onto the scene, or a new trend in industry such as mobile app development has made a different set of issues relevant to the course. I need to tell a different story that acknowledges -- and takes advantage of -- these changes.

Something else always changes for a teacher, too: the students. It's certainly true the students in the class are different every time I teach a course. But sometimes, the group is so different from past groups that the old examples, stories, and answers just don't seem to work. Such has been the case for me this semester. I've had to work quite a bit to understand how my students think and incorporate that into my class sessions and homework assignments. This is part of the fun and challenge of being a teacher.

We have to be careful not to take this analogy too far. Teaching computer science is different from an author giving an interview about his or her life. For one thing, there is a more formal sense of objective truth in the content of, say, a programming language course. An object is still a closure; a closure is still an object that other code can interact with over time. These answers tend to stay the same over time. But even as a course communicates the same fundamental truths from semester to semester, the stories we need to tell about these truths will change.

Ever the fantastic writer, Calvino saw in his interview experience the shape of a new story, a meta-story of sorts:

This could be the basis of a book. I am given a list of questions, always the same; every chapter would contain the answers I would give at different times. ... The changes would then become the itinerary, the story that the protagonist lives. Perhaps in this way I could discover some truths about myself.

This is one of the things I like about teaching. I often discover truths about myself, and occasionally transform myself.

~~~~

The passages quoted above come from The Art of Fiction No. 130, Italo Calvino in The Paris Review. It's not the usual Paris Review interview, as Calvino died before the interviewer was done. Instead, it is a pastiche of four different sources. It's a great read nonetheless.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

April 20, 2015 4:02 PM

"Disjunctive Inference" and Learning to Program

Over the weekend, I read Hypothetical Reasoning and Failures of Disjunctive Inference, a well-sourced article on the problems people have making disjunctive inferences. It made me think about some of the challenges students have learning to program.

Disjunctive inference is reasoning that requires us to consider hypotheticals. A simple example from the article is "the married problem":

Jack is looking at Ann, but Ann is looking at George. Jack is married, but George is not. Is a married person looking at an unmarried person?
  1. Yes.
  2. No.
  3. Cannot be determined.

The answer is yes, of course, which is obvious if we consider the two possible cases for Ann. Most people, though, stop thinking as soon as they realize that the answer hinges on Ann's status. They don't know her status, so they can't know the answer to the question. Even so, most everyone understands the answer as soon as the reasoning is explained to them.
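The case analysis is mechanical enough to hand to a computer. Here is a short Python sketch of the enumeration (my own illustration, not from the article):

```python
def married_person_looks_at_unmarried(ann_married):
    """Check whether some married person looks at some unmarried person."""
    looks = [("Jack", "Ann"), ("Ann", "George")]
    married = {"Jack": True, "George": False, "Ann": ann_married}
    return any(married[a] and not married[b] for a, b in looks)

# Case 1: Ann is married   -> Ann (married) looks at George (unmarried).
# Case 2: Ann is unmarried -> Jack (married) looks at Ann (unmarried).
# Either way, the answer is yes.
assert all(married_person_looks_at_unmarried(case) for case in (True, False))
```

The program does exactly what the stalled human reasoner does not: it considers both hypothetical worlds and notices that the conclusion holds in each.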

The reasons behind our difficulties handling disjunctive inferences are complex, including both general difficulties we have with hypotheticals and a cognitive bias sometimes called cognitive miserliness: we seek to apply the minimum amount of effort to solving problems and making decisions. This is a reasonable evolutionary bias in many circumstances, but here it is maladaptive.

The article is fascinating and well worth a full read. It points to a number of studies in cognitive psychology that seek to understand how humans behave in the face of disjunctive inferences, and why. It closes with some thoughts on improving disjunctive reasoning ability, though there are no quick fixes.

As I read the article, it occurred to me that learning to program places our students in a near-constant state of hypothetical reasoning and disjunctive inference. Tracing code that contains an if statement asks them to think about alternative paths and alternative outcomes. To understand what is true after the if statement executes is disjunctive inference.

Something similar may be true for a for loop, which executes once each for multiple values of a counter, and a while loop, which runs an indeterminate number of times. These aren't disjunctive inferences, but they do require students to think hypothetically. I wonder if the trouble many of my intro CS students had last semester learning function calls involved failures of hypothetical reasoning as much as difficulties with generalization.

And think about learning to debug a program.... How much of that process involves hypotheticals and even full-on disjunctive inference? If most people have trouble with this sort of reasoning even on simple tasks, imagine how much harder it must be for young people who are learning a programming language for the first time and trying to reason about programs that are much more complex than "the married problem".

Thinking explicitly about this flaw in human thinking may help us teachers do a better job helping students to learn. In the short term, we can help them by giving more direct prompts for how to reason. Perhaps we can also help them learn to prompt themselves when faced with certain kinds of problems. In the longer term, we can perhaps help them to develop a process for solving problems that mitigates the bias. This is all about forming useful habits of thought.

If nothing else, reading this article will help me be slower to judge my students' work ethic. What looks like laziness is more likely a manifestation of a natural bias to exert the minimum amount of effort in solving problems. We are all cognitive misers to a certain extent, and that serves us well. But not always when we are writing and debugging programs.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 09, 2015 3:26 PM

Two Factors for Succeeding at Research, or Investing

Think differently, of course. But be humble. These attitudes go hand-in-hand.

To make money in the markets, you have to think independently and be humble. You have to be an independent thinker because you can't make money agreeing with the consensus view, which is already embedded in the price. Yet whenever you're betting against the consensus there's a significant probability you're going to be wrong, so you have to be humble.

This applies equally well to doing research. You can't make substantial progress with the conventional wisdom, because it defines and limits the scope of the solution. So think differently. But when you leave the safety of conventional wisdom, you find yourself working in an immense space of ideas. There is a significant chance that you will be wrong a lot. So be humble.

(The quote is from Learn or Die: Using Science to Build a Leading-Edge Learning Organization by Ray Dalio.)


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

April 08, 2015 2:14 PM

Teaching and the Transformation of Self

After reading What Wittgenstein Learned from Teaching Elementary School in The Paris Review, I think that, while Wittgenstein seems to have had some ideas for making elementary education better, I probably wouldn't want to have him as my teacher. This passage, though, really stuck with me:

We all struggle to form a self. Great teaching, Wittgenstein reminds us, involves taking this struggle and engaging in it with others; his whole life was one great such struggle. In working with poor children, he wanted to transform himself, and them.

The experience of teaching grade school for six years seems to have changed Wittgenstein and how he worked. In his later work, he famously moved away from the idea that language could only function by picturing objects in the world. There is no direct evidence that working with children was the impetus for this shift, but "his later work is full of references to teaching and children". In particular, Philosophical Investigations begins its investigation of "the essence of language" by discussing how children learn language.

And Wittgenstein is sometimes explicit about the connection; he once said that in considering the meaning of a word, it's helpful to ask, "How would one set about teaching a child to use this word?"

We all know that teaching can change the student, but foremost it changes the teacher. Wittgenstein seems to have understood that this is a part of the universal task of forming one's self.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

March 24, 2015 3:40 PM

Some Thoughts on How to Teach Programming Better

In How Stephen King Teaches Writing, Jessica Lahey asks Stephen King why we should teach grammar:

Lahey: You write, "One either absorbs the grammatical principles of one's native language in conversation and in reading or one does not." If this is true, why teach grammar in school at all? Why bother to name the parts?

King: When we name the parts, we take away the mystery and turn writing into a problem that can be solved. I used to tell them that if you could put together a model car or assemble a piece of furniture from directions, you could write a sentence. Reading is the key, though. A kid who grows up hearing "It don't matter to me" can only learn doesn't if he/she reads it over and over again.

There are at least three nice ideas in King's answer.

  • It is helpful to beginners when we can turn writing into a problem that can be solved. Making concrete things out of ideas in our head is hard. When we give students tools and techniques that help them to create basic sentences, paragraphs, and stories, we make the process of creating a bit more concrete and a bit less scary.

  • A first step in this direction is to give names to the things and ideas students need to think about when writing. We don't want students to memorize the names for their own sake; that's a step in the wrong direction. We simply need to have words for talking about the things we need to talk about -- and think about.

  • Reading is, as the old slogan tells us, fundamental. It helps to build knowledge of vocabulary, grammar, usage, and style in a way that the brain absorbs naturally. It creates habits of thought that are hard to undo later.

All of these are true of teaching programmers, too, in their own way.

  • We need ways to demystify the process and give students concrete steps they can take when they encounter a new problem. The design recipe used in the How to Design Programs approach is a great example. Naming recipes and their steps makes them a part of the vocabulary teachers and students can use to make programming a repeatable, reliable process.

  • I've often had great success by giving names to design and implementation patterns, and having those patterns become part of the vocabulary we use to discuss problems and solutions. I have a category for posts about patterns, and a fair number of those relate to teaching beginners. I wish there were more.

  • Finally, while it may not be practical to have students read a hundred programs before writing their first, we cannot underestimate the importance of students reading code in parallel with learning to write code. Reading lots of good examples is a great way for students to absorb ideas about how to write their own code. It also gives them the raw material they need to ask questions. I've long thought that Clancy's and Linn's work on case studies of programming deserves more attention.

Finding ways to integrate design recipes, patterns, and case studies is an area I'd like to explore more in my own teaching.
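To make the design-recipe idea concrete outside of Racket, here is a minimal sketch in Python of the recipe's named steps (data definition, signature, purpose, examples, template, body). The function, data definition, and cases are hypothetical, chosen purely for illustration, not taken from HtDP itself:

```python
# Data definition: a Temperature is a float (degrees Celsius).

# Signature: Temperature -> str
# Purpose: classify a temperature as "cold", "mild", or "hot".
def classify(t):
    # Template: the data definition is an interval of numbers,
    # so the body branches on the interval's boundaries.
    if t < 10.0:
        return "cold"
    elif t <= 25.0:
        return "mild"
    else:
        return "hot"

# Examples, written before the body and kept as tests:
assert classify(-5.0) == "cold"
assert classify(18.0) == "mild"
assert classify(30.0) == "hot"
```

Naming each step gives teacher and student shared vocabulary: "your template doesn't match your data definition" is a concrete, fixable diagnosis.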


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

March 10, 2015 4:45 PM

Learning to Program is a Loser's Game

After a long break from playing chess, I recently played a few games at the local club. Playing a couple of games in each of the last two weeks has reminded me that I am very rusty. I've made only two horrible blunders in four games, but I have made many small mistakes, the kind of errors that accumulate over time and make a position hard to defend, even untenable. Having played better in years past, I find these inaccuracies irksome.

Still, I managed to win all four games. As I've watched games at the club, I've noticed that most games are won by the player who makes the second-to-last blunder. Most of the players are novices, and they trade mistakes: one player leaves his queen en prise; later, his opponent launches an underprepared attack that loses a rook; then the first player trades pieces and leaves himself with a terrible pawn structure -- and so on, the players trading weak or bad moves until the position is lost for one of them.

My secret thus far has been one part luck, one part simple strategy: winning by not losing.

This experience reminded me of a paper called The Loser's Game, which in 1975 suggested that it was no longer possible for a fund manager to beat market averages over time because most of the information needed to do well was available to everyone. To outperform the market average, a fund manager has to profit from mistakes made by other managers, sufficiently often and by a sufficient margin to sustain a long-term advantage. Charles Ellis, the author, contrasts this with the bull markets of the 1960s. Then, managers made profits based on the specific winning investments they made; in the future, though, the best a manager could hope for was not to make the mistakes that other investors would profit from. Fund management had transformed from being a Winner's Game to a Loser's Game.

the cover of Extraordinary Tennis for the Ordinary Tennis Player

Ellis drew his inspiration from another world, too. Simon Ramo had pointed out the differences between a Winner's Game and a Loser's Game in Extraordinary Tennis for the Ordinary Tennis Player. Professional tennis players, Ramo said, win based on the positive actions they take: unreturnable shots down the baseline, passing shots out of the reach of a player at the net, service aces, and so on. We duffers try to emulate our heroes and fail... We hit our deep shots just beyond the baseline, our passing shots just wide of the sideline, and our killer serves into the net. It turns out that mediocre players win based on the errors they don't make. They keep the ball in play, and eventually their opponents make a mistake and lose the point.

Ramo saw that tennis pros are playing a Winner's Game, and average players are playing a Loser's Game. These are fundamentally different games, which reward different mindsets and different strategies. Ellis saw the same thing in the investing world, but as part of a structural shift: what had once been a Winner's Game was now a Loser's Game, to the consternation of fund managers whose mindset is finding the stocks that will earn them big returns. The safer play now, Ellis says, is to minimize mistakes. (This is good news for us amateur investors!)

This is the same phenomenon I've been seeing at the chess club recently. The novices there are still playing a Loser's Game, where the greatest reward comes to those who make the fewest and smallest mistakes. That's not very exciting, especially for someone who fancies herself to be Adolf Anderssen or Mikhail Tal in search of an immortal game. The best way to win is to stay alive, making moves that are as sound as possible, and wait for the swashbuckler across the board from you to lose the game.

What does this have to do with learning to program? I think that, in many respects, learning to program is a Loser's Game. Even a seemingly beginner-friendly programming language such as Python has an exacting syntax compared to what beginners are used to. The semantics seem foreign, even opaque. It is easy to make a small mistake that chokes the compiler, which then spews an error message that overwhelms the new programmer. The student struggles to fix the error, only to find another error waiting somewhere else in the code. Or he introduces a new error while eliminating the old one, which makes even debugging seem scary. Over time, this can dishearten even the heartiest beginner.

What is the best way to succeed? As in all Loser's Games, the key is to make fewer mistakes: follow examples closely, pay careful attention to syntactic details, and otherwise not stray too far from what you are reading about and using in class. Another path to success is to make the mistakes smaller and less intimidating: take small steps, test the code frequently, and grow solutions rather than write them all at once. It is no accident that the latter sounds like XP and other agile methods; they help to guard us from the Loser's Game and enable us to make better moves.
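The advice above -- take small steps, test frequently, grow the solution -- can be made concrete with a tiny sketch in Python. The function and its cases are hypothetical, invented for illustration; the point is that each case is added only after the tests for the previous cases pass:

```python
# Growing a solution in small, tested steps: each version of the
# function passed its tests before the next case was added.

# Step 1: the simplest case -- an empty string has no words.
# Step 2: a single word.
# Step 3: words separated by arbitrary whitespace.
def word_count(text):
    # str.split() with no argument splits on runs of any whitespace
    # and returns [] for the empty string, covering all three cases.
    return len(text.split())

# Tests accumulated along the way, re-run after every small change:
assert word_count("") == 0
assert word_count("hello") == 1
assert word_count("to be  or\tnot") == 4
```

Each mistake surfaces immediately, while the last change is small enough to understand, which is exactly what keeps the errors small and the beginner in the game.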

Just as playing the Loser's Game in tennis or investing calls for a different mindset, so, too, does learning to program. Some beginners seem to grok programming quickly and move on to designing and coding brilliantly, but most of us have to settle in for a period of discipline and growth. It may not be exciting to follow examples closely when we want to forge ahead quickly to big ideas, but the alternative is to take big shots and let the compiler win all the battles.

Unlike tennis and Ellis's view of stock investing, programming offers us hope: Nearly all of us can make the transition from the Loser's Game to the Winner's Game. We are not destined to forever play it safe. With practice and time, we can develop the discipline and skills necessary to making bold, winning moves. We just have to be patient and put time and energy into the process of becoming less mistake-prone. By adopting the mindset needed to succeed in a Loser's Game, we can eventually play the Winner's Game.

I'm not too sure about the phrases "Loser's Game" and "Winner's Game", but I think that this analogy can help novice programmers. I'm thinking of ways that I can use it to help my students survive until they can succeed.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

March 02, 2015 4:14 PM

Knowing When We Don't Understand

Via a circuitous walk of web links, this morning I read an old piece called Two More Things to Unlearn from School, which opens:

I suspect the *most* dangerous habit of thought taught in schools is that even if you don't really understand something, you should parrot it back anyway. One of the most fundamental life skills is realizing when you are confused, and school actively destroys this ability -- teaches students that they "understand" when they can successfully answer questions on an exam, which is very very very far from absorbing the knowledge and making it a part of you. Students learn the habit that eating consists of putting food into mouth; the exams can't test for chewing or swallowing, and so they starve.

Schools don't teach this habit explicitly, but they allow it to develop and grow without check. This is one of the things that makes computer science hard for students. You can only get so far by parroting back answers you don't understand. Eventually, you have to write a program or prove an assertion, and all the memorization of facts in the world can't help you.

That said, though, I think students know very well when they don't understand something. Many of my students struggle with the things they don't understand. But, as Yudkowsky says, they face the time constraints of a course fitting into a fifteen-week window and of one course competing with others for their time. The habit they have developed over the years is to think that, in the end, not understanding is okay, or at least an acceptable outcome of the course. As long as they get the grade they need to move on, they'll have another chance to get it later. And maybe they won't ever need to understand this thing again...

One of the best things we can do for students is to ask them to make things and to discuss with them the things they made, and how they made them. This sort of intellectual work requires an understanding deeper than the merely factual. It also forces them to consider the choices and trade-offs that characterize real knowledge.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 27, 2015 3:37 PM

Bad Habits and Haphazard Design

With an expressive type system for its teaching
languages, HtDP could avoid this problem to some
extent, but adding such rich types would also take
the fun out of programming.

As we approach the midpoint of the semester, Matthias Felleisen's Turing Is Useless strikes a chord in me. My students have spent the last two months learning a little Racket, a little functional programming, and a little about how to write data-driven recursive programs. Yet bad habits learned in their previous courses, or at least unchecked by what they learned there, have made the task harder for many of them than it needed to be.

The essay's title plays off the Church-Turing thesis, which asserts that all programming languages have the same expressive power. This powerful claim is not good news for students who are learning to program, though:

Pragmatically speaking, the thesis is completely useless at best -- because it provides no guideline whatsoever as to how to construct programs -- and misleading at worst -- because it suggests any program is a good program.

With a Turing-universal language, a clever student can find a way to solve any problem with some program. Even uninspired but persistent students can tinker their way to a program that produces the right answers. Unfortunately, they don't understand that the right answers aren't the point; the right program is. Trolling StackOverflow will get them a program, but too often the students don't understand whether it is a good or bad program in their current situation. It just works.

I have not been as faithful to the HtDP approach this semester as I probably should have been, but I share its desire to help students to design programs systematically. We have looked at design patterns that implement specific strategies, not language features. Each strategy focuses on the definition of the data being processed and the definition of the value being produced. This has great value for me as the instructor, because I can usually see right away why a function isn't working for the student the way he or she intended: they have strayed from the data as defined by the problem.

This is also of great value to some of my students. They want to learn how to program in a reliable way, and having tools that guide their thinking is more important than finding yet another primitive Racket procedure to try. For others, though, "garage programming" is good enough; they just want to get the job done right now, regardless of which muscles they use. Design is not part of their attitude, and that's a hard habit to break. How use doth breed a habit in a student!

Last semester, I taught intro CS from what Felleisen calls a traditional text. Coupling that experience with my experience so far this semester, I'm thinking a lot these days about how we can help students develop a design-centered attitude at the outset of their undergrad courses. I have several blog entries in draft form about last semester, but one thing that stands out is the extent to which every step in the instruction is driven by the next cool programming construct. Put them all on the table, fiddle around for a while, and you'll make something that works. One conclusion we can draw from the Church-Turing thesis is that this isn't surprising. Unfortunately, odds are any program created this way is not a very good program.

~~~~~

(The sentence near the end that sounds like Shakespeare is. It's from The Two Gentlemen of Verona, with a suitable change in noun.)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

February 24, 2015 3:03 PM

Why Can't I Use flatten on This Assignment?

The team is in the weight room. Your coach wants you to work on your upper-body strength and so has assigned you a regimen that includes bench presses and exercises for your lats and delts. You are on the bench, struggling.

"Hey, coach, this is pretty hard. Can I use my legs to help lift this weight?"

The coach shakes his head and wonders what he is going to do with you.

Using your legs is precisely not the point. You need to make your other muscles stronger. Stick to the bench.

~~~~

If athletics isn't your thing, we can tell this story with a piano. Or a pen and paper. Or a chessboard.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

February 21, 2015 11:11 AM

Matthias, Speak to My Students

I love this answer from Matthias Felleisen on the Racket users mailing list today:

The book emphasizes systematic design. You can solve this specific problem with brute force regular-expression matching in a few lines of code. The question is whether you want to learn to solve problems or copy code from mailing lists and StackOverflow without understanding what's really going on.

Students today aren't much different from the students in the good old days. But the tools and information so readily available to them make it a lot easier for them to indulge their baser urges. In the good old days, we had to work hard to get good grades and not understand what we were doing.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 05, 2015 3:57 PM

If You Want to Become a Better Writer...

... write for undergraduates. Why?

Last fall, Steven Pinker took a stab at explaining why academics stink at writing. He hypothesizes that cognitive science and human psychology explain much of the problem. Experts often find it difficult to imagine that others do not know what experts know, which Pinker calls the curse of knowledge. They work around the limitations of short-term memory by packaging ideas into bigger and more abstract units, often called chunking. Finally, they tend to think about the things they understand well in terms of how they use them, not in terms of what they look like, a tendency called functional fixity.

Toward the end of the article, Pinker summarizes:

The curse of knowledge, in combination with chunking and functional fixity, helps make sense of the paradox that classic style is difficult to master. What could be so hard about pretending to open your eyes and hold up your end of a conversation? The reason it's harder than it sounds is that if you are enough of an expert in a topic to have something to say about it, you have probably come to think about it in abstract chunks and functional labels that are now second nature to you but are still unfamiliar to your readers--and you are the last one to realize it.

Most academics aren't trying to write bad prose. They simply don't have enough practice writing good prose.

When Calvin explained to Hobbes, "With a little practice, writing can be an intimidating and impenetrable fog," he got it backward. Fog comes easily to writers; it's the clarity that requires practice. The naïve realism and breezy conversation in classic style are deceptive, an artifice constructed through effort and skill.

Wanting to write better is not sufficient. Exorcising the curse requires writers to learn new skills and to practice. One of the best ways to see if the effort is paying off is to get feedback: show the work to real readers and see if they can follow it.

That's where undergraduates come in. If you want to become a better writer or a better speaker, teach undergraduates regularly. They are about as far removed as you can get from an expert while still having an interest in the topic and some inclination to learn more about it.

When I write lecture notes for my undergrads, I have to eliminate as much jargon as possible. I have to work hard to put topics into the best order for learners, not for colleagues who are also expert in the area. I have to find stories to illuminate ideas, and examples to illustrate them. When I miss the intended mark on any of these attempts, my students will let me know, either through their questions or through their inability to perform as I expected. And then I try again.

My lecture notes are far from perfect, but they are always much better after a few iterations teaching a course than they are the first time through. The weakest parts tend to be for material I'm adding to the course for the first time; the best parts tend to be revisions of existing material. These facts are no surprise to any writer or presenter, of course. Repetition and effort are how we make things better.

Even if you do not consider yourself a teacher by trade, if you want to improve your ability to communicate science, teach undergrads. Write lecture notes and explanations. Present to live students and monitor lab sessions. The students will reward you with vigorous feedback. Besides, they are good people to get to know.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

January 28, 2015 3:38 PM

The Relationship Between Coding and Literacy

Many people have been discussing Chris Granger's recent essay Coding is not the New Literacy, and most seem to approve of his argument. Reading it brought to my mind this sentence from Alan Kay in VPRI Memo M-2007-007a, The Real Computer Revolution Hasn't Happened Yet:

Literacy is not just being able to read and write, but being able to deal fluently with the kind of ideas that are important enough to write about and discuss.

Literacy requires both the low-level skills of reading and writing and the higher-order capacity for using them on important ideas.

That is one thing that makes me uneasy about Granger's argument. It is true that teaching people only low-level coding skills won't empower them if they don't know how to use them fluently to build models that matter. But neither will teaching them how to build models without giving them access to the programming skills they need to express their ideas beyond what some tool gives them.

Like Granger, though, I am also uneasy about many of the learn-to-code efforts. Teaching people enough Javascript or Ruby to implement a web site out of the box skips past the critical thinking skills that people need to use computation effectively in their world. They may be "productive" in the short term, but they are also likely to hit a ceiling pretty soon. What then? My guess: they become frustrated and stop coding altogether.

the Scratch logo

We sometimes do a better job introducing programming to kids, because we use tools that allow students to build models they care about and can understand. In the VPRI memo, Kay describes experiences teaching elementary school students to use eToys to model physical phenomena. In the end, they learn physics and the key ideas underlying calculus. But they also learn the fundamentals of programming, in an environment that opens up into Squeak, a flavor of Smalltalk.

I've seen teachers introduce students to Scratch in a similar way. Scratch is a drag-and-drop programming environment, but it really is an open-ended and lightweight modeling tool. Students can learn low-level coding skills and higher-level thinking skills in tandem.

That is the key to making Granger's idea work in the best way possible. We need to teach people how to think about and build models in a way that naturally evolves into programming. I am reminded of another quote from Alan Kay that I heard back in the 1990s. He reminded us that kindergarteners learn and use the same language that Shakespeare used. It is possible for their fluency in the language to grow to the point where they can comprehend some of the greatest literature ever created -- and, if they possess some of Shakespeare's genius, to write their own great literature. English starts small for children, and as they grow, it grows with them. We should aspire to do the same thing for programming.

the logo for Eve

Granger reminds us that literacy is really about composition and comprehension. But it doesn't do much good to teach people how to solidify their thoughts so that they can be written if they don't know how to write. You can't teach composition until your students know basic reading and writing.

Maybe we can find a way to teach people how to think in terms of models and how to implement models in programs at the same time, in a language system that grows along with their understanding. Granger's latest project, Eve, may be a step in that direction. There are plenty of steps left for us to take in the direction of languages like Scratch, too.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 19, 2015 2:14 PM

Beginners, Experts, and Possibilities

Last Thursday, John Cook tweeted:

Contrary to the Zen proverb, there may be more possibilities in the expert's mind than in the beginner's.

This summed up nicely one of the themes in my Programming Languages course that afternoon. Some students come into the course knowing essentially only one language, say Python or Ada. Others come knowing several languages well, including their intro language, Java, C, and maybe a language they learned on the job, such as Javascript or Scala.

Which group do you think has a larger view of what a programming language can be? The more knowledgeable group, to be sure. This is especially true when their experience includes languages from different styles: procedural, object-oriented, functional, and so on.

Previous knowledge affects expectations. Students coming directly out of their first year courses are more likely to imagine that all languages are similar to what they already know. Nothing in their experience contradicts that idea.

Does this mean that the Zen notion of beginner's mind is wrongheaded? Not at all. I think an important distinction can be made between analysis and synthesis. In a context where we analyze languages, broad experience is more valuable than lack of experience, because we are able to bring to our seeing a wider range of possibilities. That's certainly my experience working with students over the years.

However, in a context where we create languages, broad experience can be an impediment. When we have seen many different languages, it can be difficult to create something that looks much different from the languages we've already seen. Something in our minds seems to pull us toward an existing language that already solves the constraint we are struggling with. Someone else has already solved this problem; their solution is probably best.

This is also my experience working with students over the years. My freshmen will almost always come up with a fresher language design than my seniors. The freshmen don't know much about languages yet, and so their minds are relatively unconstrained. (Fantastically so, sometimes.) The seniors often seem to end up with something that is superficially new but, at its core, thoroughly predictable.

The value of "Zen mind, beginner's mind" also follows a bit from the distinction between expertise and experience. Experts typically reach a level where they solve problems using heuristics. These patterns and shortcuts are efficient, but they also tend to be "compiled" and not all that open to critical examination. We create best when we are able to modify, rearrange, and discard, and that's harder to do when our default mode of thinking is in pre-compiled units.

It should not bother us that useful adages and proverbs contradict one another. The world is complex. As Bokononists say, Busy, busy, busy.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

January 18, 2015 10:26 AM

The Infinite Horizon

In Mathematics, Live: A Conversation with Laura DeMarco and Amie Wilkinson, Amie Wilkinson recounts the pivotal moment when she knew she wanted to be a mathematician. Insecure about her abilities in mathematics, unsure about what she wanted to do for a career, and with no encouragement, she hadn't applied to grad school. So:

I came back home to Chicago, and I got a job as an actuary. I enjoyed my work, but I started to feel like there was a hole in my existence. There was something missing. I realized that suddenly my universe had become finite. Anything I had to learn for this job, I could learn eventually. I could easily see the limits of this job, and I realized that with math there were so many things I could imagine that I would never know. That's why I wanted to go back and do math. I love that feeling of this infinite horizon.

After having written software for an insurance company during the summers before and after my senior year in college, I knew all too well the "hole in my existence" that Wilkinson talks about, the shrinking universe of many industry jobs. I was deeply interested in the ideas I had found in Gödel, Escher, Bach, and in the idea of creating an intelligent machine. There seemed no room for those ideas in the corporate world I saw.

I'm not sure when the thought of graduate school first occurred to me, though. My family was blue collar, and I didn't have much exposure to academia until I got to Ball State University. Most of my friends went out to get jobs, just like Wilkinson. I recall applying for a few jobs myself, but I never took the job search all that seriously.

At least some of the credit belongs to one of my CS professors, Dr. William Brown. Dr. Brown was an old IBM guy who seemed to know so much about how to make computers do things, from the lowest-level details of IBM System/360 assembly language and JCL up to the software engineering principles needed to write systems software. When I asked him about graduate school, he talked to me about how to select a school and a Ph.D. advisor. He also talked about the strengths and weaknesses of my preparation, and let me know that even though I had some work to do, I would be able to succeed.

These days, I am lucky even to have such conversations with my students.

For Wilkinson, DeMarco, and me, academia was a natural next step in our pursuit of the infinite horizon. But I now know that we are fortunate to work in disciplines where a lot of the interesting questions are being asked and answered by people working in "the industry". I watch with admiration as many of my colleagues do amazing things while working for companies large and small. Computer science offers so many opportunities to explore the unknown.

Reading Wilkinson's recollection brought a flood of memories to mind. I'm sure I wasn't alone in smiling at her nod to finite worlds and infinite horizons. We have a lot to be thankful for.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Teaching and Learning

January 12, 2015 10:26 AM

WTF Problems and Answers for Questions Unasked

Dan Meyer quotes Scott Farrand in WTF Math Problems:

Anything that makes students ask the question that you plan to answer in the lesson is good, because answering questions that haven't been asked is inherently uninteresting.

My challenge this semester: getting students to ask questions about the programming languages they use and how they work. I myself have many questions about languages! My experience teaching our intro course last semester reminded me that what interests me (and textbook authors) doesn't always interest my students.

If you have any WTF? problems for a programming languages course, please share.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 09, 2015 3:40 PM

Computer Science Everywhere, Military Edition

Military Operations Orders are programs that are executed by units. Code re-use and other software engineering principles applied regularly to these.

An alumnus of my department, a CS major-turned-military officer, wrote those lines in an e-mail responding to my recent post, A Little CS Would Help a Lot of College Grads. Contrary to what many people might imagine, he has found what he learned in computer science to be quite useful to him as an Army captain. And he wasn't even a programmer:

One of the biggest skills I had over my peers was organizing information. I wasn't writing code, but I was handling lots of data and designing systems for that data. Organizing information in a way that was easy to present to my superiors was a breeze and having all the supporting data easily accessible came naturally to me.

Skills and principles from software engineering and project development apply to systems other than software. They also provide a vocabulary for talking about ideas that non-programmers encounter every day:

I did introduce my units to the terms border cases, special cases, and layers of abstraction. I cracked a smile every time I heard those terms used in a meeting.

Excel may not be a "real programming language", but knowing the ways in which it is a language can make managers of people and resources more effective at what they do.

For more about how a CS background has been useful to this officer, check out CS Degree to Army Officer, a blog entry that expands on his experiences.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

December 31, 2014 10:15 AM

Reinventing Education by Reinventing Explanation

One of the more important essays I read in 2014 was Michael Nielsen's Reinventing Explanation. In it, Nielsen explores how we might design media that help us explain scientific ideas better than we are able with our existing tools.

... it's worth taking non-traditional media seriously not just as a vehicle for popularization or education, which is how they are often viewed, but as an opportunity for explanations which can be, in important ways, deeper.

This essay struck me deep. Nielsen wants us to consider how we might take what we have learned using non-traditional media to popularize and educate and use it to think about how to explain more deeply. I think that learning how to use non-traditional media to explain more deeply will help us change the way we teach and learn.

In too many cases, new technologies are used merely as substitutes for old technology. The web has led to an explosion of instructional video aimed at all levels of learners. No matter how valuable these videos are, most merely replace reading a textbook or a paper. But computational technology enables us to change the task at hand and even redefine what we do. Alan Kay has been telling this story for decades, pointing us to the work of Ivan Sutherland and many others from the early days of computing.

Nielsen points to Bret Victor as an example of someone trying to develop tools that redefine how we think. As Victor himself says, he is following in the grand tradition of Kay, Sutherland, et al. Victor's An Ill-Advised Personal Note about "Media for Thinking the Unthinkable" is an especially direct telling of his story.

Vi Hart is another. Consider her recent Parable of the Polygons, created with Nicky Case, which explains dynamically how local choices can create systemic bias. This simulation uses computation to help people think differently about an idea they might not understand as viscerally from a traditional explanation. Hart has a long body of work using visualization to explain differently, and the introduction of computing extends the depth of her approach.

Over the last few weeks, I have felt myself being pulled by Nielsen's essay and the example of people such as Victor and Hart to think more about how we might design media that help us to teach and explain scientific ideas more deeply. Reinventing explanation might help us reinvent education in a way that actually matters. I don't have a research agenda yet, but looking again at Victor's work is a start.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 28, 2014 11:12 AM

A Little CS Would Help a Lot of College Grads

I would love to see more CS majors, but not everyone should major in CS. I do think that most university students could benefit from learning a little programming. There are plenty of jobs not only for CS and math grads, but also for other majors who have CS and math skills:

"If you're an anthropology major and you want to get a marketing job, well, guess what? The toughest marketing jobs to fill require SQL skills," Sigelman says. "If you can ... along the peripheries of your academic program accrue some strong quantitative skills, you'll still have the advantage [in the job market]." Likewise, some legal occupations (such as intellectual property law) and maintenance and repair jobs stay open for long periods of time, according to the Brookings report, if they require particular STEM skills.

There is much noise these days about the importance of STEM, both for educated citizens and for jobs, jobs, jobs. STEM isn't an especially cohesive category, though, as the quoted Vox article reminds us, and even when we look just at economic opportunity, it misleads. We don't need more college science graduates from every STEM discipline. We do need more people with the math and CS skills that now pervade the workplace, regardless of discipline. As Kurtzleben says in the article, "... characterizing these skill shortages as a broad STEM crisis is misleading to students, and has distorted the policy debate."


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 27, 2014 8:47 AM

Let's Not Forget: CS 1 Is Hard For Most Students

... software is hard. It's harder than
anything else I've ever had to do.
-- Donald Knuth

As students were leaving my final CS 1 lab session of the semester, I overheard two talking about their future plans. One student mentioned that he was changing his major to actuarial science. I thought, wow, that's a tough major. How is a student who is struggling with basic programming going to succeed there?

When I checked on his grades, though, I found that he was doing fine in my course, about average. I also remembered that he had enjoyed best the programming exercises that computed terms of infinite arithmetic series and other crazy mathematical values that his classmates often found impenetrable. Maybe actuarial science, even with some hard math, will be a good fit for him.

It really shouldn't surprise us that some students try computer science and decide to major in something else, even something that looks hard to most people. Teaching CS 1 again this semester after a long break reminded me just how much we expect from the students in our introductory course:

  • Details. Lots and lots of details. Syntax. Grammar. Vocabulary, both in a programming language and about programming more generally. Tools for writing, editing, compiling, and running programs.

  • Experimentation. Students have to design and execute experiments in order to figure out how language constructs work and to debug the programs they write. Much of what they learn is by trial and error, and most students have not yet developed skills for doing that in a controlled fashion.

  • Design. Students have to decompose problems and combine parts into wholes. They have to name things. They have to connect the names they see with ideas from class, the text, and their own experience.

  • Abstraction. Part of the challenge in design comes from abstraction, but abstract ideas are everywhere in learning about CS and how to program. Variables, choices, loops and recursion, functions and arguments and scope, ... all come not just as concrete forms but also as theoretical notions. These notions can sometimes be connected to the students' experience of the physical world, but the computing ideas are often just different enough to disorient the student. Other CS abstractions are so different as to appear unique.

In a single course, we expect students to perform tasks in all three of these modes, while mastering a heavy load of details. We expect them to learn by deduction, induction, and abduction, covering many abstract ideas and many concrete details. Many disciplines have challenging first courses, but CS 1 requires an unusual breadth of intellectual tools.

Yes, we can improve our students' experience with careful pedagogy. Over the last few decades we've seen many strong efforts. And yes, we can help students through the process with structural support, emotional support, and empathy. In the end, though, we must keep this in mind: CS 1 is going to be a challenge for most students. For many, the rewards will be worth the struggle, but that doesn't mean it won't take work, patience, and persistence along the way -- by both the students and the teachers.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 18, 2014 2:33 PM

Technical Problems Are The Easy Ones

Perhaps amid the daily tribulations of a software project, Steven Baker writes

Oy. A moving goal line, with a skeleton crew, on a shoestring budget. Technical problems are the easy ones.

And here we all sit complaining about monads and Java web frameworks...

My big project this semester has not been developing software but teaching beginners to develop software, in our intro course. There is more to Intro than programming, but for many students the tasks of learning a language and trying to write programs come to dominate most everything else. More on that soon.

Yet even with this different sort of project, I feel much as Baker does. Freshmen have a lot of habits to learn and un-learn, habits that go well beyond how they do Python. My course competes with several others for the students' attention, not to mention with their jobs and their lives outside of school. They come to class with a lifetime of experience and knowledge, as well some surprising gaps in what they know. A few are a little scared by college, and many worry that CS won't be a good fit for them.

The technical problems really are the easy ones.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

December 02, 2014 2:53 PM

Other People's Best Interests

Yesterday I read:

It's hard for me to figure out people voting against their own self-interests.

I'm not linking to the source, because it wouldn't be fair to single the speaker out, especially when so many other things in the article are spot-on. Besides, I hear many different people express this sentiment from time to time, people of various political backgrounds and cultural experiences. It seems a natural human reaction when things don't turn out the way we think they should.

Here is something I've learned from teaching and from working with teams doing research and writing software:

If you find yourself often thinking that people aren't acting in their own self-interests, maybe you don't know what their interests are.

It certainly may be true that people are not acting in what you think is their own self-interest. But it's rather presumptuous to think that you know other people's best interests better than they do.

Whenever I find myself in this position, I have some work to do. I need to get to know my students, or my colleagues, or my fellow citizens, better. In cases where it's really true, and I have strong reason to think they aren't acting in their own best interest, I have an opportunity to help them learn. This kind of conversation calls for great care, though, because often we are dealing with people's identities and their most deeply-held beliefs.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading, Teaching and Learning

November 23, 2014 8:50 AM

Supply, Demand, and K-12 CS

When I meet with prospective students and their parents, we often end up discussing why most high schools don't teach computer science. I tell them that, when I started as a new prof here, about a quarter of incoming freshmen had taken a year of programming in high school, and many other students had had the opportunity to do so. My colleagues and I figured that this percentage would go way up, so we began to think about how we might structure our first-year courses when most or all students already knew how to program.

However, the percentage of incoming students with programming experience didn't go up. It went way down. These days, about 10% of our freshman know how to program when they start our intro course. Many of those learned what they know on their own. What happened, today's parents ask?

A lot of things happened, including the dot-com bubble, a drop in the supply of available teachers, a narrowing of the high school curriculum in many districts, and the introduction of high-stakes testing. I'm not sure how much each contributed to the change, or whether other factors may have played a bigger role. Whatever the causes, the result is that our intro course still expects no previous programming experience.

Yesterday, I saw a post by a K-12 teacher on the Racket users mailing list that illustrates the powerful pull of economics. He is leaving teaching for the software development industry, though reluctantly. "The thing I will miss the most," he says, "is the enjoyment I get out of seeing youngsters' brains come to life." He also loves seeing them succeed in the careers that knowing how to program makes possible. But in that success lies the seed of his own career change:

Speaking of my students working in the field, I simply grew too tired of hearing about their salaries which, with a couple of years experience, was typically twice what I was earning with 25+ years of experience. Ultimately that just became too much to take.

He notes that college professors probably know the feeling, too. The pull must be much stronger on him and his colleagues, though; college CS professors are generally paid much better than K-12 teachers. A love of teaching can go only so far. At one level, we should probably be surprised that anyone who knows how to program well enough to teach thirteen- or seventeen-year-olds to do it stays in the schools. If not surprised, we should at least be deeply appreciative of the people who do.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

November 20, 2014 3:23 PM

When I Procrastinate, I Write Code

I procrastinated one day with my intro students in mind. This is the bedtime story I told them as a result. Yes, I know that I can write shorter Python code to do this. They are intro students, after all.

~~~~~

Once upon a time, a buddy of mine, Chad, sent out a tweet. Chad is a physics prof, and he was procrastinating. How many people would I need to have in class, he wondered, to have a 50-50 chance that my class roster will contain people whose last names start with every letter of the alphabet?

    Adams
    Brown
    Connor
    ...
    Young
    Zielinski

This is a lot like the old trivia about how we only need to have 23 people in the room to have a 50-50 chance that two people share a birthday. The math for calculating that is straightforward enough, once you know it. But last names are much more unevenly distributed across the alphabet than birthdays are across the days of the year. To do this right, we need to know rough percentages for each letter of the alphabet.
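The birthday math itself can be sketched in a few lines of Python. (This is a side calculation, not part of the surname program below; the function name is mine.)

```python
def prob_shared_birthday(n):
    """Probability that at least two of n people share a birthday,
    assuming 365 equally likely birthdays."""
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365   # next person misses all earlier birthdays
    return 1 - p_all_distinct

print(prob_shared_birthday(22))   # ~0.476 -- not quite even odds
print(prob_shared_birthday(23))   # ~0.507 -- better than 50-50
```

Twenty-three is the smallest n that crosses the 50% line, which is why it shows up in the trivia.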

I can procrastinate, too. So I surfed over to the US Census Bureau, rummaged around for a while, and finally found a page on Frequently Occurring Surnames from the Census 2000. It provides a little summary information and then links to a couple of data files, including a spreadsheet of data on all surnames that occurred at least 100 times in the 2000 census. This should, I figure, cover enough of the US population to give us a reasonable picture of how peoples' last names are distributed across the alphabet. So I grabbed it.

(We live in a wonderful time. Between open government, open research, and open source projects, we have access to so much cool data!)

The spreadsheet has columns with these headers:

    name,rank,count,prop100k,cum_prop100k,      \
                    pctwhite,pctblack,pctapi,   \
                    pctaian,pct2prace,pcthispanic

The first and third columns are what we want. After thirteen weeks, we know how to compute the percentages we need: Use the running total pattern to count the number of people whose name starts with 'a', 'b', ..., 'z', as well as how many people there are altogether. Then loop through our collection of letter counts and compute the percentages.

Now, how should we represent the data in our program? We need twenty-six counters for the letter counts, and one more for the overall total. We could make twenty-seven unique variables, but then our program would be so-o-o-o-o-o long, and tedious to write. We can do better.

For the letter counts, we might use a list, where slot 0 holds a's count, slot 1 holds b's count, and so on, through slot 25, which holds z's count. But then we would have to translate letters into slots, and back, which would make our code harder to write. It would also make our data harder to inspect directly.

    ----  ----  ----  ...  ----  ----  ----    slots in the list
     0     1     2    ...   23    24    25     indices into the list

The downside of this approach is that lists are indexed by integer values, while we are working with letters. Python has another kind of data structure that solves just this problem, the dictionary. A dictionary maps keys onto values. The keys and values can be of just about any data type. What we want to do is map letters (characters) onto numbers of people (integers):

    ----  ----  ----  ...  ----  ----  ----    slots in the dictionary
    'a'   'b'   'c'   ...  'x'   'y'   'z'     indices into the dictionary

With this new tool in hand, we are ready to solve our problem. First, we build a dictionary of counters, initialized to 0.

    count_all_names = 0
    total_names = {}
    for letter in 'abcdefghijklmnopqrstuvwxyz':
        total_names[letter] = 0

(Note two bits of syntax here. We use {} for dictionary literals, and we use the familiar [] for accessing entries in the dictionary.)

Next, we loop through the file and update the running total for the corresponding letter, as well as the counter of all names.

    source = open('app_c.csv', 'r')
    for entry in source:
        field  = entry.split(',')        # split the line
        name   = field[0].lower()        # pull out lowercase name
        letter = name[0]                 # grab its first character
        count  = int( field[2] )         # pull out number of people
        total_names[letter] += count     # update letter counter
        count_all_names     += count     # update global counter
    source.close()

Finally, we print the letter → count pairs.

    for (letter, count_for_letter) in total_names.items():
        print(letter, '->', count_for_letter/count_all_names)

(Note the items method for dictionaries. It returns a collection of key/value tuples. Recall that tuples are simply immutable lists.)

We have converted the data file into the percentages we need.

    q -> 0.002206197888442366
    c -> 0.07694634659082318
    h -> 0.0726864447688946
    ...
    f -> 0.03450702533438715
    x -> 0.0002412718532764804
    k -> 0.03294646311104032

(The entries are not printed in alphabetical order. Can you find out why?)

I dumped the output to a text file and used Unix's built-in sort to create my final result. I tweeted Chad: "Here are your percentages. You do the math."
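We could have skipped the trip through Unix and alphabetized in Python itself, with the built-in sorted function. A quick sketch, using made-up counts in place of the real census totals:

```python
# Toy stand-ins for the dictionaries the program builds from the census file.
total_names     = {'q': 2206, 'c': 76946, 'h': 72686}
count_all_names = sum(total_names.values())

# Iterating over sorted(dictionary) yields its keys in alphabetical order.
for letter in sorted(total_names):
    print(letter, '->', total_names[letter] / count_all_names)
```

This prints the c, h, and q lines in that order, no external sort required.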

Hey, I'm a programmer. When I procrastinate, I write code.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

November 17, 2014 3:34 PM

Empathy for Teachers Doing Research

I appreciate that there are different kinds of university, with different foci. Still, I couldn't help but laugh while reading Doug McKee's Balancing Research and Teaching at an Elite University. After acknowledging that he would "like to see the bar for teaching raised at least a little bit", McKee reminds us why they can't raise it too far: Faculty who spend too much time on their teaching will produce lower-quality research, and this will cause the prestige of the university to suffer. But students should know what's going on:

I'd also like to see more honesty about the primacy of research in the promotion process. This would make the expectations of the undergraduates more reasonable and give them more empathy for the faculty (especially the junior faculty).

Reading this, I laughed out loud, as my first thought was, "You're kidding, right?" I'm all for more empathy for professors; teaching is a tough job. But I doubt "My teaching isn't very good because I spend all my time on research, and the university is happy with that." is going to elicit much sympathy.

Then again, I may be seeing the world from the perspective of a different sort of university. Many students are at elite universities in large part for the prestige of the school. Faculty research is what maintains the university's prestige. So maybe those students view subpar classroom teaching simply as part of the cost of doing business.

Then there was this:

Undergraduates may not always get a great teacher in the classroom, but they are always learning from someone at the cutting edge of their discipline, and there is no substitute for that.

Actually, there is. It's called good teaching. Being talked at by an all-research, no-teaching Nobel laureate for fifty minutes a day, three days a week, may score high on the prestige meter, but it won't teach you how to think. Then again, with high enough admission standards, perhaps the students can take care of themselves.

I don't mean to sound harsh. Many research professors are also excellent teachers, and in some courses, being at the forefront of the discipline's knowledge is a great advantage. And I do feel empathy for faculty who find themselves in a position where the reward structure effectively forces them to devote little or no time to their teaching. Teaching well takes time.

But let's also keep in mind that those same faculty members chose to work at an elite university and, unlike many of their students, the faculty know what that means for the balance between research and teaching in their careers.

Over the last decade or two, funding for public universities in most states has fallen dramatically compared to the cost of instruction. I hope state legislatures eventually remember that great teaching takes time, and take that into account when they are allocating resources among the institutions that focus on research and those that focus on students.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

November 09, 2014 5:40 PM

Storytelling as a Source of Insight

Last week, someone tweeted a link to Learning to Learn, a decade-old blog entry by Chris Sells. His lesson is two-fold. To build a foundation of knowledge, he asks "why?" a lot. That is a good tactic in many settings.

He learned a method for gaining deeper insight by accident. He was tabbed to teach a five-day course. When he reached Day 5, he realized that he didn't know the material well enough -- he didn't know what to say. The only person who could take over for him was out of town. So he hyperventilated for a while, then decided to fake it.

That's when learning happened:

As I was giving the slides that morning on COM structured storage (a particularly nasty topic in COM), I found myself learning how it worked as I told the story to my audience. All the studying and experimentation I'd done had given me the foundation, but not the insights. The insights I gained as I explained the technology. It was like fireworks. I went through those topics in a state of bliss as all of what was really going on underneath exploded in my head. That moment taught me the value of teaching as a way of learning that I've used since. Now, I always explain stuff as a way to gain insight into whatever I'm struggling with, whether it's through speaking or writing. For me, and for lots of others, story telling is where real insight comes from.

I have been teaching long enough to know that it doesn't always go this smoothly. Often, when I don't know what to say, I don't do a very good job. Occasionally, I fail spectacularly. But it happens surprisingly often just as Sells describes. If we have a base of knowledge, the words come, and in explaining an idea for the first time -- or the hundredth -- we come to understand it in a new, deeper way. Sometimes, we write to learn. Other times, we teach.

We can help our students benefit from this phenomenon, too. Of course, we ask them to write. But we can go further with in-class activities in which students discuss topics and explain them to one another. Important cognitive processing happens when students explain a concept that doesn't happen when they study on their own.

I think the teach-to-learn phenomenon is at play in the "why?" tactic we use to learn in the first place. The answer to a why is a reason, an explanation. Asking "why?" is the beginning of telling stories to ourselves.

Story telling is, indeed, a source of deeper insight.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

November 07, 2014 2:12 PM

Three Students, Three Questions

In the lab, Student 1 asks a question about the loop variable on a Python for statement. My first thought is, "How can you not know that? We are in Week 11." I answer, he asks another question, and we talk some more. The conversation shows me that he has understood some ideas at a deeper level, but a little piece was missing. His question helped him build a more complete and accurate model of how programs work.

Before class, Student 2 asks a question about our current programming assignment. My first thought is, "Have you read the assignment? It answers your question." I answer, he asks another question, and we talk some more. The conversation shows me that he is thinking carefully about details of the assignment, but assignments at this level of detail are new to him. His question helped him learn a bit more about how to read a specification.

After class, Student 3 asks a question about our previous programming assignment. We had recently looked at my solution to the assignment and discussed design issues. "Your program is so clean and organized. My program is so ugly. How can I write better-looking programs?" He is already one of the better students in the course. We discuss the role of experience in writing clearly, and I explain that the best programs are often the result of revision and refactoring. They started out just good enough, and the author worked to make them better. The conversation shows me that he cares about the quality of his code, that elegance matters as much to him as correctness. His question keeps him moving along the path to becoming a good programmer.

Three students, three questions: all three are signs of good things to come. They also remind me that even questions which seem backward at first can point forward.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

November 01, 2014 3:27 PM

Passion is a Heavy Burden

Mark Guzdial blogged this morning about the challenge of turning business teachers into CS teachers. Where is the passion? he asks.

These days, I wince every time I hear the word 'passion'. We apply it to so many things. We expect teachers to have passion for the courses they teach, students to have passion for the courses they take, and graduates to have passion for the jobs they do and the careers they build.

Passion is a heavy burden. In particular, I've seen it paralyze otherwise well-adjusted college students who think they need to try another major, because they don't feel a passion for the one they are currently studying. They don't realize that often passion comes later, after they master something, do it for a while, and come to appreciate it in ways they could never imagine before. I'm sure some of these students become alumni who are discontent with their careers, because they don't feel passion.

I think requiring all CS teachers to have a passion for CS sets the bar too high. It's an unrealistic expectation of prospective teachers and of the programs that prepare them.

We can survive without passionate teachers. We should set our sights on more realistic and relevant goals:

  • Teachers should be curious. They should have a desire to learn new things.
  • Teachers should be professional. They should have a desire to do their jobs well.
  • Teachers should be competent. They should be capable of doing their jobs well.

Curiosity is so much more important than passion for most people in most contexts. If you are curious, you will like encountering new ideas and learning new skills. That enjoyment will carry you a long way. It may even help you find your passion.

Perhaps we should set similarly realistic goals for our students, too. If they are curious, professional, and competent, they will most likely be successful -- and content, if not happy. We could all do worse.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Teaching and Learning

October 30, 2014 3:11 PM

Why It's Important to Pique Your Students' Curiosity

Because they will remember better! This Science Alert summarizes research which shows that "our minds actively reward us for seeking out the information we're most interested in". When people attain something they find intrinsically valuable, their brains release a dose of dopamine, which increases the learner's energy level and seems to enhance the connections between cells that contribute to remembering.

What I found most interesting, though, was this unexpected connection, as reported in this research paper from the journal Neuron (emphasis added):

People find it easier to learn about topics that interest them, but little is known about the mechanisms by which intrinsic motivational states affect learning. We used functional magnetic resonance imaging to investigate how curiosity (intrinsic motivation to learn) influences memory. In both immediate and one-day-delayed memory tests, participants showed improved memory for information that they were curious about and for incidental material learned during states of high curiosity.

This study suggests that when people are curious about something, they remember better what they learn about it, but they also remember better other information they come into contact with at the same time.

So, it might be worth opening class with a question or challenge that excites students and primes their curiosity, even if it is only tangentially related to the course material. In the process of learning the material they find interesting, they may learn better the material we find interesting.

When trying to teach students about loops or variables or functional decomposition, it is worth the effort to find and cultivate good problems to apply the concept to. If the problem piques the students' interest, it may increase their brains' receptiveness to learning the computational ideas.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

October 17, 2014 3:05 PM

Assorted Quotes

... on how the world evolves.

On the evolution of education in the Age of the Web. Tyler Cowen, in Average Is Over, via The Atlantic:

It will become increasingly apparent how much of current education is driven by human weakness, namely the inability of most students to simply sit down and try to learn something on their own.

I'm curious whether we'll ever see a significant change in the number of students who can and do take the reins for themselves.

On the evolution of the Web. Jon Udell, in A Web of Agreements and Disagreements:

The web works as well as it does because we mostly agree on a set of tools and practices. But it evolves when we disagree, try different approaches, and test them against one another in a marketplace of ideas. Citizens of a web-literate planet should appreciate both the agreements and the disagreements.

Some disagreements are easier to appreciate after they fade into history.

On the evolution of software. Nat Pryce on the Twitter, via The Problematic Culture of "Worse is Better":

Eventually a software project becomes a small amount of useful logic hidden among code that copies data between incompatible JSON libraries

Not all citizens of a web-literate planet appreciate disagreements between JSON libraries. Or Ruby gems.

On the evolution of start-ups. Rands, in The Old Guard:

... when [the Old Guard] say, "It feels off..." what they are poorly articulating is, "This process that you're building does not support one (or more) of the key values of the company."

I suspect the presence of incompatible JSON libraries means that our software no longer supports the key values of our company.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Managing and Leading, Software Development, Teaching and Learning

October 16, 2014 3:54 PM

For Programmers, There Is No "Normal Person" Feeling

I see this in the lab every week. One minute, my students sit peering at their monitors, their heads buried in their hands. They can't do anything right. The next minute, I hear shouts of exultation and turn to see them, arms thrust in the air, celebrating their latest victory over the Gods of Programming. Moments later I look up and see their heads again in their hands. They are despondent. "When will this madness end?"

Last week, I ran across a tweet from Christina Cacioppo that expresses nicely a feeling that has been vexing so many of my intro CS students this semester:

I still find programming odd, in part, because I'm either amazed by how brilliant or how idiotic I am. There's no normal-person feeling.

Christina is no beginner, and neither am I. Yet we know this feeling well. Most programmers do, because it's a natural part of tackling problems that challenge us. If we didn't bounce between puzzlement and exultation, we wouldn't be tackling hard-enough problems.

What seems strange to my students, and even to programmers with years of experience, is that there doesn't seem to be a middle ground. It's up or down. The only time we feel like normal people is when we aren't programming at all. (Even then, I don't have many normal-person feelings, but that's probably just me.)

I've always been comfortable with this bipolarity, which is part of why I have always felt comfortable as a programmer. I don't know how much of this comfort is natural inclination -- a personality trait -- and how much of it is learned attitude. I am sure it's a mixture of both. I've always liked solving puzzles, which inspired me to struggle with them, which helped me get better at struggling with them.

Part of the job in teaching beginners to program is to convince them that this is a habit they can learn. Whatever their natural inclination, persistence and practice will help them develop the stamina they need to stick with hard problems and the emotional balance they need to handle the oscillations between exultation and despondency.

I try to help my students see that persistence and practice are the answer to most questions involving missing skills or bad habits. A big part of helping them do this is coaching and cheerleading, not teaching programming language syntax and computational concepts. Coaching and cheerleading are not always tasks that come naturally to computer science PhDs, who are often most comfortable with syntax and abstractions. As a result, many CS profs are uncomfortable performing them, even when that's what our students need most. How do we get better at performing them? Persistence and practice.

The "no normal-person feeling" feature of programming is an instance of a more general feature of doing science. Martin Schwartz, a microbiologist at the University of Virginia, wrote a marvelous one-page article called The importance of stupidity in scientific research that discusses this element of being a scientist. Here's a representative sentence:

One of the beautiful things about science is that it allows us to bumble along, getting it wrong time after time, and feel perfectly fine as long as we learn something each time.

Scientists get used to this feeling. My students can, too. I already see the resilience growing in many of them. After the moment of exultation passes following their latest conquest, they dive into the next task. I see a gleam in their eyes as they realize they have no idea what to do. It's time to bury their heads in their hands and think.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

October 15, 2014 3:54 PM

Maybe We Just Need to Teach Better

A couple of weeks ago, I wrote Skills We Can Learn in response to a thread on the SIGCSE mailing list. Mark Guzdial has now written a series of posts in response to that thread, most recently Teaching Computer Science Better To Get Better Results. Here is one of the key paragraphs in his latest piece:

I watch my children taking CS classes, along with English, Chemistry, Physics, and Biology classes. In the CS classes, they code. In the other classes, they do on-line interactive exercises, they write papers, they use simulations, they solve problems by-hand. Back in CS, the only activity is coding with feedback. If we only have one technique for teaching, we shouldn't be surprised if it doesn't always work.

Mark then offers a reasonable hypothesis: We get poor results because we use ineffective teaching methods.

That's worthy of a new maxim of the sort found in my previous post: If things aren't going well in my course, it's probably my fault. Mark's hypothesis sounds more professional.

A skeptic might say that learning to program is like learning to speak a new human language, and when we learn new human languages we spend most of our time reading, writing, and speaking, and getting feedback from these activities. In an introductory programming course, the programming exercises are where students read, write, and get feedback. Isn't that enough?

For some students, yes, but not for all. This is also true in introductory foreign language courses, which is why teachers in those courses usually include games and other activities to engage the students and provide different kinds of feedback. Many of us do more than just programming exercises in computer science courses, too. In courses with theory and analysis, we give homework that asks students to solve problems, compute results, or give proofs for assertions about computation.

In my algorithms course, I open most days with a game. Students play the game for a while, and then we discuss strategies for playing the game well. I choose games whose playing strategies illustrate some algorithm design technique we are studying. This is a lot more fun than yet another "Design an algorithm to..." exercise. Some students seem to understand the ideas better, or at least differently, when they experience the ideas in a wider context.

I'm teaching our intro course right now, and over the last few weeks I have come to appreciate the paucity of different teaching techniques and methods used by a typical textbook. This is my first time to teach the course in ten years, and I'm creating a lot of my own materials from scratch. The quality and diversity of the materials are limited by my time and recent experience, with the result being... a lot of reading and writing of code.

What of the other kinds of activities that Mark mentions? Some code reading can be turned into problems that the students solve by hand. I have tried a couple of debugging exercises that students seemed to find useful. I'm only now beginning to see the ways in which those exercises succeeded and failed, as the students take on bigger tasks.

I can imagine all sorts of on-line interactive exercises and simulations that would help in this course. In particular, a visual simulator for various types of loops could help students see a program's repetitive behavior more immediately than watching the output of a simple program. Many of my students would likely benefit from a Bret Victor-like interactive document that exposes the internal working of, say, a for loop. Still others could use assistance with even simpler concepts, such as sequences of statements, assignment to variables, and choices.
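As a crude stand-in for such a simulator, even a few lines of Python that print a loop's state at each step can make the repetitive behavior visible. This is only a sketch of the idea; the function name and output format are my own invention, not part of any existing tool:

```python
# A text-only stand-in for a visual loop simulator: print the state
# of an accumulator at each step of a for loop, so students can
# watch the repetition unfold one iteration at a time.

def trace_running_total(values):
    """Show each iteration of a running-total loop."""
    total = 0
    print(f"start: total = {total}")
    for i, v in enumerate(values, start=1):
        total = total + v
        print(f"step {i}: added {v}, total = {total}")
    return total

trace_running_total([3, 1, 4])
```

Nothing fancy, but seeing the variable change line by line is exactly the kind of exposure of a loop's internal workings that a richer interactive document could animate.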

In any case, I second Mark's calls to action. We need to find more and better methods for teaching CS topics. We need to find better ways to make proven methods available to CS instructors. Most importantly, we need to expect more of ourselves and demand more from our profession.

When things go poorly in my classroom, it's usually my fault.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 14, 2014 2:22 PM

A Handy Guideline for Teaching Beginners

I was putting together a short exercise for my students to do in class today. "Maybe this is too difficult. Should I make it easier?"

If you ever ask yourself this question, there is only one answer.

When in doubt, make it simpler.

On the days when I trust its wisdom and worry that I've made things too easy to be interesting, I am usually surprised by how well things go. On the days when I get lazy, or a little cocky, and stick with something that caused me to wonder, there is usually very little surprise.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

October 02, 2014 3:46 PM

Skills We Can Learn

In a thread on motivating students on the SIGCSE mailing list, a longtime CS prof and textbook author wrote:

Over the years, I have come to believe that those of us who can become successful programmers have different internal wiring than most in the population. We know you need problem solving, mathematical, and intellectual skills but beyond that you need to be persistent, diligent, patient, and willing to deal with failure and learn from it.

These are necessary skills, indeed. Many of our students come to us without these skills and struggle to learn how to think like a computer scientist. And without persistence, diligence, patience, and a willingness to deal with failure and learn from it, anyone will likely have a difficult time learning to program.

Over time, it's natural to begin to think that these attributes are prerequisites -- things a person must have before he or she can learn to write programs. But I think that's wrong.

As someone else pointed out in the thread, too many people believe that to succeed in certain disciplines, one must be gifted, possessing an inherent talent for that kind of thing. Science, math, and computer science fit firmly in that set of disciplines for most people. Carol Dweck has shown that such a "fixed" mindset prevents many people from sticking with these disciplines when they hit challenges, or even from trying to learn them in the first place.

The attitude expressed in the quote above is counterproductive for teachers, whose job it is to help students learn things even when the students don't think they can.

When I talk to my students, I acknowledge that, to succeed in CS, you need to be persistent, diligent, patient, and willing to deal with failure and learn from it. But I approach these attributes from a growth mindset:

Persistence, diligence, patience, and willingness to learn from failure are habits anyone can develop with practice. Students can develop these habits regardless of their natural gifts or their previous education.

Aristotle said that excellence is not an act, but a habit. So are most of the attributes we need to succeed in CS. They are habits, not traits we are born with or actions we take.

Donald Knuth once said that only about 2 per cent of the population "resonates" with programming the way he does. That may be true. But even if most of us will never be part of Knuth's 2%, we can all develop the habits we need to program at a basic level. And a lot more than 2% are capable of building successful careers in the discipline.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

September 29, 2014 1:54 PM

"Classes I Control"

I saw a commercial recently for one of those on-line schools no one has ever heard of. In it, a non-traditional student with substantial job experience said, "At [On-Line U.], I can take classes I control."

I understand a desire for control, especially given the circumstances in which so many people go to university now. Late twenties or older, a family, a house, bills to pay. Under such conditions, school becomes a mercenary activity: get in, get a few useful skills and a credential, get out. Maximize ROI; minimize expenses.

In comparison, my years studying at a university were a luxury. I went to college straight out of high school, in a time with low tuition and even reasonable room and board. I was lucky to have a generous scholarship that defrayed my costs. But even my friends without scholarships seemed more relaxed about paying for school than students these days. It wasn't because Mom and Dad were picking up the tab, either; most of my friends paid their own way.

The cost was reasonable and as a result, perhaps, students of my era didn't feel quite the same need to control all of their classes. That is just as well, because we didn't have much control, nor much bargaining power to change how our professors worked.

What a fortunate powerlessness that was, though. In most courses, I encountered creative, intelligent professors. Once a year or so, I would walk into a course with rather pedestrian goals only to find that the professor had something different in mind, something unimagined, something wonderful. If naive, twenty-year-old Eugene had had control of all his courses, he would likely have missed out on a few experiences that changed his life.

What a great luxury it was to surrender control for eleven weeks and be surprised by new knowledge, ideas, and possibilities -- and by the professors who made the effort to take me there.

I know I was lucky in a lot of ways, and for that I am thankful. I hope that our inability or unwillingness to keep public university affordable doesn't have as an unintended casualty the wonderful surprises that can happen in our courses.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

September 28, 2014 10:21 AM

Modeling the Unseen

This passage comes from an article at LessWrong about making beliefs "pay rent" by holding them accountable for the experiences they entail:

It is a great strength of Homo sapiens that we can, better than any other species in the world, learn to model the unseen. It is also one of our great weak points. Humans often believe in things that are not only unseen but unreal.

Our students bring this double-edged sword to the classroom with them.

Students seem able to learn important ideas even when teachers present the ideas poorly, or inconsistently, or confusingly. I probably don't want to know how often I depend on this good fortune when I teach...

At the same time, students can build models that are flatly untrue. This isn't surprising. When we draw conclusions from incomplete evidence and examples, we will occasionally go astray. The search space in which students work is vast; it is remarkable that they don't go astray more often.

Teachers experience both edges of the sword. Students model the unseen and, for the most part, the model they build is invisible to us. When is it an accurate model?

One of the biggest challenges for teachers is to bring both of the unseens closer to the surface.

  • When we make what we want students to learn more visible, they are able to form more accurate models more quickly.
  • When we make the students' models more visible, we are able to diagnose inaccurate models and help students improve their models more quickly.

The second of these is what makes face-to-face instruction and one-on-one interaction so powerful. We bring students' models to the surface most effectively in the ways we discuss ideas with them. Our greatest opportunities to discuss come from asking students to build something and then discussing with them both the product and the process of making it.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 23, 2014 4:37 PM

The Obstacles in the Way of Teaching More Students to Program

All students should learn to program? Not so fast, says Larry Cuban in this Washington Post blog entry. History, including the Logo movement, illustrates several ways in which such a requirement can fail. I've discussed Cuban's article with a couple of colleagues, and all are skeptical. They acknowledge that he raises important issues, but in the end they offer a "yeah, but...". It is easy to imagine that things are different now, and the result will be similarly different.

I am willing to believe that things may be different this time. They always are. I've written favorably here in the past of the value of more students learning to program, but I've also been skeptical of requiring it. Student motivations change when they "have to take that class". And where will all the teachers come from?

In any case, it is wise to be alert to how efforts to increase the reach of programming instruction have fared. Cuban reminds us of some of the risks. One line in his article expresses what is, to my mind, the biggest challenge facing this effort:

Traditional schools adapt reforms to meet institutional needs.

Our K-12 school system is a big, complex organism (actually, fifty-one of them). It tends to keep moving in the direction of its own inertia. If a proposed reform fits its needs, the system may well adopt it. If it doesn't, but external forces push the new idea onto the system, the idea is adapted -- assimilated into what the institution already wants to be, not what the reform actually promises.

We see this in the university all the time, too. Consider accountability measures such as student outcomes assessment. Many schools have adopted the language of SOA, but rarely do faculty and programs change all that much how they behave. They just find ways to generate reports that keep the external pressures at bay. The university and its faculty may well care about accountability, but they tend to keep on doing it the way they want to do it.

So, how can we maximize the possibility of substantive change in the effort to teach more students how to program, and not simply create a new "initiative" with frequent mentions in brochures and annual reports? Mark Guzdial has been pointing us in the right direction. Perhaps the most effective way to change K-12 schools is to change the teachers we send into the schools. We teach more people to be computing teachers, or prepare more teachers in the traditional subjects to teach computing. We prepare them to recognize opportunities to introduce computing into their courses and curricula.

In this sense, universities have an irreplaceable role to play in the revolution. We teach the teachers.

Big companies can fund programs such as code.org and help us reach younger students directly. But that isn't enough. Google's CS4HS program has been invaluable in helping universities reach current K-12 teachers, but they are a small percentage of the installed base of teachers. In our schools of education, we can reach every future teacher -- if we all work together within and across university boundaries.

Of course, this creates a challenge at the meta-level. Universities are big, complex organisms, too. They tend to keep moving in the direction of their own inertia. Simply pushing the idea of programming instruction onto the system from the outside is more likely to result in harmless assimilation than in substantive change. We are back to Cuban's square one.

Still, against all these forces, many people are working to make a change. Perhaps this time will be different after all.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

September 19, 2014 3:34 PM

Ask Yourself, "What is the Pattern?"

I ran across this paragraph in an essay about things you really need to learn in college:

Indeed, you should view the study of mathematics, history, science, and mechanics as the study of archetypes, basic patterns that you will recognize over and over. But this means that, when you study these disciplines, you should be asking, "what is the pattern" (and not merely "what are the facts"). And asking this question will actually make these disciplines easier to learn.

Even in our intro course, I try to help students develop this habit. Rather than spending all of our time looking at syntax and a laundry list of language features, I am introducing them to some of the most basic code patterns, structures they will encounter repeatedly as they solve problems at this level. In week one came Input-Process-Output. Then after learning basic control structures, we encountered guarded actions, range tests, running totals, sentinel loops, and "loop and a half". We encounter these patterns in the process of solving problems.
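For readers who don't know these patterns by name, here is one plausible Python rendering of two of them working together: a running total inside a sentinel loop. This is an illustration of the patterns, not code taken from the course itself:

```python
# Two of the low-level patterns sketched together: a sentinel loop
# (process values until a special "stop" value appears) combined
# with a running total. Illustrative code, not course material.

SENTINEL = -1

def average_scores(scores):
    """Average the scores that appear before the sentinel value."""
    total = 0          # running total
    count = 0
    for score in scores:
        if score == SENTINEL:   # the sentinel ends the loop early
            break
        total = total + score
        count = count + 1
    return total / count if count > 0 else 0.0

print(average_scores([90, 85, 95, -1, 70]))  # prints 90.0; stops at -1
```

Each piece recurs on its own in many other problems, which is what makes it worth naming as a pattern rather than treating it as a one-off trick.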

While they are quite low-level, they are not merely idioms. They are patterns every bit as much as patterns at the level of the Gang of Four or PoSA. They solve common problems, recur in many forms, and are subject to trade-offs that depend on the specific problem instance.

They compose nicely to create larger programs. One of my goals for next week is to have students solve new problems that allow them to assemble programs from ideas they have already seen. No new syntax or language features, just new problems.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

September 04, 2014 3:32 PM

Language Isn't Just for Experts

Stephen Ramsey wrote The Mythical Man-Finger, in defense of an earlier piece on the virtues of the command line. The gist of his argument is this:

... the idea that language is for power users and pictures and index fingers are for those poor besotted fools who just want toast in the morning is an extremely retrograde idea from which we should strive to emancipate ourselves.

Ramsay is an English professor who works in digital humanities. From the writings posted on his web site, it seems that he spends nearly as much time teaching and doing computing these days as he spends on the humanities. This opens him to objections from his colleagues, some of whom minimize the relevance of his perspective for other humanists by reminding him that he is a geek. He is one of those experts who can't see past his own expertise. We see this sort of rhetorical move in tech world all the time.

I think the case is quite the opposite. Ramsay is an expert on language. He knows that language is powerful, that language is more powerful than the alternatives in many contexts. When we hide language from our users, we limit them. Other tools can optimize for a small set of particular use cases, but they generally make it harder to step outside of those lines drawn by the creator of the tools: to combine tasks in novel ways, to extend them, to integrate them with other tools.

Many of my intro students are just beginning to see what knowing a programming language can mean. Giving someone language is one of the best ways to empower them, and also a great way to help them see what is possible in the first place.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

September 03, 2014 4:13 PM

My PhD Advisor Takes on a New Challenge

Just this week I learned that Jon Sticklen, my PhD advisor, has moved to Michigan Tech to chair its Department of Engineering Fundamentals. As I recall, Michigan Tech focuses much of its effort on undergraduate engineering education. This makes it a good fit for Jon, who has been working on projects in engineering education at Michigan State for a number of years now, with some success. I wish him and them well.

By the way, if you can handle a strong winter, then Tech can be a great place to live. The upper peninsula of Michigan is stunning!


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

September 01, 2014 3:17 PM

Back to the Beginning

August was quiet on my blog only because it was anything but quiet elsewhere. The department office had its usual August business plus a couple of new challenges thrown in. I spent one day on jury duty, one day in retreat with fellow department heads, and one day on a long bike ride. My older daughter was home for a few days before heading back to college for her senior year, and my younger daughter was preparing to leave for college for the first time.

On top of that, I am teaching our intro course this fall. I have not taught intro since the fall of 2006, when I introduced media computation into our Java track. Before that we have to go back to the late 1990s to find me in front of a classroom full of students embarking on their first programming experience. I'm excited and a little apprehensive. There is great opportunity in helping students lay the foundation for the rest of their CS coursework. But there is also great risk. For the most part, these students have never worked with a floating-point number or a variable or an assignment statement, at least in the context of a programming language. How badly might I lead them astray?

We now teach Python in this track. I could have used media comp as our organizing theme again, but the instructors who have been teaching in this track for the last few years have moved to a data manipulation theme, using a textbook by Bill Punch and Rich Enbody. I decided to do the same. There is no sense in me disrupting the flow of the track, especially with the likelihood that I won't teach the course again in the spring. (In the interest of full disclosure, I told my students that Bill was one of my mentors in grad school at Michigan State.)

The first week of class went well. As expected, the students reminded me how different teaching intro can be. There are so many ways for novices to interpret so many things... Type a simple expression or two into the Python shell, ask them what they think, and find out for yourself!
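For the curious, here are a few expressions of the sort that reliably surprise novices in a Python 3 shell. These are my own picks, not necessarily the ones we used in class:

```python
# Small expressions with answers that novices often don't expect.

print(7 / 2)      # 3.5  -- / is true division in Python 3
print(7 // 2)     # 3    -- // is floor division; the fraction is dropped
print(0.1 + 0.2)  # 0.30000000000000004 -- floating-point rounding
print("3" + "4")  # 34   -- + on strings means concatenation, not addition
```

Each of these is a chance to ask students to predict first and explain afterward, which surfaces the mental model they are building.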

Every teacher knows that the first day of class shatters any illusion we might have of teaching the perfect course. Such illusions are more common for me when I teach a course for the first time, or the first time in a long while. The upside of shattering the illusion is that I can move on to the daily business of getting better.

At the end of our first lab session, I walked with one student as he was leaving the room. He had asked a few questions during the exercise. I asked how he felt, now that he had successfully completed his first lab as a CS major. "I am excited and scared," he said. "Scared has been keeping me away from computer science, but I know I have to try. I'm excited."

I know exactly how he feels. I'm apprehensive, not in fearing failure or catastrophe, but in being aware that I must remain vigilant. When we teach, we affect other peoples' lives. Teaching a first course in the discipline, introducing students to a new set of ideas and way of thinking, is a multiplier on this effect. I owe it to these students to help them overcome their fears and realize their excitement.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

August 19, 2014 1:49 PM

The Universal Justification

Because we need it to tell better stories.

Ethan Zuckerman says that this is the reason people are addicted to big data, quoting Maciej Ceglowski's wonderful The Internet with a Human Face. But if you look deep enough, this is the reason that most of us do so many of the things we do. We want to tell better stories.

As I teach our intro course this fall, I am going to ask myself occasionally, "How does what we are learning today help my students tell a better story?" I'm curious to see how that changes the way I think about the things we do in class.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

July 30, 2014 1:04 PM

The Reach of a MOOC

Doug Schmidt

Distributed computing, OOP, and patterns guru Doug Schmidt recently posted on Facebook a statistical recap of his summer MOOC on pattern-oriented software architectures for mobile devices and cloud computing. Approximately 80,000 people signed up for the course, 40,000 people showed up, and 4,000 people completed the course in a meaningful way (taking automated quizzes and possibly doing peer-graded programming assignments). So that's either a 5% completion rate for the course, or 10%, depending on which standard you prefer.

A lot of folks complain that the reach of MOOCs is muted by their notoriously low completion rates. But Schmidt puts the numbers into perspective:

... I've taught ~1,000 students at Wash U., UC Irvine, and Vanderbilt in the past 20 years, so regardless of the completion *rate* the opportunity to reach > 4,000 students and teach them about patterns and frameworks for concurrent programming in Java and Android is pretty cool!

Schmidt has a lot of knowledge and experience to share. His MOOC shared it with an awful lot of people in one offering.

My department has not attempted a "massive" on-line course yet, though a few of our faculty did take a small first step last month. As Mark Guzdial lamented a few months ago, Google required that all of its CS4HS summer workshops be offered on-line. A few of my colleagues, led by Ben Schafer, have taught CS4HS workshops for the last five years, reaching in the ballpark of 20-25 teachers from northeast Iowa in each of the first four years. As reported in the department's own Facebook post, this year the course enrolled 245 teachers from thirty-nine states and Puerto Rico. I haven't seen final numbers for the workshop yet, but just after it ended Ben reported good participation and positive evaluations from the teachers in the course.

I don't know yet what I think about MOOCs. The trade-offs are numerous, and most of my teaching experience is in smaller, more intimate settings that thrive on individual relationships with students. But I can't deny the potential of MOOCs to reach so many people and to provide access to valuable courses to people who likely could never attend them otherwise.

On a lighter note, the first comment in response to Schmidt's Facebook post is my favorite in a while:

I just loaded the dishwasher. Our jobs are so similar! Crazy, eh?

Don't worry, Kristie. Sometimes, I look at all the amazing things Doug does and feel exactly the same.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

July 17, 2014 10:00 AM

A New Commandment

... I give unto you:

Our first reaction to any comrade, any other person passionate about and interested in building things with computers, any human crazy and masochistic enough to try to expand the capabilities of these absurd machines, should be empathy and love.

Courtesy of Britt Butler.

I hope to impart such empathy and love to my intro students this fall. Love to program, and be part of a community that loves and learns together.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

July 16, 2014 2:11 PM

Burn All Your Sermons

Marketers and bridge players have their Rules of Seven. Teachers and preachers might, too, if they believe this old saw:

Once in seven years I burn all my sermons; for it is a shame if I cannot write better sermons now than I did seven years ago.

I don't have many courses in which I lecture uninterrupted for long periods of time. Most of my courses are a mixture of short lectures, student exercises, and other activities that explore or build upon whatever we are studying. Even when I have a set of materials I really like, which have been successful for me and my students in the past, I am forever reinventing them, tweaking and improving as we move through the course. This is in the same spirit as the rule of seven: surely I can make something better since the last time I taught the course.

Having a complete set of materials for a course to start from can be a great comfort. It can also be a straitjacket. The high-level structure of a course design limits how we think about the essential goals and topics of the course. The low-level structure generally optimizes for specific transitions and connections, which limits how easily we can swap in new examples and exercises.

Even as an inveterate tinkerer, I occasionally desire to break out of the straitjacket of old material and make a fresh start. Burn it all and start over. Freedom! What I need to remember will come back to me.

The adage quoted above tells us to do this regularly even if we don't feel the urge. The world changes around us. Our understanding grows. Our skills as a writer and storyteller grow. We can do better.

Of course, starting over requires time. It's a lot quicker to pull a previous offering out of an old directory of courses and clean it up around the edges. When I decide to redesign a course from the bottom up, I usually have to set aside part of a summer to allow for long hours writing from scratch. This is a cost you have to take into account any time you create a new course.

Being in computer science makes it easier to force ourselves to start from scratch. While many of the principles of CS remain the same across decades, the practices and details of the discipline change all the time. And whatever we want to say about timeless principles, the undergrads in my courses care deeply about having some currency when they graduate.

In Fall 2006, I taught our intro course. The course used Java, which was the first language in our curriculum at that time. Before that, the last time I had taught the course, our first language was Pascal. I had to teach an entirely new course, even though many of the principles of programming I wanted to teach were the same.

I'm teaching our intro course again this fall for the first time since 2006. Python is the language of choice now. I suppose I could dress my old Java course in a Python suit, but that would not serve my students well. It also wouldn't do justice to the important ideas of the course, or to Python. Add to this that I am a different -- and I hope better -- teacher and programmer now than I was eight years ago, and I have all the reasons I need to design a new course.

So, I am getting busy. Burn all the sermons.

Of course, we should approach the seven-year advice with some caution. The above passage is often attributed to theologian John Wesley. And indeed he did write it. However, as is so often the case, it has been taken out of context. This is what Wesley actually wrote in his journal:

Tuesday, September 1.--I went to Tiverton. I was musing here on what I heard a good man say long since--"Once in seven years I burn all my sermons; for it is a shame if I cannot write better sermons now than I could seven years ago." Whatever others can do, I really cannot. I cannot write a better sermon on the Good Steward than I did seven years ago; I cannot write a better on the Great Assize than I did twenty years ago; I cannot write a better on the Use of Money, than I did nearly thirty years ago; nay, I know not that I can write a better on the Circumcision of the Heart than I did five-and-forty years ago. Perhaps, indeed, I may have read five or six hundred books more than I had then, and may know a little more history, or natural philosophy, than I did; but I am not sensible that this has made any essential addition to my knowledge in divinity. Forty years ago I knew and preached every Christian doctrine which I preach now.

Note that Wesley attributes the passage to someone else -- and then proceeds to deny its validity in his own preaching! We may choose to adopt the Rule of Seven in our teaching, but we cannot do so with Wesley as our prophet.

I'll stick with my longstanding practice of building on proven material when that seems best, and starting from scratch whenever the freedom to tell a new story outweighs the value of what has worked for me and my students in the past.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

July 15, 2014 3:19 PM

Ambiguous Questions and Wrong Answers

Pianist James Boyk recently shared a story about mathematician Andrew Gleason with me. Boyk had studied a bit with Gleason, who was known for his work in cryptography and in helping to solve Hilbert's fifth problem. Gleason also had a strong interest in early math education. Once, after observing first-grade math teaching for some weeks, Gleason said:

I never saw a kid give a wrong answer. I heard a lot of ambiguous or erroneous questions, but never a wrong answer.

As Boyk told me, there's a lesson in this attitude for everyone. So often in my own courses, I start to mark a student's answer as incorrect, step back, and realize that the question itself was ambiguous. The student had given a correct answer -- only to a question different than the one I had in my mind.

Not all my questions are ambiguous or erroneous, of course. Sometimes the student does give an incorrect answer. For a while now, I've been trying to retrain my mind to think in terms of incorrect answers rather than wrong answers. Yes, these words are synonyms, but their connotations differ. The meaning of "incorrect" is usually limited to the objective sense of being not in accordance with fact or standards. "Wrong" has a wider meaning that can also include being immoral or dishonest. Those words seem a bit too judgmental to be applied to my students' sorting algorithms and Scheme programs.

In the case of erroneous answers, I find I'm usually more effective if I focus on the factual incorrectness of an answer. What misconception or missing piece of knowledge led the student to this conclusion? How can I help the student recognize this flaw in the reasoning and reason more accurately in the future?

That seems to be the big lesson of Gleason's comment: to keep the teacher's focus on how a student's answer is the correct answer, given what he or she knows at a given point in time. The teacher's job is to ask better questions and lead students to a better state of knowing and doing.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

July 11, 2014 2:12 PM

Unclear on the Concept

I love the following passage from Dan Meyer. He is commenting on a survey reported here (scroll to the bottom of the page), in which math teachers were asked what the greatest limitations are on how they teach. 38.79% said, "students who are uninterested", and 23.56% said, "students who are disruptive". Says Meyer:

It's like reading a survey of firefighters in which, when asked about the greatest limitation on how they fight fires, 38.79% responded "all the fires" and 23.56% responded "being a first responder."

Students who are uninterested are part of the job. I don't know that we can make every student who walks into my classroom interested in CS. But I do know that we can teach CS in ways that lose the interest of too many students with the potential to succeed. Trust me; I've done it.

Also, let's not blame the K-12 system for creating students who are predisposed to be uninterested and to lack curiosity. It does not matter if that is true or not. My job is to teach the students in my classroom, not some mythical students somewhere else.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 30, 2014 4:29 PM

Sometimes, The Truth Is Boring

MacKenzie Bezos, in her Amazon review of The Everything Store: Jeff Bezos and the Age of Amazon, writes:

One of the biggest challenges in non-fiction writing is the risk that a truthfully balanced narration of the facts will be boring, and this presents an author with some difficult choices.

Teachers face these choices all the time, too. Whenever I teach a course, I want to help my students be excited about the ideas and tools that we are studying. I like to tell stories that entertain as well as illuminate. But not every moment of learning a new programming language, or a new programming style, or a set of mathematical ideas, is going to have my students on the edges of their seats.

The best I can hope for is that the exciting parts of the course will give us the momentum we need to make it through the more boring parts. A big part of education is learning that the best parts of a course are the motivation for doing the hard work that gets us to the next exciting idea.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 26, 2014 3:27 PM

An Argument Against Personalizing Instruction

Most people seem to believe that personalizing instruction to each individual is an unalloyed good. However, Benjamin Riley argues that two common axioms of individualized instruction "run afoul of our current understanding of cognition":

  • Students will learn more if they have more control over what they learn (the "path").
  • Students will learn more if they have more control over when they learn (the "pace").

He says that both run the risk of giving the learner too much freedom.

Path. Knowledge is cumulative, and students need a suitable context in which to interpret and assimilate new information. If they try to learn things in the wrong order, they may not be able to make sense of the new information. They are also more likely to become frustrated, which impedes learning further.

Pace. Thinking is hard, and learning isn't always fun. Most people have a natural tendency to shy away from difficult or unpleasant tasks, which can slow our overall rate of learning when we have to choose what to work on next.

(Dan Meyer offers a second reason to doubt the pace axiom: a lot of the fun and insight that comes from learning happens when we learn synchronously with a group.)

Of course, we could take Riley's arguments to their extremes and eliminate any consideration of the individual from our instructional plans. That would be a mistake. For example, each student comes into the classroom with a particular level of understanding and a particular body of background knowledge. When we take this background into account in a reasonable way, then we should be able to maximize each student's learning potential. When we don't, we unnecessarily limit their learning.

However, on balance, I agree with Riley's concerns. Some of my university students benefit greatly when given control over their own learning. Most, though, struggle making choices about what to think about next and why. They also tend not to give themselves enough credit for how much they can learn if only they put in the time and energy studying and practicing. They need help with both path and pace.

I've been teaching long enough now to respect the value that comes with experience as a teacher. By no means am I a perfect teacher, but after teaching a course for a few times I begin to see ways in which I can order topics and pace the coverage in ways that help more students succeed in the course. I don't think I appreciated this when I was a student. The best teachers I ever had were the ones who had this experience and used it well.

I'll stick with my usual approach of trying to design a curriculum intentionally with regard to both order and timing, while at the same time trying to take my students' current knowledge into account as we move through the course.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 25, 2014 2:03 PM

You Shouldn't Need a License to Program

In Generation Liminal, Dorian Taylor recalls how the World Wide Web arrived at the perfect time in his life:

It's difficult to appreciate this tiny window of opportunity unless you were present for it. It was the World-Wild West, and it taught me one essential idea: that I can do things. I don't need a license, and I don't need credentials. I certainly don't need anybody telling me what to do. I just need the operating manual and some time to read it. And with that, I can bring some amazing -- and valuable -- creations to life.

I predate the birth of the web. But when we turned on the computers at my high school, BASIC was there. We could program, and it seemed the natural thing to do. These days, the dominant devices are smartphones and tablets. Users begin their experience far away from the magic of creating. It is a user experience for consumers.

One day many years ago, my older daughter needed to know how many words she had written for a school assignment. I showed her Terminal.app and wc. She was amazed by its simplicity; it looked like nothing else she'd ever seen. She still uses it occasionally.

I spent several days last week watching middle schoolers -- play. They consumed other people's creations, including some tools my colleagues set up for them. They have creative minds, but for the most part it doesn't occur to them that they can create things, too.

We need to let them know they don't need our permission to start, or credentials defined by anyone else. We need to give them the tools they need, and the time to play with them. And, sometimes, we need to give them a little push to get started.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

June 23, 2014 3:13 PM

The Coder's High Beats The Rest

At least David Auerbach thinks so. One of the reasons is that programming has a self-perpetuating cycle of creation, implementation, repair, and new birth:

"Coding" isn't just sitting down and churning out code. There's a fair amount of that, but it's complemented by large chunks of testing and debugging, where you put your code through its paces and see where it breaks, then chase down the clues to figure out what went wrong. Sometimes you spend a long time in one phase or another of this cycle, but especially as you near completion, the cycle tightens -- and becomes more addictive. You're boosted by the tight feedback cycle of coding, compiling, testing, and debugging, and each stage pretty much demands the next without delay. You write a feature, you want to see if it works. You test it, it breaks. It breaks, you want to fix it. You fix it, you want to build the next piece. And so on, with the tantalizing possibility of -- just maybe! -- a perfect piece of code gesturing at you in the distance.

My experience is similar. I can get lost for hours in code, and come out tired but mentally energized. Writing has never given me that kind of high, but then I've not written a really long piece of prose in a long time. Perhaps writing fiction could give me the sort of high I experience when deep in a program.

What about playing games? Back in my younger days, I experienced incredible flow while playing chess for long stretches. I never approached master level play, but a good game could still take my mind to a different level of consciousness. That high differed from a coder's high, though, in that it left me tired. After a three-round day at a chess tournament, all I wanted to do was sleep.

Getting lost in a computer game gives me a misleading feeling of flow, but it differs from the chess high. When I come out of a session lost in most computer games, I feel destroyed. The experience doesn't give life the way coding does, or the way I imagine meditation does. I just end up feeling tired and used. Maybe that's what drug addiction feels like.

I was thinking about computer games even before reading Auerbach's article. Last week, I was sitting next to one of the more mature kids in our summer camp after he had just spent some time gaming, er, collecting data for our study of internet traffic. We had an exchange that went something like this:

Student: I love this feeling. I'd like to create a game like this some day.

Eugene: You can!

Student: Really? Where?

Eugene: Here. A group of students in my class last month wrote a computer game next door. And it's way cooler than playing a game.

I was a little surprised to find that this young high schooler had no idea that he could learn computer programming at our university. Or maybe he didn't make the connection between computer games and computer programs.

In any case, this is one of the best reasons for us CS profs to get out of our university labs and classrooms and interact with younger students. Many of them have no way of knowing what computer science is, what they can do with computer science, or what computer science can do for them -- unless we show them!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

June 17, 2014 2:38 PM

Cookies, Games, and Websites: A Summer Camp for Kids

Cut the Rope 2 logo

Today is the first day of Cookies, Games, and Websites, a four-day summer camp for middle-school students being offered by our department. A colleague of mine developed the idea for a workshop that would help kids of that age group understand better what goes on when they play games on their phones and tablets. I have been helping, as a sounding board for ideas during the prep phase and now as a chaperone and helper during the camp. A local high school student has been providing much more substantial help, setting up hardware and software and serving as a jack-of-all-trades.

The camp's hook is playing games. To judge from this diverse group of fifteen students from the area, kids this age already know very well how to download, install, and play games. Lots of games. Lots and lots of games. If they had spent as much time learning to program as they seem to have spent playing games, they would be true masters of the internet.

The first-order lesson of the camp is privacy. Kids this age play a lot of games, but they don't have a very good idea how much network traffic a game like Cut the Rope 2 generates, or how much traffic accessing Instagram generates. Many of their apps and social websites allow them to exercise some control over who sees what in their space, but they don't always know what that means. More importantly, they don't realize how important all this is, because they don't know how much traffic goes on under the hood when they use their mobile devices -- and even when they don't!

127.0.0.1

The second-order lesson of the camp, introduced as a means to an end, is computing: the technology that makes communication on the web possible, and some of the tools they can use to look at and make sense of the network traffic. We can use some tools they already know and love, such as Google maps, to visualize the relevant data.

This is a great idea: helping young people understand better the technology they use and why concepts like privacy matter to them when they are using that technology. If the camp is successful, they will be better-informed users of on-line technology, and better prepared to protect their identities and privacy. The camp should be a lot of fun, too, so perhaps one or two of them will be interested in diving deeper into computer science after the camp is over.

This morning, the campers learned a little about IP addresses and domain names, mostly through interactive exercises. This afternoon, they are learning a little about watching traffic on the net and then generating traffic by playing some of their favorite games. Tomorrow, we'll look at all the traffic they generated playing, as well as all the traffic generated while their tablets were idle overnight.

We are only three-fourths of the way through Day 1, and I have already learned my first lesson: I really don't want to teach middle school. The Grinch explains why quite succinctly: noise, noise, NOISE! One thing seems to be true of any room full of fifteen middle-school students: several of them are talking at any given time. They are fun people to be around, but they are wearing me out...


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

June 12, 2014 2:29 PM

Points of Emphasis for Teaching Design

As I mentioned recently, design skills were a limiting factor for some of the students in my May term course on agile software development. I saw similar issues for many in my spring Algorithms course as well. Implementing an algorithm from lecture or reading was straightforward enough, but organizing the code of the larger system in which the algorithm resided often created challenges for students.

I've been thinking about ways to improve how I teach design in the future, both in courses where design is a focus and in courses where it lives in the background of other material. Anything I come up with can also be part of a conversation with colleagues as we talk about design in their courses.

I read Kent Beck's initial Responsive Design article when it first came out a few years ago and blogged about it then, because it had so many useful ideas for me and my students. I decided to re-read the article again last week, looking for a booster shot of inspiration.

First off, it was nice to remember how many of the techniques and ideas that Kent mentions already play a big role in my courses. Ones that stood out on this reading included:

  • taking safe steps,
  • isolating changes within modules,
  • recognizing that design is a team sport, fundamentally a social activity, and
  • playing with words and pictures.

My recent experiences in the classroom made two other items in Kent's list stand out as things I'll probably emphasize more, or at least differently, in upcoming courses.

Exploit Symmetries. Divide similar elements into identical parts and different parts.

As I noted in my first blog about this article, many programmers find it counterintuitive to use duplication as a tool in design. My students struggle with this, too. Soon after that blog entry, I described an example of increasing duplication in order to eliminate duplication in a course. A few years later, in a fit of deja vu, I wrote about another example, in which code duplication is a hint to think differently about a problem.

I am going to look for more opportunities to help students see ways in which they can make design better by isolating code into the identical and the different.
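In Python, the move from similar-but-duplicated code to isolated sameness and difference might look something like this (a hypothetical sketch; the grading example and all names are mine, not Kent's):

```python
# Two similar functions: the loop-and-format skeleton is identical,
# but the selection threshold and the label differ.

def passing_report(students):
    lines = []
    for s in students:
        if s["grade"] >= 60:
            lines.append(f"PASS: {s['name']}")
    return "\n".join(lines)

def honors_report(students):
    lines = []
    for s in students:
        if s["grade"] >= 90:
            lines.append(f"HONORS: {s['name']}")
    return "\n".join(lines)

# Exploiting the symmetry: isolate the identical part (the skeleton)
# from the different parts (the threshold and the label).

def report(students, threshold, label):
    lines = [f"{label}: {s['name']}"
             for s in students if s["grade"] >= threshold]
    return "\n".join(lines)

def passing_report2(students):
    return report(students, 60, "PASS")

def honors_report2(students):
    return report(students, 90, "HONORS")
```

Making the two functions look as alike as possible first -- even if that temporarily adds duplication -- is what exposes the shared skeleton that can then be factored out.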

Inside or Outside. Change the interface or the implementation but not both at the same time.

This is one of the fundamental tenets of design, something students should learn as early as possible. I was surprised to see how normal it was for students in my agile development course not to follow this pattern, even when it quickly got them into trouble. When you try to refactor interface and implementation at the same time, things usually don't go well. That's not a safe step to take...

My students and I talked a lot during the course about writing unit tests before writing code. Only afterward did it occur to me that Inside or Outside is the basic element of test-first programming and TDD. First, we write the test; this is where we design the interface of our system. Then, we write code to pass the test; this is where we implement the system.
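In Python, the rhythm might look like this (a hypothetical sketch; the median example is mine, not from the course):

```python
import unittest

# Outside first: the tests define the interface we are committing to,
# a function median() that takes a list of numbers.
class TestMedian(unittest.TestCase):
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length(self):
        self.assertEqual(median([1, 2, 3, 4]), 2.5)

# Inside second: code written only to pass the tests, free to change
# as long as the interface above stays fixed.
def median(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2
```

Later we might swap a selection algorithm in for sorted() without touching the tests, or grow the interface with new tests before touching the implementation -- but not both at the same time.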

Again, in upcoming courses, I am going to look for opportunities to help students think more effectively about the distinction between the inside and the outside of their code.

Thus, I have a couple of ideas for the future. Hurray! Even so, I'm not sure how I feel about my blog entry of four years ago. I had the good sense to read Kent's article back then, draw some good ideas from it, and write a blog entry about them. That's good. But here I am four years later, and I still feel like I need to make the same sort of improvements to how I teach.

In the end, I am glad I wrote that blog entry four years ago. Reading it now reminds me of thoughts I forgot long ago, and reminds me to aim higher. My opening reference to getting a booster shot seems like a useful analogy for talking about this situation in my teaching.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

June 07, 2014 10:17 AM

Pascal, Forgiveness, and CS1

Last time, I thought about the role of forgiveness in selecting programming languages for instruction. I mentioned that BASIC had worked well for me as a first programming language, as it had worked for so many others. Yet I would probably never choose it as a language for CS1, at least for more than a few weeks of instruction. It is missing a lot of the features that we want CS majors to learn about early. It's also a bit too free.

In that post, I did say that I still consider Pascal a good standard for first languages. It dominated CS1 for a couple of decades. What made it work so well as a first instructional language?

Pascal struck a nice balance for its time. It was small enough that students could master it all, and it provided constructs for structured programming. It had the sort of syntax that enabled a compiler to give students guidance about errors, but its compilers did not seem overbearing. It had a few "gotchas", such as the ; as a statement separator, but not so many that students were constantly perplexed. (Hey to C++.) Students were able to try things out and get programs to work without becoming demoralized by a seemingly endless stream of complaints.

(Aside: I have to admit that I liked Pascal's ; statement separator. I understood it conceptually and, in a strange way, appreciated it aesthetically. Most others seem to have disagreed with me...)

Python has attracted a lot of interest as a CS1 language in recent years. It's the first popular language in a long while that brings to mind Pascal's feel for me. However, Pascal had two things that supported the teaching of CS majors that Python does not: manifest types and pointers. I love dynamically-typed languages with managed memory and prefer them for my own work, but using that sort of language in CS1 creates some new challenges when preparing students for upper-division majors courses.

So, Pascal holds a special place for me as a CS1 language, though it was not the language I learned there. We used it to teach CS1 for many years and it served me and our students well. I think it balances a good level of forgiveness with a reasonable level of structure, all in a relatively small package.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

June 06, 2014 4:24 PM

Programming Languages and the Right Level of Forgiveness

In the last session of my May term course on agile software development, discussion eventually turned to tools and programming languages. We talked about whether some languages are more suited to agile development than others, and whether some languages are better tools for a given developer team at a given time. Students being students, we also discussed the languages used in CS courses, including the intro course.

Having recently thought some about choosing the right languages for early CS instruction, I was interested to hear what students thought. Haskell and Scala came up; they are the current pet languages of students in the course. So did Python, Java, and Ada, which are languages our students have seen in their first-year courses. I was the old guy in the room, so I mentioned Pascal, which I still consider a good standard for comparing CS1 languages, and classic Basic, which so many programmers of my generation and earlier learned as their first exposure to the magic of making computers do our bidding.

Somewhere in the conversation, an interesting idea came up regarding the first language that people learn: good first languages provide the right amount of forgiveness when programmers make mistakes.

A language that is too forgiving will allow the learner to be sloppy and fall into bad habits.

A language that is not forgiving enough can leave students dispirited under a barrage of "not good enough": type errors and syntax gotchas.

What we mean by 'forgiving' is hard to define. For this and other reasons, not everyone agrees with this claim.

Even when people agree in principle with this idea, they often have a hard time agreeing on where to draw the line between too forgiving and not forgiving enough. As with so many design decisions, the correct answer is likely a local maximum that balances the forces at play among the teachers, students, and desired applications involved.

I found Basic to be just right. It gave me freedom to play, to quickly make interesting programs run, and to learn from programs that didn't do what I expected. For many people's taste, though, Basic is too forgiving and leads to diseased minds. (Hey to Edsger Dijkstra.) Maybe I was fortunate to learn how to use GOSUBs early and well.

Haskell seems like a language that would be too unforgiving for most learners. Then again, neither my students nor I have experience with it as a first-year language, so maybe we are wrong. We could imagine ways in which learning it first would lead to useful habits of thought about types and problem decomposition. We are aware of schools that use Haskell in CS1; perhaps they have made it work for them. Still, it feels a little too judgmental...

In the end, you can't overlook context and the value of good tools. Maybe these things shift the line of "just right" forgiveness for different audiences. In any case, finding the right level seems to be a useful consideration in choosing a language.

I suspect this is true when choosing languages to work in professionally, too.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

June 05, 2014 2:45 PM

Choosing the Right Languages for Early CS Instruction is Important

In today's ACM interview, Donald Knuth identifies one of the problems he has with computer science instruction:

Similarly, the most common fault in computer classes is to emphasize the rules of specific programming languages, instead of to emphasize the algorithms that are being expressed in those languages. It's bad to dwell on form over substance.

I agree. The challenges are at least two in number:

  • ... finding the right level of support for the student learning his or her first language. It is harder for students to learn their first language than many people realize, at least until they have tried to teach it.

  • ... helping students develop the habit and necessary skills to learn new languages on their own with some facility. For many, this involves overcoming the fear they feel until they have done it on their own a time or two.

Choosing the right languages can greatly help in conquering Challenges 1 and 2. Choosing the wrong languages can make overcoming them almost impossible, if only because we lose students before they cross the divide.

I guess that makes choosing the right languages Challenge 3.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

May 30, 2014 4:09 PM

Programming is Social

The February 2014 issue of Math Horizons included A Conversation With Steven Strogatz, an interview conducted by Patrick Honner. The following passage came to mind this week:

PH: Math is collaborative?

SS: Yeah, math is social. ... The fact that math is social would come as a surprise to the people who think of it as antisocial.

PH: It might also come as a surprise to some math teachers!

SS: It's extremely social. Mathematicians constantly spend time talking to each other about places where they're stuck. They get insights from each other, new ways of looking at things. Sometimes it's just to commiserate.

Programming is social, too. Most people think it's not. With assistance from media portrayals of programmers and sloppy stereotypes of our own, they think most of us would prefer to work alone in the dark. Some do, of course, but even then most programmers I know like to talk shop with other programmers all the time. They like to talk about the places where they are stuck, as well as the places they used to be stuck. War stories are the currency of the programmer community.

I think a big chunk of the "programming solo" preference many programmers profess is learned habit. Most programming instruction and university course work encourages or requires students to work alone. What if we started students off with pair programming in their CS 1 course, and other courses nurtured that habit throughout the rest of their studies? Perhaps programmers would learn a different habit.

My agile software development students this semester are doing all of their project work via pair programming. Class time is full of discussion: about the problem they are solving, about the program they are evolving, and about the intricacies of Java. They've been learning something about all three, and a large part of that learning has been social.

They've only been trying out XP for a couple of weeks, so naturally the new style hasn't replaced their old habits. I see them fall out of pairing occasionally. One partner will switch off to another computer to look up the documentation for a Java class, and pretty soon both partners are quietly looking at their own screens. Out of deference to me or the course, though, they return after a couple of minutes and resume their conversation. (I'm trying to be a gentle coach, not a ruthless tyrant, when it comes to the practices.)

I suspect a couple members of the class would prefer to program on their own, even after noticing the benefits of pairing. Others really enjoy pair programming but may well fall back into solo programming after the class ends. Old habits die hard, if at all. That's too bad, because most of us are better programmers when pairing.

But even if they do choose, or fall back into, old habits, I'm sure that programming will remain a social activity for them, at some level. There are too many war stories to tell.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 28, 2014 4:20 PM

Programming for Everyone, Intro Physics Edition

Rhett Allain asked his intro physics students to write a short bit of Python code to demonstrate some idea from the course, such as the motion of an object with a constant force, or projectile motion with air resistance. Apparently, at least a few complained: "Wait! I'm not a computer scientist." That caused Allain to wonder...

I can just imagine the first time a physics faculty told a class that they needed to draw a free body diagram of the forces on an object for the physics solutions. I wonder if a student complained that this was supposed to be a physics class and not an art class.

As Allain points out, the barriers that used to prevent students from doing numerical calculations in computer programs have begun to disappear. We have more accessible languages now, such as Python, and powerful computers are everywhere, capable of running VPython and displaying beautiful visualizations.

About all that remains is teaching all physics students, even the non-majors, a little programming. The programs they write are simply another medium through which they can explore physical phenomena and perhaps come to understand them better.
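The programs in question can be tiny. Here is a plain-Python sketch of the sort of numerical calculation Allain assigns; the drag model, constants, and function name are my own illustrative choices, not taken from his course:

```python
import math

# Projectile motion with air resistance, integrated in small Euler steps.
# A plain-Python stand-in for the VPython programs physics students write.

def simulate(vx, vy, drag=0.1, g=9.8, dt=0.01):
    """Return the horizontal distance traveled before returning to launch height."""
    x = y = 0.0
    while True:
        speed = math.hypot(vx, vy)
        ax = -drag * speed * vx      # drag opposes the velocity vector
        ay = -g - drag * speed * vy
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        if y < 0:                    # back at launch height: done
            return x
```

With drag turned off (`drag=0.0`), the computed range converges on the textbook formula 2·vx·vy/g, which makes a nice sanity check for students.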

Allain is exactly right. You don't have to be an artist to draw simple diagrams or a mathematician to evaluate an integral. All students accept, if grudgingly, that people might reasonably expect them to present an experiment orally in class.

Students don't have to be "writers", either, in order for teachers or employers to reasonably expect them to write an essay about physics or computer science. Even so, you might be surprised how many physics and computer science students complain if you ask them to write an essay. And if you dare expect them to spell words correctly, or to write prose somewhat more organized than Faulknerian stream of consciousness -- stand back.

(Rant aside, I have been quite lucky this May term. I've had my students write something for me every night, whether a review of something they've read or a reflection on the practices they are struggling to learn. There's been nary a complaint, and most of their writings have been organized, clear, and enjoyable to read.)

You don't have to be a physicist to like physics. I hope that most educated adults in the 21st century understand how the physical world works and appreciate the basic mechanisms of the universe. I dare to hope that many of them are curious enough to want to learn more.

You also don't have to be a computer programmer, let alone a computer scientist, to write a little code. Programs are simply another medium through which we can create and express ideas from across the spectrum of human thought. Hurray to Allain for being in the vanguard.

~~~~

Note. Long-time readers of this blog may recognize the ideas underlying Allain's approach to teaching introductory physics. He uses Matter and Interactions, a textbook and set of supporting materials created by Ruth Chabay and Bruce Sherwood. Six years ago, I wrote about some of Chabay's and Sherwood's ideas in an entry on creating a dialogue between science and CS and mentioned the textbook project in an entry on scientists who program. These entries were part of a report on my experiences attending SECANT, a 2007 NSF workshop on the intersection of science, computation, and education.

I'm glad to see that the Matter and Interactions project has come to fruition and begun to seep into university physics instruction. It sounds like a neat way to learn physics. It's also a nice way to pick up a little "stealth programming" along the way. I can imagine a few students creating VPython simulations and thinking, "Hey, I'd like to learn more about this programming thing..."


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

May 23, 2014 12:27 PM

Words Matter, Even in Code

Jim Weirich on dealing with failure in Ruby, via Avdi Grimm's blog:

(An aside, because I use exceptions to indicate failures, I almost always use the "fail" keyword rather than the "raise" keyword in Ruby. Fail and raise are synonyms so there is no difference except that "fail" more clearly communicates that the method has failed. The only time I use "raise" is when I am catching an exception and re-raising it, because here I'm *not* failing, but explicitly and purposefully raising an exception. This is a stylistic issue I follow, but I doubt many other people do).

Words matter: the right words, used at the right times. Weirich always cared about words, and it showed both in his code and in his teaching and writing.
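A minimal Ruby sketch of the convention Weirich describes; the method names here are hypothetical:

```ruby
# Signal a failure with `fail`: this method cannot do its job.
def withdraw(balance, amount)
  fail ArgumentError, "amount exceeds balance" if amount > balance
  balance - amount
end

# Re-raise with `raise`: this method is not failing, it is deliberately
# passing the exception along after noting it.
def audited_withdraw(balance, amount)
  withdraw(balance, amount)
rescue ArgumentError => e
  warn "withdraw failed: #{e.message}"
  raise
end
```

The two keywords behave identically; the choice communicates intent to the reader, which is exactly Weirich's point.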

The students in my agile class got to see my obsession with word choice and phrasing in class yesterday, when we worked through the story cards they had written for their project. I asked questions about many of their stories, trying to help them express what they intended as clearly as possible. Occasionally, I asked, "How will you write the test for this?" In their proposed test we found what they really meant and were able to rephrase the story.

Writing stories is hard, even for experienced programmers. My students are doing this for the first time, and they seemed to appreciate the need to spend time thinking about their stories and looking for ways to make them better. Of course, we've already discussed the importance of good names, and they've already experienced the way in which words matter in their own code.

Whenever I hear someone say that oral and written communication skills aren't all that important for becoming a good programmer, I try to help them see that they are, and why. Almost always, I find that they are not programmers and are just assuming that we techies spend all our time living inside mathematical computer languages. If they had ever written much software, they'd already know.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 19, 2014 4:09 PM

Becoming More Agile in Class, No. 2

After spending a couple of days becoming familiar with pair programming and unit tests, for Day 4 we moved on to the next step: refactoring. I had the students study the "before" code base from Martin Fowler's book, Refactoring, to identify several ways they thought we could improve it. Then they worked in pairs to implement their ideas. The code itself is pretty simple -- a small part of the information system for a movie rental store -- and let the students focus on practice with tools, running tests, and keeping the code base "green".

We all know Fowler's canonical definition of refactoring:

Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure.

... but it's easy to forget that refactoring really is about design. Programmers with limited experience in Java or OOP can bring only so much to the conversation about improving an OO program written in Java. We can refactor confidently and well only if we have a target in mind, one we understand and can envision in our code. Further, creating a good software design requires taste, and taste generally comes from experience.
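The mechanics are simple even when the design judgment is not. Here is a toy Extract Function step, sketched in Python rather than the Java of Fowler's movie-rental code, with a made-up pricing rule:

```python
# Before: the pricing rule is tangled into the statement builder.
def statement_before(rentals):
    total = 0.0
    for title, days in rentals:
        amount = 2.0 + max(0, days - 2) * 1.5   # pricing rule inline
        total += amount
    return f"Amount owed: {total}"

# After: the pricing rule has a name of its own. External behavior is
# unchanged, so any tests of the statement still pass.
def charge_for(days):
    return 2.0 + max(0, days - 2) * 1.5

def statement_after(rentals):
    total = sum(charge_for(days) for _, days in rentals)
    return f"Amount owed: {total}"
```

The design improvement is modest but real: the rule can now be tested, reused, and changed in one place.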

I noticed this lack of experience manifesting itself in the way my students tried to decompose the work of a refactoring into small, safe steps. When we struggle with decomposing a refactoring, we naturally struggle with choosing the next step to work on. Kent Beck calls this the challenge of succession. Ordering the steps of a refactoring is a more subtle challenge than many programmers realize at first.

This session reminded me why I like to teach design and refactoring in parallel: students come to appreciate new code smells while quickly learning how to refactor code into a better state. This way, programming skill grows alongside design skill.

On Day 5, we tried to put the skills from the three previous days all together, using an XP-style test-code-refactor-repeat cycle to implement a bit of code. Students worked on either the Checkout kata from Dave Thomas or a tic-tac-toe game based on a write-up by Gojko Adzic. No, these are not the most exciting programs to build, but as I told the class, this makes it possible for them to focus on the XP practices and habits of mind -- small steps, unit tests, and refactoring -- without having to work too hard to grok the domain.
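For readers who don't know it, the Checkout kata starts from a table of unit prices and grows toward multi-buy discounts. A first test-first slice might look like this in Python; the prices follow the kata's usual statement, but the function name is my own:

```python
import unittest

# First slice of the Checkout kata: sum unit prices, no discounts yet.
PRICES = {"A": 50, "B": 30, "C": 20, "D": 15}

def total(items):
    """Price a sequence of SKUs, e.g. "AB" -> 80."""
    return sum(PRICES[item] for item in items)

class CheckoutTest(unittest.TestCase):
    def test_empty_cart_costs_nothing(self):
        self.assertEqual(0, total(""))

    def test_items_sum_at_unit_price(self):
        self.assertEqual(80, total("AB"))
        self.assertEqual(115, total("ABCD"))
```

The next test (say, three A's for 130) forces the design to grow, which is where the small-steps discipline starts to bite.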

My initial impression as the students worked was that the exercise wasn't going as well as I had hoped it would. The step size was too big, the tests were too intrusive, and the refactoring was almost non-existent. Afterwards, though, I realized that programmers learning such foreign new habits must go through this phase. The best I can do is inject an occasional suggestion or question, hoping that it helps speed them along the curve.

This morning, I decided to have each student pair up with someone who had worked on the other task last time, flip a coin, and work on one of the same two tasks. This way, each pair had someone working on the same problem again and someone working on a new problem. I instructed them to start from scratch -- new code, new thoughts -- and have the person new to the task write the first test.

The goal was to create an asymmetry within each pair. Working on the same piece again would be valuable for the partner doing so, in the way playing finger exercises or etudes is valuable for a musician. At the same time, the other partner would see a new problem, bringing fresh eyes and thoughts to the exercise. This approach seems like a good one, as it varies the experience for both members of the pair. I know how important varying the environment can be for student learning, but I sometimes forget to do that often enough in class.

The results seemed so much better today. Students commented that they made better progress this time around, not because one of them had worked on the same problem last time, but because they were feeling more comfortable with the XP practices. One student said something to the effect of:

Last time, we were trying to work on the simplest or smallest piece of code we could write. This time, we were trying to work on the smallest piece of functionality we could add to the program.

That's a solid insight from an undergrad, even one with a couple of years' programming experience.

I also liked the conversation I was hearing among the pairs. They asked each other, "Should we do this feature next, or this other?" and said, "I'm not sure how we can test this." -- and then talked it over before proceeding. One pair had a wider disparity in OO experience, so the more experienced programmer was thinking out loud as he drove, taking into account comments from his partner as he coded.

This is a good sign. I'm under no illusion that they have all suddenly mastered ordering features, writing unit tests, or refactoring. We'll hit bumps over the next three weeks. But they all seem to be pretty comfortable with working together and collaborating on code. That's an essential skill on an agile team.

Next up: the Planning Game for a project that we'll work on for the rest of the class. They chose their own system to build, a cool little Android game app. That will change the dynamic a bit for customer collaboration and story writing, but I think that the increased desire to see a finished product will increase their motivation to master the skills and practice. My job as part-time customer, part-time coach will require extra vigilance to keep them on track.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 14, 2014 4:52 PM

Becoming More Agile in Class

Days 2 and 3 of my Agile Software Development May term course are now in the books. This year, I decided to move as quickly as we could in the lab. Yesterday, the students did their first pair-programming session, working for a little over an hour on one of the industry standard exercises, Conway's Game of Life. Today, they did their first pair programming with unit tests, using Bill Wake's Test-First Challenge to implement the beginnings of a simple data model for spreadsheets.

I always enjoy watching students write code and interacting with them while they do it. The thing that jumped out to me yesterday was just how much code some students write before they ever think about compiling it, let alone testing it. Another was how some students manage to get through a year of programming-heavy CS courses without mastering their basic tools: text editor, compiler, and language. It's hard to work confidently when your tools feel uncomfortable in your hands.

There's not much I can do to help students develop greater facility with their tools than give them lots of practice, and we will do that. However, writing too much code before testing even its syntactic correctness is a matter of mindset and habit. So I opened today's session with a brief discussion, and then showed them what I meant in the form of a short bit of code I wrote yesterday while watching them. Then I turned them loose with Wake's spreadsheet tests and encouragement to help each other write simple code, compile frequently even with short snippets, and run the tests as often as their code compiles.

Today, we had an odd number of students in class, something that's likely to be our standard condition this term, so I paired with one of the students on a spreadsheet. He wanted to work in Haskell, and I was game. I refreshed my Haskell memories a bit and even contributed some helpful bits of code, in addition to meta-contributions on XP style.

The student is relatively new to the language, so he's still developing the Haskell in his mind. There were times we struggled because we were thinking of the problem in a stateful way. As you surely know, that's not the best way to work in Haskell. Our solutions were not always elegant, but we did our best to get in the rhythm of writing tests, writing code, and running them.

As the period was coming to an end, our code had just passed a test that had been challenging us. Almost simultaneously, a student in another pair thrust his arms in the air as his pair's code passed a challenging test, too, much deeper in the suite. We all decided to declare victory and move on. We'll all get better with practice.

Next up: refactoring, and tools to support it and automated testing.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 12, 2014 5:01 PM

Teaching a Compressed Class

May term started today, so my agile course is off the ground. We will meet for 130 minutes every morning through June 6, excepting only Memorial Day. That's a lot of time spent together in a short period of time.

As I told the students today, each class is almost a week's worth of class in a regular semester. This means committing a fair amount of time out of class every day, on the order of 5-7 hours. There isn't a lot of time for our brains to percolate on the course content. We'll be moving steadily for four weeks.

This makes May term unsuitable, in my mind at least, for a number of courses. I would never teach CS 1 in May term. Students are brand new to the discipline, to programming, and usually to their first programming language. They need time for their brains to percolate. I don't think I'd want to teach upper-division CS courses in May term if they have a lot of content, either. Our brains don't always absorb a lot of information quickly in a short amount of time, so letting it sink in more slowly, helped by practice and repetition, seems best.

My agile course is, on the other hand, almost custom made for a compressed semester. There isn't a lot of essential "content". The idea is straightforward. I don't expect students to memorize lists of practices, or the rules of tools. I expect them to do the practices. Doing them daily, in extended chunks of time, with immediate feedback, is much better than taking a day off between practice sessions.

Our goal is, in part, to learn new habits and then reflect on how well they fit, on where they might help us most and where they might get in the way. We'll have better success learning new habits in the compressed term than we would with breaks. And, as much as I want students to work daily during a fifteen-week semester to build habits, it usually just doesn't happen. Even when the students buy in and intend to work that way, life's other demands get in the way. Failing with good intentions is still failing, and sometimes feels worse than failing without them.

So we begin. Tomorrow we start working on our first practice, a new way of working with skills to be learned through repetition every day the rest of the semester. Wish us luck.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 09, 2014 4:11 PM

Transition

Spring semester ends today. May term begins Monday. I haven't taught during the summer since 2010, when I offered a course on agile software development. I'm reprising that course this month, with nine hardy souls signed on for the mission. That means no break for now, just a new start. I like those.

I'm sure I could blog for hours on the thoughts running through my head for the course. They go beyond the readings we did last time and the project we built, though all that is in the mix, too.

For now, though, three passages that made the highlights of my recent reading. All fit nicely with the theme of college days and transition.

~~~~

First, this reminder from John Graham, a "self-made merchant" circa 1900, in a letter to his son at college.

Adam invented all the different ways in which a young man can make a fool of himself, and the college yell at the end of them is just a frill that doesn't change essentials.

College is a place all its own, but it's just a place. In many ways, it's just the place where young people spend a few years while they are young.

~~~~

Next, a writer tells a story of studying with Annie Dillard in college. During their last session together, she told the class:

If I've done my job, you won't be happy with anything you write for the next 10 years. It's not because you won't be writing well, but because I've raised your standards for yourself.

Whatever "content" we teach our students, raising their standards and goals is sometimes the most important thing we do. "Don't compare yourselves to each other", she says. Compare yourselves to the best writers. "Shoot there." This advice works just as well for our students, whether they are becoming software developers or computer scientists. (Most of our students end up being a little bit of both.)

It's better to aim at the standard set by Ward Cunningham or Alan Kay than at the best we can imagine ourselves doing right now.

~~~~

Now that I think about it, this last one has nothing to do with college or transitions. But it made me laugh, and after a long academic year, with no break imminent, a good laugh is worth something.

What do you call a rigorous demonstration that a statement is true?
  1. If "proof", then you're a mathematician.
  2. If "experiment", then you're a physicist.
  3. If you have no word for this concept, then you're an economist.

This is the first of several items in The Mathematical Dialect Quiz at Math with Bad Drawings. It adds a couple of new twists to the tongue-in-cheek differences among mathematicians, computer scientists, and engineers. With bad drawings.

Back to work.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 05, 2014 4:35 PM

Motivated by Teaching Undergrads

Recently, a gentleman named Seth Roberts passed away. I didn't know Roberts and was not familiar with his work. However, several people I respect commented on his life and career, so I took a look at one colleague's reminiscence. Roberts was an interesting fellow who didn't do things the usual way for a research academic. This passage stood out:

Seth's academic career was unusual. He shot through college and graduate school to a tenure-track job at a top university, then continued to do publication-quality research for several years until receiving tenure. At that point he was not a superstar but I think he was still considered a respected member of the mainstream academic community. But during the years that followed, Seth lost interest in that thread of research (you can see this by looking at the dates of most of his highly-cited papers). He told me once that his shift was motivated by teaching introductory undergraduate psychology: the students, he said, were interested in things that would affect their lives, and, compared to that, the kind of research that leads to a productive academic career did not seem so appealing.

That last sentence explains, I think, why so many computer science faculty at schools that are not research-intensive end up falling away from traditional research and publishing. When you come into contact with a lot of undergrads, you may well find yourself caring more deeply about things that will affect their lives in a more direct way. Pushing deeper down a narrow theoretical path, or developing a novel framework for file system management that most people will never use, may not seem like the best way to use your time.

My interests have certainly shifted over the years. I found myself interested in software development, in particular tools and practices that students can use to make software more reliably and teaching practices that would help students learn more effectively. Fortunately, I've always loved programming qua programming, and this has allowed me to teach different programming styles with an eye on how learning them will help my students become better programmers. Heck, I was even able to stick with it long enough that functional programming became popular in industry! I've also been lucky that my interest in languages and compilers has been of interest to students and employers over the last few years.

In any event, I can certainly understand how Roberts diverged from the ordained path and turned his interest to other things. One challenge of leaving the ordained path is to retain the mindset of a scientist, seeking out opportunities to evaluate ideas and to disseminate the ones that appear to hold up. You don't need to publish in the best journals to disseminate good ideas widely. That may not even be the best route.

Another challenge is to find a community of like-minded people in which to work. An open, inquisitive community is a place to find new ideas, a place to try ideas out before investing too much in a doomed one, and a place to find the colleagues most of us need to stay sane while exploring what interests us. The software and CS worlds have helped create the technology that makes it possible to grow such communities in new ways, and our own technology now supports some amazing communities of software and CS people. It is a good time to be an academic or developer.

I've enjoyed reading about Roberts' career and learning about what seems to have been one of academia's unique individuals. And I certainly understand how teaching introductory undergrads might motivate a different worldview for an academic. It's good to be reminded that it's okay to care about the things that will affect the lives of our students now rather than later.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

March 29, 2014 10:06 AM

Sooner

That is the advice I find myself giving to students again and again this semester: Sooner.

Review the material we cover in class sooner.

Ask questions sooner.

Think about the homework problems sooner.

Clarify the requirements sooner.

Write code sooner.

Test your code sooner.

Submit a working version of your homework sooner. You can submit a more complete version later.

A lot of this advice boils down to the more general Get feedback sooner. In many ways, it is a dual of the advice, Take small steps. If you take small steps, you can ask, clarify, write, and test sooner. One of the most reliable ways to do those things sooner is to take small steps.

If you are struggling to get things done, give sooner a try. Rather than fail in a familiar way, you might succeed in an unfamiliar way. When you do, you probably won't want to go back to the old way again.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

March 12, 2014 3:55 PM

Not Content With Content

Last week, the Chronicle of Higher Ed ran an article on a new joint major at Stanford combining computer science and the humanities.

[Students] might compose music or write a short story and translate those works, through code, into something they can share on the web.

"For students it seems perfectly natural to have an interest in coding," [the program's director] said. "In one sense these fields might feel like they're far apart, but they're getting closer and closer."

The program works in both directions, by also engaging CS students in the societal issues created by ubiquitous networks and computing power.

We are doing something similar at my university. A few years ago, several departments began to collaborate on a multidisciplinary program called Interactive Digital Studies which went live in 2012. In the IDS program, students complete a common core of courses from the Communication Studies department and then take "bundles" of coursework involving digital technology from at least two different disciplines. These areas of emphasis enable students to explore the interaction of computing with various topics in media, the humanities, and culture.

Like Stanford's new major, most of the coursework is designed to work at the intersection of disciplines, rather than pursuing disciplines independently, "in parallel".

The initial version of the computation bundle consists of an odd mix of application tools and opportunities to write programs. Now that the program is in place, we are finding that students and faculty alike desire more depth of understanding about programming and development. We are in the process of re-designing the bundle to prepare students to work in a world where so many ideas become web sites or apps, and in which data analytics plays an important role in understanding what people do.

Both our IDS program and Stanford's new major focus on something that we are seeing increasingly at universities these days: the intersections of digital technology and other disciplines, in particular the humanities. Computational tools make it possible for everyone to create more kinds of things, but only if people learn how to use new tools and think about their work in new ways.

Consider this passage by Jim O'Loughlin, a UNI English professor, from a recent position statement on the "digital turn" of the humanities:

We are increasingly unlikely to find writers who only provide content when the tools for photography, videography and digital design can all be found on our laptops or even on our phones. It is not simply that writers will need to do more. Writers will want to do more, because with a modest amount of effort they can be their own designers, photographers, publishers or even programmers.

Writers don't have to settle for producing "content" and then relying heavily on others to help bring the content to an audience. New tools enable writers to take greater control of putting their ideas before an audience. But...

... only if we [writers] are willing to think seriously not only about our ideas but about what tools we can use to bring our ideas to an audience.

More tools are within the reach of more people now than ever before. Computing makes that possible, not only for writers, but also for musicians and teachers and social scientists.

Going further, computer programming makes it possible to modify existing tools and to create new tools when the old ones are not sufficient. Writers, musicians, teachers, and social scientists may not want to program at that level, but they can participate in the process.

The critical link is preparation. This digital turn empowers only those who are prepared to think in new ways and to wield a new set of tools. Programs like our IDS major and Stanford's new joint major are among the many efforts hoping to spread the opportunities available now to a larger and more diverse set of people.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

March 11, 2014 4:52 PM

Change The Battle From Arguments To Tests

In his recent article on the future of the news business, Marc Andreessen has a great passage in his section on ways for the journalism industry to move forward:

Experimentation: You may not have all the right answers up front, but running many experiments changes the battle for the right way forward from arguments to tests. You get data, which leads to correctness and ultimately finding the right answers.

I love that clause: "running many experiments changes the battle for the right way forward from arguments to tests".

While programming, it's easy to get caught up in what we know about the code we have just written and assume that this somehow empowers us to declare sweeping truths about what to do next.

When students are first learning to program, they often fall into this trap -- despite the fact that they don't know much at all. From other courses, though, they are used to thinking for a bit, drawing some conclusions, and then expressing strongly-held opinions. Why not do it with their code, too?

No matter who we are, whenever we do this, sometimes we are right, and sometimes, we are wrong. Why leave it to chance? Run a simple little experiment. Write a snippet of code that implements our idea, and run it. See what happens.

Programs let us test our ideas, even the ideas we have about the program we are writing. Why settle for abstract assertions when we can do better? In the end, even well-reasoned assertions are so much hot air. I learned this from Ward Cunningham: It's all talk until the tests run.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

March 08, 2014 10:18 AM

Sometimes a Fantasy

This week I saw a link to The Turing School of Software & Design, "a seven-month, full-time program for people who want to become professional developers". It reminded me of Neumont University, a ten-year-old school that offers a B.S. degree program in computer science that students can complete in two and a half years.

While riding the bike, I occasionally fantasize about doing something like this. With the economics of universities changing so quickly [ 1 | 2 ], there is an opportunity for a new kind of higher education. And there's something appealing about being able to work closely with a cadre of motivated students on the full spectrum of computer science and software development.

This could be an accelerated form of traditional CS instruction, without the distractions of other things, or it could be something different. Traditional university courses are pretty confining. "This course is about algorithms. That one is about programming languages." It would be fun to run a studio in which students serve as apprentices making real stuff, all of us learning as we go along.

A few years ago, one of our ChiliPLoP hot topic groups conducted a greenfield thought experiment to design an undergrad CS program outside of the constraints of any existing university structure. Student advancement was based on demonstrating professional competencies, not completing packaged courses. It was such an appealing idea! Of course, there was a lot of hard work to be done working out the details.

My view of university is still romantic, though. I like the idea of students engaging the great ideas of humanity that lie outside their major. These days, I think it's conceivable to include the humanities and other disciplines in a new kind of CS education. In a recent blog entry, Hollis Robbins floats the idea of Home College for the first year of a liberal arts education. The premise is that there are "thousands of qualified, trained, energetic, and underemployed Ph.D.s [...] struggling to find stable teaching jobs". Hiring a well-rounded tutor could be a lot less expensive than a year at a private college, and more lucrative for the tutor than adjuncting.

Maybe a new educational venture could offer more than targeted professional development in computing or software. Include a couple of humanities profs, maybe a social scientist, and it could offer a more complete undergraduate education -- one that is economical both in time and money.

But the core of my dream is going broad and deep in CS without the baggage of a university. Sometimes a fantasy is all you need. Other times...


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

March 07, 2014 2:24 PM

Take Small Steps

If a CS major learns only one habit of professional practice in four years, it should be:

Take small steps.

A corollary:

If things aren't working, take smaller steps.

I once heard Kent Beck say something similar, in the context of TDD and XP. When my colleague Mark Jacobson works with students who are struggling, he uses a similar mantra: Solve a simpler problem. As Dr. Nick notes, students and professionals alike should scale the step size according to their level of knowledge or their confidence about the problem.

When I tweeted these thoughts yesterday, two pieces of related advice came in:

  • Slow down. -- Big steps are usually a sign of trying to hurry. Beginners are especially prone to this.

  • Lemma: Keep moving. -- Small steps keep us moving more reliably. We can always fool ourselves into believing that the next big step is all we need...

Of course, I've always been a fan of baby steps and unusual connections to agile software development. They apply quite nicely to learners in many settings.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

March 01, 2014 11:35 AM

A Few Old Passages

I was looking over a couple of files of old notes and found several quotes that I still like, usually from articles I enjoyed as well. They haven't found their way into a blog entry yet, but they deserve to see the light of day.

Evidence, Please!

From a short note on the tendency even among scientists to believe unsubstantiated claims, both in and out of the professional context:

It's hard work, but I suspect the real challenge will lie in persuading working programmers to say "evidence, please" more often.

More programmers and computer scientists are trying to collect and understand data these days, but I'm not sure we've made much headway in getting programmers to ask for evidence.

Sometimes, Code Before Math

From a discussion of The Expectation Maximization Algorithm:

The code is a lot simpler to understand than the math, I think.

I often understand the language of code more quickly than the language of math. Reading, or even writing, a program sometimes helps me understand a new idea better than reading the math. Theory is, however, great for helping me to pin down what I have learned more formally.

Grin, Wave, Nod

From Iteration Inside and Out, a review of the many ways we loop over stuff in programs:

Right now, the Rubyists are grinning, the Smalltalkers are furiously waving their hands in the air to get the teacher's attention, and the Lispers are just nodding smugly in the back row (all as usual).

As a Big Fan of all three languages, I am occasionally conflicted. Grin? Wave? Nod? Look like the court jester by doing all three simultaneously?


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

February 21, 2014 3:35 PM

Sticking with a Good Idea

My algorithms students and I recently considered the classic problem of finding the closest pair of points in a set. Many of them were able to produce a typical brute-force approach, such as:

    minimum ← ∞
    for i ← 1 to n do
        for j ← i+1 to n do
            distance ← sqrt((x[i] - x[j])² + (y[i] - y[j])²)
            if distance < minimum then
               minimum ← distance
               first   ← i
               second  ← j
    return (first, second)

Alas, this is an O(n²) process, so we considered whether we might do better with a divide-and-conquer approach. It did not look promising, though. Divide-and-conquer doesn't let us solve the sub-problems independently. What if the closest pair straddles two partitions?

This is a common theme in computing, and problem solving more generally. We try a technique only to find that it doesn't quite work. Something doesn't fit, or a feature of the domain violates a requirement of the technique. It's tempting in such cases to give up and settle for something less.

Experienced problem solvers know not to give up too quickly. Many of the great advances in computing came under conditions just like this. Consider Leonard Kleinrock and the theory of packet switching.

In a Computing Conversations podcast published last year, Kleinrock talks about his Ph.D. research. He was working on the problem of how to support a lot of bursty network traffic on a shared connection. (You can read a summary of the podcast in an IEEE Computer column also published last year.)

His wonderful idea: apply the technique of time sharing from multi-user operating systems. The system could break all messages into "packets" of a fixed size, let messages take turns on the shared line, then reassemble each message on the receiving end. This would give every message a chance to move without making short messages wait too long behind large ones.

Thus was born the idea of packet switching. But there was a problem. Kleinrock says:

I set up this mathematical model and found it was analytically intractable. I had two choices: give up and find another problem to work on, or make an assumption that would allow me to move forward. So I introduced a mathematical assumption that cracked the problem wide open.

His "independence assumption" made it possible for him to complete his analysis and optimize the design of a packet-switching network. But an important question remained: Was his simplifying assumption too big a cheat? Did it skew the theoretical results in such a way that his model was no longer a reasonable approximation of how networks would behave in the real world?

Again, Kleinrock didn't give up. He wrote a program instead.

I had to write a program to simulate these networks with and without the assumption. ... I simulated many networks on the TX-2 computer at Lincoln Laboratories. I spent four months writing the simulation program. It was a 2,500-line assembly language program, and I wrote it all before debugging a single line of it. I knew if I didn't get that simulation right, I wouldn't get my dissertation.

High-stakes programming! In the end, Kleinrock was able to demonstrate that his analytical model was sufficiently close to real-world behavior that his design would work. Every one of us reaps the benefit of his persistence every day.

Sometimes, a good idea poses obstacles of its own. We should not let those obstacles beat us without a fight. Often, we just have to find a way to make it work.

This lesson applies quite nicely to using divide-and-conquer on the closest pairs problem. In this case, we don't make a simplifying assumption; we solve the sub-problem created by our approach:

After finding a candidate for the closest pair, we check to see if there is a closer pair straddling our partitions. The distance between the candidate points constrains the area we have to consider quite a bit, which makes the post-processing step feasible. The result is an O(n log n) algorithm that improves significantly on brute force.

This algorithm, like packet switching, comes from sticking with a good idea and finding a way to make it work. This is a lesson every computer science student and novice programmer needs to learn.

There is a complementary lesson to be learned, of course: knowing when to give up on an idea and move on to something else. Experience helps us tell the two situations apart, though never with perfect accuracy. Sometimes, we just have to follow an idea long enough to know when it's time to move on.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

February 19, 2014 4:12 PM

Teaching for the Perplexed and the Traumatized

On the need for empathy when writing about math for the perplexed and the traumatized, Steven Strogatz says:

You have to help them love the questions.

Teachers learn this eventually. If students love the questions, they will do an amazing amount of work searching for answers.

Strogatz is writing about writing, but everything he says applies to teaching as well, especially teaching undergraduates and non-majors. If you teach only grad courses in a specialty area, you may be able to rely on the students to provide their own curiosity and energy. Otherwise having empathy, making connections, and providing Aha! moments are a big part of being successful in the classroom. Stories trump formal notation.

This semester, I've been trying a particular angle on achieving this trifecta of teaching goodness: I try to open every class session with a game or puzzle that the students might care about. From there, we delve into the details of algorithms and theoretical analysis. I plan to write more on this soon.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

February 16, 2014 10:48 AM

Experience Happens When You Keep Showing Up

You know what they say about good design coming from experience, and experience coming from bad design? That phenomenon is true of most things non-trivial. Here's an example from men's college basketball.

The University of Florida has a veteran team. The University of Kentucky has a young team. Florida's players are very good, but not generally considered to be in the same class as Kentucky's highly-regarded players. Yesterday, the two teams played a close game on Kentucky's home floor.

Once they fell behind by five with less than two minutes remaining, Kentucky players panicked. Florida players didn't. Why not? "Well, we have a veteran group here that's panicked before -- that's been in this situation and not handled it well," [Patric] Young said.

How did Florida's players maintain their composure at the end of a tight game on the road against another good team? They had been in that same situation three times before, and failed. They didn't panic this time in large part because they had panicked before and learned from those experiences.

Kentucky's starters have played a total of 124 college games. Florida's four seniors have combined to play 491. That's a lot of experience -- a lot of opportunities to panic, to guess wrong, to underestimate a situation, or otherwise to come up short. And a lot of opportunities to learn.

The young players at Kentucky hurt today. As the author of the linked game report notes, Florida's players have hurt like that before, for coming up short in much the same way, "and they used that pain to get better".

It turns out that composure comes from experience, and experience comes from lack of composure.

As a teacher, I try to convince students not to shy away from the error messages their compiler gives them, or from the convoluted code they eventually sneak past it. Those are the experiences they'll eventually look back to when they are capable, confident programmers. They just need the opportunity to learn.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

February 14, 2014 3:07 PM

Do Things That Help You Become Less Wrong

My students and I debriefed a programming assignment in class yesterday. In the middle of class, I said, "Now for a big question: How do you know your code is correct?"

There were a lot of knowing smiles and a lot of nervous laughter. Most of them don't.

Sure, they ran a few test cases, but after making additions and changes to the code, some were just happy that it still ran. The output looked reasonable, so it must be done. I suggested that they might want to think more about testing.

This morning I read a great quote from Nathan Marz that I will share with my students:

Feedback is everything. Most of the time you're wrong, and feedback is the only way to realize your mistakes and help you become less wrong. This applies to everything.

Most of the time you're wrong. Do things that help you become less wrong. Getting feedback, early and often, is one of the best ways to do this.

A comment by a student earlier in the period foreshadowed our discussion of testing, which made me feel even better. In response to the retrospective question, "What design or programming choices went well for you?", he answered, "unit tests".

That set me up quite nicely to segue from manual testing into automated testing. If you aren't testing your code early and often, then manual testing is a huge improvement. But you can do even better by pushing those test cases into a form that can be executed quickly and easily, with the code doing the tedious work of verifying the output.

My students are writing code in many different languages, so I showed them testing frameworks in Ruby, Java, and Python. The code looks simple, even with the boilerplate imposed by the frameworks.
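In Python, for example, a minimal test case in the built-in unittest framework might look like this (the function under test, gcd, is just a stand-in for the students' own code):

```python
import unittest

# A stand-in for the code under test.
def gcd(a, b):
    """Greatest common divisor by Euclid's algorithm."""
    while b:
        a, b = b, a % b
    return a

class TestGcd(unittest.TestCase):
    # Each test method states an expectation the code must meet.
    def test_common_divisor(self):
        self.assertEqual(gcd(12, 18), 6)

    def test_coprime_numbers(self):
        self.assertEqual(gcd(7, 9), 1)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Once the expectations live in code, re-running them after every change costs almost nothing, and the computer does the tedious work of verifying the output.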

The big challenges in getting students to write unit tests are the same as for getting professionals to write them: lack of time, and misplaced confidence. I hope that a few of my students will see that the real time sink is debugging bad code and that a fear of changing code is a lack of confidence. The best way to be confident is to have evidence.

The student who extolled unit tests works in Racket and so has test cases in RackUnit. He set me up nicely for a future discussion, too, when he admitted out loud that he wrote his tests first. This time, it was I who smiled knowingly.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 13, 2014 4:25 PM

Politics in a CS Textbook

Earlier this week, I was looking for inspiration for an exam problem in my algorithms course. I started thumbing through a text I briefly considered considering for adoption this semester. (I ended up opting for no text without considering any of them very deeply.)

No Politics warning sign -- handmade

The first problem I read was written with a political spin, at the expense of one of the US political parties. I was aghast. I tweeted:

This textbook uses an end-of-chapter exercise to express an opinion about a political party. I will no longer consider it for adoption.

I don't care which party is made light of or demeaned. I'm no longer interested. I don't want a political point unrelated to my course to interfere with my students' learning -- or with the relationship we are building.

In general, I don't want students to think of me in a partisan way, whether the topic is politics, religion, or anything else outside the scope of computer science. It's not my job as a CS instructor to make my students uncomfortable.

That isn't to say that I want students to think of me as bland or one-dimensional. They know me to be a sports fan and a chess player. They know I like to read books and to ride my bike. I even let them know that I'm a Billy Joel fan. All of these give me color and, while they may disagree with my tastes, none of these are likely to create distance between us.

Nor do I want them to think I have no views on important public issues of the day. I have political opinions and actually like to discuss the state of the country and world. Those discussions simply don't belong in a course on algorithms or compiler construction. In the context of a course, politics and religion are two of many unnecessary distractions.

In the end, I did use the offensive problem, but only as inspiration. I wrote a more generic problem, the sort you expect to see in an algorithms course. It included all the relevant features of the original. My problem gave the students a chance to apply what they have learned in class without any noise. The only way this problem could offend students was by forcing them to demonstrate that they are not yet ready to solve such a problem. Alas, that is an offense that every exam risks giving.

... and yes, I still owe you a write-up on possible changes to the undergraduate algorithms canon, promised in the entry linked above. I have not forgotten! Designing a new course for spring semester is a time-constrained operation.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

February 12, 2014 11:49 AM

They Are Always Watching

Rand's Inbox Reboot didn't do much for me on the process and tool side of things, perhaps other than rousing a sense of familiarity. Been there. The part that stood out for me was when he talked about the ultimate effect of not being in control of the deluge:

As a leader, you define your reputation all the time. You'd like to think that you could choose the moments that define your reputation, but you don't. They are always watching and learning. They are always updating their model regarding who you are and how you lead with each observable action, no matter how large or small.

He was speaking of technical management, so I immediately thought about my time as department head and how true this passage is.

But it is also true -- crucially true -- of the relationship teachers have with their students. There are few individual moments that define how a teacher is seen by his or her students. Students are always watching and learning. They infer things about the discipline and about the teacher from every action, from every interaction. What they learn in all the little moments comes to define you in their minds.

Of course, this is even more true of parents and children. To the extent I have any regrets as a parent, it is that I sometimes overestimated the effect of the Big Moments and underestimated the effect of all those Little Moments, the ones that flow by without pomp or recognition. They probably shaped my daughters, and my daughters' perception of me, more than any particular action I took.

Start with a good set of values, then create a way of living and working that puts these values at the forefront of everything you do. Your colleagues, your students, and your children are always watching.


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Personal, Teaching and Learning

February 07, 2014 3:25 PM

Technology and Change at the University, Circa 1984

cover of David Lodge's Small World, Wendy Edelson

As I mentioned last time, I am re-reading David Lodge's novel Small World. Early on in the story, archetypal academic star and English professor Morris Zapp delivers a eulogy to the research university of the past:

The day of the individual campus has passed. It belongs to an obsolete technology -- railways and the printing press. I mean just look at this campus -- it epitomizes the whole thing: the heavy industry of the mind.

... It's huge, heavy, monolithic. It weighs about a billion tons. You can feel the weight of those buildings, pressing down the earth. Look at the Library -- built like a huge warehouse. The whole place says, "We have learning stored here; if you want it, you have to come inside and get it." Well, that doesn't apply any more.

... Information is much more portable in the modern world than it used to be. So are people. Ergo, it's no longer necessary to hoard your information in one building, or keep your top scholars corralled in one campus.

Small World was published in 1984. The technologies that had revolutionized the universities of the day were the copy machine, universal access to direct-dial telephone, and easy access to jet travel. Researchers no longer needed to be together in one place all the time; they could collaborate at a distance, bridging the distances for short periods of collocation at academic conferences.

Then came the world wide web and ubiquitous access to the Internet. Telephone and Xerox machines were quickly overshadowed by a network of machines that could perform the functions of both phone and copier, and so much more. In particular, they freed information even further from being bound to place. Take out the dated references to Xerox, and most of what Zapp has to say about universities could be spoken today:

And you don't have to grub about in library stacks for data: any book or article that sounds interesting they have Xeroxed and read at home. Or on the plane going to the next conference. I work mostly at home or on planes these days. I seldom go into the university except to teach my courses.

Now, the web is beginning to unbundle even the teaching function from the physical plant of a university and the physical presence of a teacher. One of our professors used to routinely spend hours hanging out with students on Facebook and Google+, answering questions and sharing the sort of bonhomie that ordinarily happens after class in the hallways or the student union. -10 degrees outside? No problem.

Clay Shirky recently wrote that higher education's Golden Age has ended -- long ago, actually: "Then the 1970s happened." Most of Shirky's article deals with the way the economics of universities had changed. Technology is only a part of that picture. I like how Lodge's novel shows an awareness even in the early 1980s that technology was also changing the culture of scholarship and consequently scholarship's preeminent institution, the university.

Of course, back then old Morris Zapp still had to go to campus to teach his courses. Now we are all wondering how much longer that will be true for the majority of students and professors.

~~~~

IMAGE. The cover from the first edition of David Lodge's Small World, by Wendy Edelson. http://en.wikipedia.org/wiki/File:SmallWorldNovel.jpg


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 02, 2014 5:19 PM

Things That Make Me Sigh

In a recent article unrelated to modern technology or the so-called STEM crisis, a journalist writes:

Apart from mathematics, which demands a high IQ, and science, which requires a distinct aptitude, the only thing that normal undergraduate schooling prepares a person for is... more schooling.

Sigh.

On the one hand, this seems to presume that one need neither a high IQ nor any particular aptitude to excel in any number of non-math and science disciplines.

On the other, it seems to say that if one does not have the requisite characteristics, which are limited to a lucky few, one need not bother with computer science, math or science. Best become a writer or go into public service, I guess.

I actually think that the author is being self-deprecating, at least in part, and that I'm probably reading too much into one sentence. It's really intended as a dismissive comment on our education system, the most effective outcome of which often seems to be students who are really good at school.

Unfortunately, the attitude expressed about math and science is far too prevalent, even in our universities. It demeans our non-scientists as well as our scientists and mathematicians. It also makes it even harder to convince younger students that, with a little work and practice, they can achieve satisfying lives and careers in technical disciplines.

Like I said, sigh.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 31, 2014 3:13 PM

"What Should It Be?"

When asked to design and implement a program, beginning programmers often aren't sure what data type or data structure to use for a particular value. Should they use an array or a list? Or they've decided to use a record but can't decide exactly what fields to include, or names to give them.

"What should it be?", they ask.

I often have a particular implementation in mind, based on what we've been studying or on my own experience as a programmer, but I prefer not to tell them what to do. This is a great opportunity for them to learn to think about design.

Instead, I ask questions. "What have you considered?" "Do you think one or the other is better?" "Why?"

We discuss how so often there is no "right" answer. There are merely trade-offs. They have to choose. This is a design decision.

But, in making this decision, there's another opportunity to learn something about design. They don't have to commit now and forever to an implementation before proceeding with the rest of their program. Because the rest of the program shouldn't know about their decision anyway!

They should make an object that encapsulates the choice. They are then able to start building the rest of the program without fear that it depends on the details of their design choice. The rest of the program will interact with the object in terms of what the object means in the program, not in terms of how it is implemented. Later, if they change their minds, they will be able to change the implementation details without disturbing the rest of the code.
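A small sketch of the idea in Python, with a hypothetical WaitList class standing in for whatever the student is designing. The rest of the program talks to the object in terms of what it means; only the class knows it currently uses a plain list:

```python
# The design decision -- list? set? something else? -- is hidden
# inside the object. Callers never see it.
class WaitList:
    def __init__(self):
        self._entries = []          # the choice lives here, and only here

    def add(self, name):
        # No duplicates, preserving arrival order.
        if name not in self._entries:
            self._entries.append(name)

    def next_up(self):
        return self._entries[0] if self._entries else None

    def __len__(self):
        return len(self._entries)

w = WaitList()
w.add("Ann"); w.add("Ben"); w.add("Ann")
print(len(w), w.next_up())   # → 2 Ann
```

If the student later decides a different structure is better, the change stays inside the class, and the rest of the program never notices.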

Yes this is basic stuff, but beginners often struggle with basic stuff. They've learned about ADTs or OOP, and they can talk abstractly about abstraction. But when it comes time to write code, indecision descends upon them. They are afraid of messing up.

If I can help allay their fears of proceeding, then I've contributed something to their work that day. I even suggest that writing the rest of the program might help them figure out which alternative is better. I like to listen to my code, even if that idea seems strange or absurd to them. Some day soon, it may not.

In any case, they have the beginnings of a program, and perhaps a better idea of what design is all about.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

January 27, 2014 11:39 AM

The Polymath as Intellectual Polygamist

Carl Djerassi, quoted in The Last Days of the Polymath:

Nowadays people [who] are called polymaths are dabblers -- are dabblers in many different areas. I aspire to be an intellectual polygamist. And I deliberately use that metaphor to provoke with its sexual allusion and to point out the real difference to me between polygamy and promiscuity.

On this view, a dilettante is merely promiscuous, making no real commitment to any love interest. A polymath has many great loves, and loves them all deeply, if not equally.

We tend to look down on dilettantes, but they can perform a useful service. Sometimes, making a connection between two ideas at the right time and in the right place can help spur someone else to "go deep" with the idea. Even when that doesn't happen, dabbling can bring great personal joy and provide more substantial entertainment than a lot of pop culture.

Academics are among the people these days with a well-defined social opportunity to explore at least two areas deeply and seriously: their chosen discipline and teaching. This is perhaps the most compelling reason to desire a life in academia. It even offers a freedom to branch out into new areas later in one's career that is not so easily available to people who work in industry.

These days, it's hard to be a polymath even inside one's own discipline. To know all sub-areas of computer science, say, as well as the experts in those sub-areas is a daunting challenge. I think back to the effort my fellow students and I put in over the years that enabled us to take the Ph.D. qualifying exams in CS. I did quite well across the board, but even then I didn't understand operating systems or programming languages as well as experts in those areas. Many years later, despite continued reading and programming, the gap has only grown.

I share the vague sense of loss, expressed by the author of the article linked to above, of a time when one human could master multiple areas of discourse and make fundamental advances to several. We are certainly better off for collectively understanding the world so much better, but the result is a blow to a certain sort of individual mind and spirit.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

January 26, 2014 3:05 PM

One Reason We Need Computer Programs

Code bridges the gap between theory and data. From A few thoughts on code review of scientific code:

... there is a gulf of unknown size between the theory and the data. Code is what bridges that gap, and specifies how edge cases, weird features of the data, and unknown unknowns are handled or ignored.

I learned this lesson the hard way as a novice programmer. Other activities, such as writing and doing math, exhibit the same characteristic, but it wasn't until I started learning to program that the gap between theory and data really challenged me.
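A tiny illustration of the gap, with a hypothetical mean function: the theory says "sum divided by n", but the code has to decide what to do about missing values and empty input, cases the formula never mentions.

```python
# Theory: mean = sum(x) / n.  Data: real inputs have edge cases.
def mean(values):
    cleaned = [v for v in values if v is not None]   # missing entries
    if not cleaned:
        return None        # the theory is silent about n = 0
    return sum(cleaned) / len(cleaned)

print(mean([1, 2, None, 3]))   # → 2.0
```

Every decision in those few lines -- drop the Nones? return None for empty input, or raise? -- is the code specifying how the gap gets bridged.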

Since learning to program myself, I have observed hundreds of CS students encounter this gap. To their credit, they usually buckle down, work hard, and close the gap. Of course, we have to close the gap for every new problem we try to solve. The challenge doesn't go away; it simply becomes more manageable as we become better programmers.

In the passage above, Titus Brown is talking to his fellow scientists in biology and chemistry. I imagine that they encounter the gap between theory and data in a new and visceral way when they move into computational science. Programming has that power to change how we think.

There is an element of this, too, in how techies and non-techies alike sometimes lose track of how hard it is to create a successful start up. You need an idea, you need a programmer, and you need a lot of hard work to bridge the gap between idea and executed idea.

Whether doing science or starting a company, the code teaches us a lot about our theory. The code makes our theory better.

As Ward Cunningham is fond of saying, it's all talk until the tests run.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

January 24, 2014 2:18 PM

Could I be a programmer?

... with apologies to Annie Dillard.

A well-known programmer got collared by a university student who asked, "Do you think I could be a programmer?"

"Well," the programmer said, "I don't know... Do you like function calls?"

The programmer could see the student's amazement. Function calls? Do I like function calls? I am twenty years old and do I like function calls?

If the student had liked function calls, of course, he could begin, like a joyful painter I knew. I asked him how he came to be a painter. He said, "I liked the smell of paint."


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

January 23, 2014 4:14 PM

The CS Mindset

Chad Orzel often blogs about the physics mindset, the default orientation that physicists tend to have toward the world, and the way they think about and solve problems. It is fun to read a scientist talking about doing science.

Earlier this week I finally read this article about the popularity of CS50, an intro CS course at Harvard. It's all about how Harvard is "righting an imbalance in academia" by finally teaching practical skills to its students. When I read:

"CS50 made me look at Harvard with new eyes," Guimaraes said.

That is a sea change from what Harvard represented to the outside world for decades: the guardian of a classic education, where the value of learning is for its own sake.

I sighed audibly, loud enough for the students on the neighboring exercise equipment to hear. A Harvard education used to be about learning only for its own sake, but now students can learn practical stuff, too. Even computer programming!

As I re-read the article now, I see that it's not as blunt as that throughout. Many Harvard students are learning computing because of the important role it plays in their futures, whatever their major, and they understand the value of understanding it better. But there are plenty of references to "practical ends" and Harvard's newfound willingness to teach practical skills it once considered beneath it.

Computer programming is surely one of those topics old Charles William Eliot would deem unworthy of inclusion in Harvard's curriculum.

I'm sensitive to such attitudes because I think computer science is and should be more. If you look deeper, you will see that the creators of CS50 think so, too. On its Should I take CS50? FAQ page, we find:

More than just teach you how to program, this course teaches you how to think more methodically and how to solve problems more effectively. As such, its lessons are applicable well beyond the boundaries of computer science itself.

The next two sentences muddy the water a bit, though:

That the course does teach you how to program, though, is perhaps its most empowering return. With this skill comes the ability to solve real-world problems in ways and at speeds beyond the abilities of most humans.

With this skill comes something else, something even more important: a discipline of thinking and a clarity of thought that are hard to attain when you learn "how to think more methodically and how to solve problems more effectively" in the abstract or while doing almost any other activity.

Later the same day, I was catching up on a discussion taking place on the PLT-EDU mailing list, which is populated by the creators, users, and fans of the Racket programming language and the CS curriculum designed in tandem with it. One poster offered an analogy for talking to HS students about how and why they are learning to program. A common theme in the discussion that ensued was to take the conversation off of the "vocational track". Why encourage students to settle for such a limiting view of what they are doing?

One snippet from Matthias Felleisen (this link works only if you are a member of the list) captured my dissatisfaction with the tone of the Globe article about CS50:

If we require K-12 students to take programming, it cannot be justified (on a moral basis) with 'all of you will become professional programmers.' I want MDs who know the design recipe, I want journalists who write their articles using the design recipe, and so on.

The "design recipe" is a thinking tool students learn in Felleisen's "How to Design Programs" curriculum. It is a structured way to think about problems and to create solutions. Two essential ideas stand out for me:

  • Students learn the design recipe in the process of writing programs. This isn't an abstract exercise. Creating a working computer program is tangible evidence that the student has understood the problem and created a clear solution.
  • This way of thinking is valuable for everyone. We will all be better off if our doctors, lawyers, and journalists are able to think this way.

This is one of my favorite instantiations of the vague term computational thinking, which so many people use without much thought. It is a way of thinking about problems both abstractly and concretely, one that leads to solutions we have verified with tests.

You might call this the CS mindset. It is present in CS50 independent of any practical ends associated with tech start-ups and massaging spreadsheet data. It is practical on a higher plane. It is also present in the HtDP curriculum and especially in the Racket Way.

It is present in all good CS education, even the CS intro courses that more students should be taking -- even if they are only going to be doctors, lawyers, or journalists.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 07, 2014 4:09 PM

Know The Past

In this profile of computational geneticist Jason Moore, the scientist explains how his work draws on work from the 1910s, which may offer computational genetics a better path forward than the work that catapulted genetics forward in the 1990s and 2000s.

Yet despite his use of new media and advanced technology, Moore spends a lot of time thinking about the past. "We have a lot to learn from early geneticists," he says. "They were smart people who were really thinking deeply about the problem."

Today, he argues, genetics students spend too much time learning to use the newest equipment and too little time reading the old genetics literature. Not surprisingly, given his ambivalent attitude toward technology, Moore believes in the importance of history. "Historical context is so important for what we do," he says. "It provides a grounding, a foundation. You have to understand the history in order ... to understand your place in the science."

Anyone familiar with the history of computing knows there is another good reason to know your history: Sometimes, we dream too small these days, and settle for too little. We have a lot to learn from early computer scientists.

I intend to make this a point of emphasis in my algorithms course this spring. I'd like to expose students to important new ideas outside the traditional canon (more on that soon), while at the same time exposing them to some of the classic work that hasn't been topped.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 17, 2013 3:32 PM

Always Have At Least Two Alternatives

Paraphrasing Kent Beck:

Whenever I write a new piece of code, I like to have at least two alternatives in mind. That way, I know I am not doing the worst thing possible.

I heard Kent say something like this at OOPSLA in the late 1990s. This is advice I give often to students and colleagues, but I've never had a URL that I could point them to.

It's tempting for programmers to start implementing the first good idea that comes to mind. It's especially tempting for novices, who sometimes seem surprised that they have even one good idea. Where would a second one come from?

More experienced students and programmers sometimes trust their skill and experience a little too easily. That first idea seems so good, and I'm a good programmer... Famous last words. Reality eventually catches up with us and helps us become more humble.

Some students are afraid: afraid they won't get done if they waste time considering alternatives, or afraid that they will choose wrong anyway. Such students need more confidence, the kind born out of small successes.

I think the most likely explanation for why beginners don't already seek alternatives is quite simple. They have not developed the design habit. Kent's advice can be a good start.

One pithy statement is often enough of a reminder for more experienced programmers. By itself, though, it probably isn't enough for beginners. But it can be an important first step for students -- and others -- who are in the habit of doing the first thing that pops into their heads.

Do note that this advice is consistent with XP's counsel to do the simplest thing that could possibly work. "Simplest" is a superlative. Grammatically, that suggests having at least three options from which to choose!


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

December 16, 2013 2:20 PM

More Fun with Integer "Assembly Language": Brute-Forcing a Function Minimum

Or: Irrational Exuberance When Programming

My wife and daughter laughed at me yesterday.

A few years ago, I blogged about implementing Farey sequences in Klein, a language for which my students at the time were writing a compiler. Klein was a minimal functional language with few control structures, few data types, and few built-in operations. Computing rational approximations using Farey's algorithm was a challenge in Klein that I likened to "integer assembly programming".

I clearly had a lot of fun with that challenge, especially when I had the chance to watch my program run using my students' compilers.

This semester, I am again teaching the compiler course, and my students are writing a compiler for a new version of Klein.

Last week, while helping my daughter with a little calculus, I ran across a fun new problem to solve in Klein:

the task of optimizing cost across the river

There are two stations on opposite sides of a river. The river is 3 miles wide, and the stations are 5 miles apart along the river. We need to lay pipe between the stations. Pipe laid on land costs $2.00/foot, and pipe laid across the river costs $4.00/foot. What is the minimum cost of the project?

This is the sort of optimization problem one often encounters in calculus textbooks. The student gets to construct a couple of functions, differentiate one, and find a maximum or minimum by setting f' to 0 and solving.

Solving this problem in Klein presents several challenges. Among them: ideally it involves real numbers, which Klein doesn't support, and it requires a square root function, which Klein doesn't have. But these obstacles are surmountable. We already have tools for computing roots using Newton's method in our collection of test programs. Over a 3-mile-by-5-mile grid, an epsilon of a few feet approximates square roots reasonably well.

My daughter's task was to use the derivative of the cost function but, after talking about the problem with her, I was interested more in "visualizing" the curve to see how the cost drops as one moves in from either end and eventually bottoms out for a particular length of pipe on land.

So I wrote a Klein program that "brute-forces" the minimum. It loops over all possible values in feet for land pipe and compares the cost at each value to the previous value. It's easy to fake such a loop with a recursive function call.

The programmer's challenge in writing this program is that Klein has no local variables other than function parameters. So I had to use helper functions to simulate caching temporary variables. This allowed me to give a name to a value, which makes the code more readable, but most importantly it allowed me to avoid recomputing expensive values in what was already a computationally expensive program.
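To make the shape of the computation concrete, here is a rough Python rendering of the brute-force search -- my own sketch, not the actual Klein program, with names of my choosing. Klein expresses the loop as a tail-recursive function; Python's recursion limit makes a plain loop the safer translation. An integer Newton's-method square root stands in for the sqrt that Klein lacks:

```python
RIVER_WIDTH = 3 * 5280   # feet across the river
DISTANCE    = 5 * 5280   # feet between the stations along the river
LAND_COST   = 2          # dollars per foot of pipe on land
RIVER_COST  = 4          # dollars per foot of pipe across the river

def newton_sqrt(n):
    """Approximate an integer square root by Newton's method,
    good to within a few feet at this scale -- no sqrt built-in needed."""
    if n < 2:
        return n
    guess = 1 << ((n.bit_length() + 1) // 2)   # start near the right magnitude
    while abs(guess * guess - n) > 3 * guess:  # epsilon of a few feet
        guess = (guess + n // guess) // 2
    return guess

def cost(land_feet):
    """Total cost: pipe along the bank, then a diagonal crossing to the far station."""
    run = DISTANCE - land_feet
    return LAND_COST * land_feet + RIVER_COST * newton_sqrt(RIVER_WIDTH**2 + run**2)

def minimum_cost():
    """Brute force: try every whole foot of land pipe and keep the cheapest."""
    best = cost(0)
    for feet in range(1, DISTANCE + 1):
        best = min(best, cost(feet))
    return best
```

In Klein, `minimum_cost` would be a recursive function carrying `feet` and the running best as parameters, which is what produces the deep stack of activation records. As a sanity check, by my arithmetic calculus puts the optimum at about 17,255 feet of land pipe and a cost near $107,670, which the brute force reproduces to within the square-root epsilon.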

This approach creates another, even bigger challenge for my students, the compiler writers. My Klein program is naturally tail recursive, but tail call elimination was left as an optional optimization in our class project. With activation records for all the tail calls stored on the stack, a compiler has to use a lot of space for its run-time memory -- far more than is available on our default target machine.

How many frames do we need? Well, we need to compute the cost at every foot along the 5-mile stretch (5 miles x 5280 feet/mile), for a total of 26,400 data points. There will, of course, be other activation records on the stack while computing the last value in the loop.

Will I be able to see the answer generated by my program using my students' compilers? Only if one or more of the teams optimized tail calls away. We'll see soon enough.

So, I spent an hour or so writing Klein code and tinkering with it yesterday afternoon. I was so excited by the time I finished that I ran upstairs to tell my wife and daughter all about it: my excitement at having written the code, and the challenge it sets for my students' compilers, and how we could compute reasonable approximations of square roots of large integers even without real numbers, and how I implemented Newton's method in lieu of a sqrt, and...

That's when my wife and daughter laughed at me.

That's okay. I am a programmer. I am still excited, and I'd do it again.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Teaching and Learning

December 08, 2013 11:48 AM

Change Happens When People Talk to People

I finally got around to reading Atul Gawande's Slow Ideas this morning. It's a New Yorker piece from last summer about how some good ideas seem to resist widespread adoption, despite ample evidence in their favor, and ways that one might help accelerate their spread.

As I read, I couldn't help but think of parallels to teaching students to write programs and helping professionals develop software more reliably. We know that development practices such as version control, short iterations, and pervasive testing lead to better software and more reliable process. Yet they are hard habits for many programmers to develop, especially when they have conflicting habits in place.

Other development practices seem counterintuitive. "Pair programming can't work, right?" In these cases, we have to help people overcome both habits of practice and habits of thought. That's a tall order.

Gawande's article is about medical practice, from surgeons to home practitioners, but his conclusions apply to software development as well. For instance: People have an easier time changing habits when the benefit is personal, immediate, and visceral. When the benefit is not so obvious, a whole new way of thinking is needed. That requires time and education.

The key message to teach surgeons, it turned out, was not how to stop germs but how to think like a laboratory scientist.

This is certainly true for software developers. (If you replace "germs" with "bugs", it's an even better fit!) Much of the time, developers have to think about evidence the way scientists do.

This lesson is true not just for surgeons and software developers. It is true for most people, in most ways of life. Sometimes, we all have to be able to think and act like a scientist. I can think of no better argument for treating science as important for all students, just as we do reading and writing.

Other lessons from Gawande's article are more down-to-earth:

Many of the changes took practice for her, she said. She had to learn, for instance, how to have all the critical supplies -- blood-pressure cuff, thermometer, soap, clean gloves, baby respiratory mask, medications -- lined up and ready for when she needed them; how to fit the use of them into her routine; how to convince mothers and their relatives that the best thing for a child was to be bundled against the mother's skin. ...

So many good ideas in one paragraph! Many software development teams could improve by putting them in action:

  • Construct a work environment with essential tools ready at hand.
  • Adjust routine to include new tools.
  • Help collaborators see and understand the benefit of new habits.
  • Practice, practice, practice.

Finally, the human touch is essential. People who understand must help others to see and understand. But when we order, judge, or hector people, they tend to close down the paths of communication, precisely when we need them to be most open. Gawande's colleagues have been most successful when they built personal relationships:

"It wasn't like talking to someone who was trying to find mistakes," she said. "It was like talking to a friend."

Good teachers know this. Some have to learn it the hard way, in the trenches with their students. But then, that is how Gawande's colleagues learned it, too.

"Slow Ideas" is good news for teachers all around. It teaches ways to do our job better. But also, in many ways, it tells us that teaching will continue to matter in an age dominated by technological success:

People talking to people is still how the world's standards change.


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Software Development, Teaching and Learning

December 03, 2013 3:17 PM

The Workaday Byproducts of Striving for Higher Goals

Why set audacious goals? In his piece about the Snowfall experiment, David Sleight says yes, and not simply for the immediate end:

The benefits go beyond the plainly obvious. You need good R&D for the same reason you need a good space program. It doesn't just get you to the Moon. It gives you things like memory foam, scratch-resistant lenses, and Dustbusters. It gets you the workaday byproducts of striving for higher goals.

I showed that last sentence a little Twitter love, because it's something people often forget to consider, both when they are working in the trenches and when they are selecting projects to work on. An ambitious project may have a higher risk of failure than something more mundane, but it also has a higher chance of producing unexpected value in the form of new tools and improved process.

This is also something that university curricula don't do well. We tend to design learning experiences that fit neatly into a fifteen-week semester, with predictable gains for our students. That sort of progress is important, of course, but it misses out on opportunities for students to produce their own workaday byproducts. And that's an important experience for students to have.

It also gives a bad example of what learning should feel like, and what it should do for us. Students generally learn what we teach them, or what we make easiest for them to learn. If we always set before them tasks of known, easily-understood dimensions, then they will have to learn after leaving us that the world doesn't usually work like that.

This is one of the reasons I am such a fan of project-based computer science education, as in the traditional compiler course. A compiler is an audacious enough goal for most students that they get to discover their own personal memory foam.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

November 25, 2013 2:56 PM

The Moment When Design Happens

Even when we plan ahead a bit, the design of a program tends to evolve. Gary Bernhardt gives an example in his essay on abstraction:

If I feel the need to violate the abstraction, I need to reconsider how to modify the boundaries to match that need, rather than violating the boundaries by crossing them.

This is the moment when design happens...

This is a hard design lesson to give students, because it is likely to click with them only after living with the consequences of violating the abstraction. This requires working with the same large program over time, preferably one they are building along the way.

This is one of the reasons I so like our senior project courses. My students are building a compiler this term, which gives them a chance to experience a moment when design happens. Their abstract syntax trees and symbol tables are just the sort of abstractions that invite violation -- and reward a little re-design.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

November 24, 2013 10:54 AM

Teaching Algorithms in 2014

This spring, I will be teaching the undergraduate algorithms course for the first time in nine years, since the semester before I became department head. I enjoy this course. It gives both the students and me opportunities to do a little theory, a little design, and a little programming. I also like to have some fun, using what we learn to play games and solve puzzles.

Nine years is a long time in computing, even in an area grounded in well-developed theory. I will need to teach a different sort of course. At the end of this entry, I ask for your help in meeting this challenge.

Algorithms textbooks don't look much different now than they did in the spring of 2005. Long-time readers of this blog know that I face the existential crisis of selecting a textbook nearly every semester. Picking a textbook requires balancing several forces, including the value they give to the instructor, the value they give to the student during and after the course, and the increasing expense to students.

My primary focus in these decisions is always on net value to the students. I like to write my own material anyway. When time permits, I'd rather write as much as I can for students to read than outsource that responsibility (and privilege) to a textbook author. Writing my lecture notes in depth lets me weave a lot of different threads together, including pointers into primary and secondary sources. Students benefit from learning to read non-textbook material, the sort they will encounter throughout their careers.

My spring class brings a new wrinkle to the decision, though. Nearly fifty students are enrolled, with the prospect of a few more to come. This is a much larger group than I usually work with, and large classes carry a different set of risks than smaller ones. In particular, when something goes wrong in a small section, it is easier to recover through one-on-one remediation. That option is not so readily available in a fifty-person course.

There is more risk in writing new lecture material than in using a textbook that has been tested over time. A solid textbook can be a security blanket as much for the instructor as for the student. I'm not too keen on selecting a security blanket for myself, but the predictability of a text is tempting. There is one possible consolation in such a choice: perhaps subordinating my creative impulses to the design of someone else's textbook will make me more creative as a result.

But textbook selection is a fairly ordinary challenge for me. The real question is: Which algorithms should we teach in this course, circa 2014? Surely the rise of big data, multi-core processors, mobile computing, and social networking require a fresh look at the topics we teach undergrads.

Perhaps we need only adjust the balance of topics that we currently teach. Or maybe we need to add a new algorithm or data structure to the undergraduate canon. If we teach a new algorithm, or a new class of algorithms, which standard material should be de-emphasized, or displaced altogether? (Alas, the semester is still but fifteen weeks long.)

Please send me your suggestions! I will write up a summary of the ideas you share, and I will certainly use your suggestions to design a better algorithms course for my students.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 10, 2013 9:10 AM

You May Be a Teacher If ...

... you wake groggily at 5:30 on a Sunday morning. You lie in bed, half awake, as your mind begins designing a new class session for your compiler course. You never go back to sleep.

Before you rise, you have a new reading assignment, an opening exercise asking your students to write a short assembly language program, and two larger in-class exercises aimed at helping them make a good start on their compiler's run-time system.

This is a thorny topic. It's been bothering you. Now, you have a plan.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

November 04, 2013 2:41 PM

Those Silly Tests

I love this passage by Mark Dominus in Overlapping Intervals:

This was yet another time when I felt slightly foolish as I wrote the automated tests, assuming that the time and effort I spent on testing this trivial function would be time and effort thrown away on nothing -- and then they detected a real fault. Someday perhaps I'll stop feeling foolish writing tests for functions like this one; until then, many cases just like this one will help me remember that I must write the tests even though I feel foolish doing it.

Even excellent programmers feel silly writing tests sometimes. But they also benefit from writing them. Dominus was saved here by his test-writing habit, or by his sense of right and wrong.

Helping students develop that habit or that moral sense is a challenge. Even so, I rarely come across a situation where my students or I write or run too many tests. I regularly encounter cases where we write or run too few.

Dominus's blog entry also contains a great passage on a larger lesson from that coding experience. In the end, his clever solution to a tricky problem results not from "just thinking" but from deeper thought: from "applying carefully-learned and practiced technique". That's an important form of thinking, too.
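To make the moral concrete, here is the kind of trivial function in question -- my illustration, not Dominus's actual code. The touching-endpoints case is exactly the sort of edge that these "foolish" tests catch:

```python
def overlaps(a, b):
    """True if closed intervals a = (lo, hi) and b = (lo, hi) share at least one point."""
    return a[0] <= b[1] and b[0] <= a[1]

# The tests feel silly to write for a one-line function...
assert overlaps((1, 3), (2, 5))        # ordinary overlap
assert overlaps((1, 3), (3, 5))        # touching endpoints count -- easy to get wrong with <
assert not overlaps((1, 2), (3, 4))    # disjoint intervals
assert overlaps((3, 5), (1, 3))        # symmetric in its arguments
```

Write `a[0] < b[1]` instead of `<=` and only the second test notices -- which is the whole point.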


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

October 11, 2013 1:42 PM

The Tang of Adventure, and a Lively Appreciation

"After you've learned the twelve times table," John Yarnelle asks, "what else is there to do?"

The concepts of modern mathematics give the student something else to do in great abundance and variety at all levels of his development. Not only may he discover unusual types of mathematical structures where, believe it or not, two and two does not equal four, but he may even be privileged to invent a new system virtually on his own. Far from a sense of stagnation, there is the tang of adventure, the challenge of exploration; perhaps also a livelier appreciation of the true nature of mathematical activity and mathematical thought.

Not only the tang of adventure; students might also come to appreciate what math really is. That's an admirable goal for any book or teacher.

This passage comes from Yarnelle's Finite Mathematical Structures, a 1964 paperback that teaches fields, groups, and algebras with the prose of a delighted teacher. I picked this slender, 66-page gem up off a pile of books being discarded by a retired math professor a decade ago. How glad I am that none of the math profs who walked past that pile bothered to claim it before I happened by.

We could use a few CS books like this, too.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 08, 2013 3:48 PM

You Never Know What Students Will Remember...

There are a lot of clichés about teachers and their unexpected effects on students. Some of them are true. In the last couple of weeks, I've been reminded twice that students remember the darnedest things, and those things can affect how they behave and live.

First, while at StrangeLoop, I was tagged in a Facebook conversation. A former student had posted an update that involved special-ordering a mini-fridge on-line. The ensuing conversation included the following:

Commenter: "In a college town you have to special order a mini-fridge? Me thinks you are doing it wrong."

Former student: "Yeah, I know... Eugene Wallingford once said the same when I wrote a framework to implement a stack..."

I know this student well and remember his stack framework. He is a smart guy who was learning a lot about CS and programming in a very short time. In his earnestness to apply what he was learning, he had indeed written a flexible, generic framework for a stack in response to a problem that called for twenty, maybe thirty, lines of Java 1.4.

We talked about simplicity, trade-offs, You Aren't Gonna Need It, and other design issues. I believe him when he says that I said, "You must be doing it wrong." That's the sort of thing I would say. I don't remember saying it in that moment, though.

Then, earlier this week, the latest issue of our college newsletter hit the newsstand. An undergrad CS major was interviewed for a "student spotlight" column. It contains this snippet:

Last semester, I went into Dr. Wallingford's office asking why I was not very efficient when answering questions, even though I read the material over and over again. I felt that because I had memorized the book's information, I could quickly answer questions.... He told me, 'when I want to improve my mile time, I run. I don't think about running; I go out and run.' This has really stuck with me since and showed me how books can only do so much. After that, it is the practice and experience that takes you far.

Again, I recall having a conversation with this student about how he could improve his test performance. He, too, is a smart guy who is good at learning the material we see in class and in the readings. What he needed at the time was more practice, to cement the new ideas in his mind, to help him make connections with what he already knew, and to help him write good code faster on exams.

I believe him when he says that I used a running analogy. Over the years, I have found that training for marathons illustrated a lot of useful ideas about learning, especially when it comes to practice and habit. And I do like analogies. But I don't remember the details of what I said in that particular conversation.

These two incidents are salient reminders that we should take seriously the time we spend with our students. They are listening -- and learning.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

October 07, 2013 12:07 PM

StrangeLoop: Exercises in Programming Style

[My notes on StrangeLoop 2013: Table of Contents]

Crista Lopes

I had been looking forward to Crista Lopes's StrangeLoop talk since May, so I made sure I was in the room well before the scheduled time. I even had a copy of the trigger book in my bag.

Crista opened with something that CS instructors have learned the hard way: teaching programming style is difficult and takes a lot of time. As a result, it's often not done at all in our courses. But so many of our graduates go into software development for their careers, where they come into contact with many different styles. How can they understand them -- well, quickly, or at all?

To many people, style is merely the appearance of code on the screen or the printed page. But it's not. It's something more, and something entirely different. Style is a constraint. Lopes used images of a few stylistic paintings to illustrate the idea. If an artist limits herself to pointillism or cubism, how can she express important ideas? How does the style limit the message, or enhance it?

But we know this is true of programming as well. The idea has been a theme in my teaching for many years. I occasionally write about the role of constraints in programming here, including Patterns as a Source of Freedom, a few programming challenges, and a polymorphism challenge that I've run as a workshop.

Lopes pointed to a more universal example, though: the canonical The Elements of Programming Style. Drawing on this book and other work in software, she said that programming style ...

  • is a way to express tasks
  • exists at all scales
  • recurs at multiple scales
  • is codified in programming languages

For me, the last bullet ties back most directly to the idea of style as constraint. A language makes some things easier to express than others. It can also make some things harder to express. There is a spectrum, of course. For example, some OO languages make it easy to create and use objects; others make it hard to do anything else! But the language is an enabler and enforcer of style. It is a proxy for style as a constraint on the programmer.

Back to the talk. Lopes asked, Why is it so important that we understand programming style? First, a style provides the reader with a frame of reference and a vocabulary. Knowing different styles makes us more effective consumers of code. Second, one style can be more appropriate for a given problem or context than another style. So, knowing different styles makes us more effective producers of code. (Lopes did not use the producer-consumer distinction in the talk, but it seems to me a nice way to crystallize her idea.)

the cover of Raymond Queneau's Exercises in Style

Then, Lopes said, I came across Raymond Queneau's playful little book, "Exercises in Style". Queneau constrains himself in many interesting ways while telling essentially the same story. Hmm... We could apply the same idea to programming! Let's do it.

Lopes picked a well-known problem, the common word problem famously solved in a Programming Pearls column more than twenty-five years ago. This is a fitting choice, because Jon Bentley included in that column a critique of Knuth's program by Doug McIlroy, who considered both engineering concerns and program style in his critique.

The problem is straightforward: identify and print the k most common terms that occur in a given text document, in decreasing order. For the rest of the talk, Lopes presented several programs that solve the problem, each written in a different style, showing code and highlighting its shape and boundaries.
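For a concrete reference point, here is a minimal sketch of the problem in Python, in a plain library-based style of my own. It is not one of Lopes's numbered examples; the function name and style label are mine.

```python
# A minimal sketch of the common-word problem: report the k most
# frequent terms in a text, in decreasing order of frequency.
# This plain library-based style is my own reference version,
# not one of the styles Lopes presented.
import re
from collections import Counter

def top_words(text, k):
    # Treat any maximal run of letters as a word, case-insensitively.
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(words).most_common(k)

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog and the fox"
    for word, count in top_words(sample, 3):
        print(word, count)
```

Each of the styles in the talk solves this same small problem; only the shape of the program changes.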

Python was her language of choice for the examples. She was looking for a language that many readers would be able to follow and understand, and Python has the feel of pseudo-code about it. (I tell my students that it is the Pascal of their time, though I may as well be speaking of hieroglyphics.) Of course, Python has strengths and weaknesses that affect its fit for some styles. This is an unavoidable complication for all communication...

Also, Lopes did not give formal names to the styles she demonstrated. Apparently, in previous versions of this talk, audience members had wanted to argue over the names more than the styles themselves! Vowing not to make that mistake again, she numbered her examples for this talk.

That's what programmers do when they don't have good names.

In lieu of names, she asked the crowd to live-tweet to her what they thought each style is or should be called. She eventually did give each style a fun, informal name. (CS textbooks might be more evocative if we used her names instead of the formal ones.)

I noted eight examples shown by Lopes in the talk, though there may have been more:

  • monolithic procedural code -- "brain dump"
  • a Unix-style pipeline -- "code golf"
  • procedural decomposition with a sequential main -- "cook book"
  • the same, only with functions and composition -- "Willy Wonka"
  • functional decomposition, with a continuation parameter -- "crochet"
  • modules containing multiple functions -- "the kingdom of nouns"
  • relational style -- (didn't catch this one)
  • functional with decomposition and reduction -- "multiplexer"

Lopes said that she hopes to produce solutions using a total of thirty or so styles. She asked the audience for help with one in particular: logic programming. She said that she is not a native speaker of that style, and Python does not come with a logic engine built-in to make writing a solution straightforward.

Someone from the audience suggested she consider yet another style: using a domain-specific language. That would be fun, though perhaps tough to roll from scratch in Python. By that time, my own brain was spinning away, thinking about writing a solution to the problem in Joy, using a concatenative style.

Sometimes, it's surprising just how many programming styles and meaningful variations people have created. The human mind is an amazing thing.

The talk was, I think, a fun one for the audience. Lopes is writing a book based on the idea. I had a chance to review an early draft, and now I'm looking forward to the finished product. I'm sure I'll learn something new from it.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

September 28, 2013 12:17 PM

StrangeLoop: This and That, Volume 2

[My notes on StrangeLoop 2013: Table of Contents]

I am at a really good talk and look around the room. So many people are staring at their phones, scrolling away. So many others are staring at their laptops, typing away. The guy next to me: doing both at the same time. Kudos, sir. But you may have missed the point.

~~~~

Conference talks are a great source of homework problems. Sometimes, the talk presents a good problem directly. Other times, watching a talk sets my subconscious mind in motion, and it creates something useful. My students thank you. I thank you.

~~~~

Jenny Finkel talked about the difference between two kinds of recommenders: explorers, who forage for new content, and exploiters, who want to see what's already popular. The former discovers cool new things occasionally but fails occasionally, too. The latter is satisfied most of the time but rarely surprised. As a conference-goer, I felt this distinction at play in my own head this year. When selecting the next talk to attend, I have to take a few risks if I ever hope to find something unexpected. But when I fail, a small regret tugs at me.

~~~~

We heard a lot of confident female voices on the StrangeLoop stages this year. Some of these speakers have advanced academic degrees, or at least experience in grad school.

~~~~

The best advice I received on Day 1 perhaps came not from a talk but from the building:

The 'Do not Climb on Bears' sign on a Peabody statue

"Please do not climb on bears." That sounds like a good idea most everywhere, most all the time.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

September 10, 2013 3:40 PM

A Laugh at My Own Expense

This morning presented a short cautionary tale for me and my students, from a silly mistake I made in a procmail filter.

Back story: I found out recently that I am still subscribed to a Billy Joel fan discussion list from the 1990s. The list has been inactive for years, or I would have been filtering its messages to a separate mailbox. Someone has apparently hacked the list, as a few days ago it started spewing hundreds of spam messages a day.

I was on the road for a few days after the deluge began and was checking mail through a shell connection to the mail server. Because I was busy with my trip and checking mail infrequently, I just deleted the messages by hand. When I got back, Mail.app soon learned they were junk and filtered them away for me. But the spam was still hitting my inbox on the mail server, where I read my mail occasionally even on campus.

After a session on the server early this morning, I took a few minutes to procmail them away. Every message from the list has a common pattern in the Subject: line, so I copied it and pasted it into a new procmail recipe to send all list traffic to /dev/null :

    :0
    * ^Subject.*[billyjoel]
    /dev/null

Do you see the problem? Of course you do.

I didn't at the time. My blindness probably resulted from a combination of the early hour, a rush to get over to the gym, and the tunnel vision that comes from focusing on a single case. It all looked obvious.

This mistake offers programming lessons at several different levels.

The first is at the detailed level of the regular expression. Pay attention to the characters in your regex -- all of them. Those brackets really are in the Subject: line, but by themselves they mean something else in the regex. I need to escape them:

    * ^Subject.*\[billyjoel\]

This relates to a more general piece of problem-solving advice. Step back from the individual case you are solving and think about the code you are writing more generally. When you focus on the annoying messages from the list, the brackets are just characters in a stream. Looked at from the perspective of the file of procmail recipes, they are metacharacters.
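Python's re module shares these bracket semantics with procmail's regexes, so the bug is easy to demonstrate with a couple of contrived subject lines of my own:

```python
# Unescaped brackets form a character class: [billyjoel] matches any
# single one of the letters b, i, l, y, j, o, e. Escaped brackets
# match the literal characters. The subject lines below are made up
# for illustration.
import re

buggy = r"^Subject.*[billyjoel]"
fixed = r"^Subject.*\[billyjoel\]"

spam = "Subject: [billyjoel] cheap watches"
innocent = "Subject: Lunch tomorrow?"

# Both patterns match the list spam...
print(bool(re.search(buggy, spam)), bool(re.search(fixed, spam)))

# ...but the buggy pattern also matches innocent mail, because
# "tomorrow" contains letters from the character class.
print(bool(re.search(buggy, innocent)), bool(re.search(fixed, innocent)))
```

With the buggy pattern, nearly every message that reaches the inbox contains some letter from the class, so nearly everything goes to /dev/null.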

The second is at the level of programming practice. Don't /dev/null something until you know it's junk. Much better to send the offending messages to a junk mbox first:

    :0
    * ^Subject.*\[billyjoel\]
    in.tmp.junk

Once I see that all and only the messages from the list are being matched by the pattern, I can change that line to send list traffic where it belongs. That's a specific example of the sort of defensive programming that we all should practice. Don't commit to solutions too soon.

This, too, relates to more general programming advice about software validation and verification. I should have exercised a few test cases to validate my recipe before turning it loose unsupervised on my live mail stream.

I teach my students this mindset and program that way myself, at least most of the time. Of course, the time you most need test cases will be the time you don't write them.

The day provided a bit of irony to make the story even better. The topic of today's session in my compilers course? Writing regular expressions to describe the tokens in a language. So, after my mail admin colleague and I had a good laugh at my expense, I got to tell the story to my students, and they did, too.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

July 30, 2013 2:43 PM

Education, Then and Now

... is your job.

Then:

Whatever be the qualifications of your tutors, your improvement must chiefly depend on yourselves. They cannot think or labor for you, they can only put you in the best way of thinking and laboring for yourselves. If therefore you get knowledge, you must acquire it by your own industry.

Now:

School isn't over. School is now. School is blogs and experiments and experiences and the constant failure of shipping and learning.

This works if you interpret "then and now" as in the past (Joseph Priestly, in his dedication of New College, London, 1794) and in the present (Seth Godin, in Brainwashed, from 2008).

It also works if you interpret "then and now" as when you were in school and now that you are out in the real world. It's all real.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

July 18, 2013 2:22 PM

AP Computer Science in Iowa High Schools

Mark Guzdial posted a blog entry this morning pointing to a Boston Globe piece, Interest in computer science lags in Massachusetts schools. Among the data supporting this assertion was participation in Advanced Placement:

Of the 85,753 AP exams taken by Massachusetts students last year, only 913 were in computing.

Those numbers are a bit out of context, but they got me to wondering about the data for Iowa. So I tracked down this page on AP Program Participation and Performance Data 2012 and clicked through to the state summary report for Iowa. The numbers are even more dismal than Massachusetts's.

Of the 16,413 AP exams taken by Iowa students in 2012, only sixty-nine were in computer science. The counts for groups generally underrepresented in computing were unsurprisingly small, given that Iowa is less diverse than many US states. Of the sixty-nine, fifty-four self-reported as "white", ten as "Asian", and one as "Mexican-American", with four not indicating a category.

The most depressing number of all: only nine female students took the AP Computer Science exam last year in Iowa.

Now, Massachusetts has roughly 2.2 times as many people as Iowa, but even so Iowa compares unfavorably. Iowans took about one-fifth as many AP exams as Massachusetts students, and for CS the percentage drops to 7.5%. If AP exams indicate much about the general readiness of a state's students for advanced study in college, then Iowa is at a disadvantage.

I've never been a huge proponent of the AP culture that seems to dominate many high schools these days (see, for instance, this piece), but the low number of AP CS exams taken in Iowa is consistent with what I hear when I talk to HS students from around the state and their parents: Iowa schools are not teaching much computer science at all. The university is the first place most students have an opportunity to take a CS course, and by then the battle for most students' attention has already been lost. For a state with a declared goal of growing its promising IT sector, this is a monumental handicap.

Those of us interested in broadening participation in CS face an even tougher challenge. Iowa's demographics create some natural challenges for attracting minority students to computing. And if the AP data are any indication, we are doing a horrible job of reaching women in our high schools.

There is much work to do.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

July 03, 2013 10:22 AM

Programming for Everyone, Venture Capital Edition

Christina Cacioppo left Union Square Ventures to learn how to program:

Why did I want to do something different? In part, because I wanted something that felt more tangible. But mostly because the story of the internet continues to be the story of our time. I'm pretty sure that if you truly want to follow -- or, better still, bend -- that story's arc, you should know how to write code.

So, rather than settle for her lot as a non-programmer, beyond the accepted school age for learning these things -- technology is a young person's game, you know -- Cacioppo decided to learn how to build web apps. And build one.

When did we decide our time's most important form of creation is off-limits? How many people haven't learned to write software because they didn't attend schools that offered those classes, or the classes were too intimidating, and then they were "too late"? How much better would the world be if those people had been able to build their ideas?

Yes, indeed.

These days, she is enjoying the experience of making stuff: trying ideas out in code, discarding the ones that don't work, and learning new things every day. Sounds like a programmer to me.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

June 26, 2013 2:30 PM

An Opportunity to Learn, Born of Deprivation

Earlier this summer, my daughter was talking about something one of her friends had done with Instagram. As a smug computer weenie, I casually mentioned that she could do that, too.

She replied, "Don't taunt me, Dad."

You see, no one in our family has a cell phone, smart or otherwise, so none of us use Instagram. That's not a big deal for dear old dad, even though (or perhaps because) he's a computer scientist. But she is a teenager growing up in an entirely different world, filled with technology and social interaction, and not having a smart phone must surely seem like a form of child abuse. Occasionally, she reminds us so.

This gave me a chance to explain that Instagram filters are, at their core, relatively simple little programs, and that she could learn to write them. And if she did, she could run them on almost any computer, and make them do things that even Instagram doesn't do.

I had her attention.

So, this summer I am going to help her learn a little Python, using some of the ideas from media computation. At the end of our first pass, I hope that she will be able to manipulate images in a few basic ways: changing colors, replacing colors, copying pixels, and so on. Along the way, we can convert color images to grayscale or sepia tones, posterize images, embed images, and make simple collages.
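To give a flavor of how simple the core of such a filter is, here is a toy grayscale sketch. It works on a nested-list "image" of RGB tuples so that it is self-contained; a real program would read and write pixels with an image library such as Pillow.

```python
# A toy version of the grayscale filter from media computation.
# An "image" here is just a list of rows of (r, g, b) tuples; a real
# program would load actual pixels with an image library.

def to_grayscale(image):
    gray = []
    for row in image:
        gray_row = []
        for (r, g, b) in row:
            # Standard luminance weights; a plain average works too.
            lum = int(0.299 * r + 0.587 * g + 0.114 * b)
            gray_row.append((lum, lum, lum))
        gray.append(gray_row)
    return gray

tiny = [[(255, 0, 0), (0, 255, 0)],
        [(0, 0, 255), (255, 255, 255)]]
print(to_grayscale(tiny))
```

Sepia, posterizing, and color replacement are all variations on the same pixel-by-pixel loop.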

That will make her happy. Even if she never feels the urge to write code again, she will know that it's possible. And that can be empowering.

I have let my daughter know that we probably will not write code that does as good a job as what she can see in Instagram or Photoshop. Those programs are written by pros, and they have evolved over time. I hope, though, that she will appreciate how simple the core ideas are. As James Hague said in a recent post, the key ideas in most apps require relatively few lines of code, with lots and lots of lines wrapped around them to handle edge cases and plumbing. We probably won't write much code for plumbing... unless she wants to.

Desire and boredom often lead to creation. They also lead to the best kind of learning.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

June 10, 2013 2:41 PM

Unique in Exactly the Same Way

Ah, the idyllic setting of my youth:

When people refer to "higher education" in this country, they are talking about two systems. One is élite. It's made up of selective schools that people can apply to -- schools like Harvard, and also like U.C. Santa Cruz, Northeastern, Penn State, and Kenyon. All these institutions turn most applicants away, and all pursue a common, if vague, notion of what universities are meant to strive for. When colleges appear in movies, they are verdant, tree-draped quadrangles set amid Georgian or Gothic (or Georgian-Gothic) buildings. When brochures from these schools arrive in the mail, they often look the same. Chances are, you'll find a Byronic young man reading "Cartesian Meditations" on a bench beneath an elm tree, or perhaps his romantic cousin, the New England boy of fall, a tousle-haired chap with a knapsack slung back on one shoulder. He is walking with a lovely, earnest young woman who apparently likes scarves, and probably Shelley. They are smiling. Everyone is smiling. The professors, who are wearing friendly, Rick Moranis-style glasses, smile, though they're hard at work at a large table with an eager student, sharing a splayed book and gesturing as if weighing two big, wholesome orbs of fruit. Universities are special places, we believe: gardens where chosen people escape their normal lives to cultivate the Life of the Mind.

I went to a less selective school than the ones mentioned here, but the vague ideal of higher education was the same. I recognized myself, vaguely, in the passage about the tousle-haired chap with a knapsack, though on a Midwestern campus. I certainly pined after a few lovely, earnest young women with a fondness for scarves and the Romantic poets in my day. These days, I have become the friendly, glasses-wearing, always-smiling prof in the recruiting photo.

The descriptions of movie scenes and brochures, scarves and Shelley and approachable professors, reminded me most of something my daughter told me as she waded through recruiting literature from so many schools a few years ago, "Every school is unique, dad, in exactly the same way." When the high school juniors see through the marketing facade of your pitch, you are in trouble.

That unique-in-the-same-way character of colleges and university pitches is a symptom of what lies at the heart of the coming "disruption" of what we all think of as higher education. The traditional ways for a school to distinguish itself from its peers, and even from schools it thinks of as lesser rivals, are becoming less effective. I originally wrote "disappearing", but they are now ubiquitous, as every school paints the same picture, stresses the same positive attributes, and tries not to talk too much about the negatives they and their peers face. Too many schools chasing too few tuition-paying customers accelerates the process.

Trying to protect the ideal of higher education is a noble effort now being conducted in the face of a rapidly changing landscape. However, the next sentence of the recent New Yorker article Laptop U, from which the passage quoted above comes, reminds us:

But that is not the kind of higher education most Americans know. ...

It is the other sort of higher education that will likely be the more important battleground on which higher ed is disrupted by technology.

We are certainly beginning to have such conversations at my school, and we are starting to hear rumblings from outside. My college's dean and our new university president recently visited the Fortune 100 titan that dominates local industry. One of the executives there gave them several documents they've been reading there, including "Laptop U" and the IPPR report mentioned in it, "An Avalanche is Coming: Higher Education and the Revolution Ahead".

It's comforting to know your industry partners value you enough to want to help you survive a coming revolution. It's also hard to ignore the revolution when your partners begin to take for granted that it will happen.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

June 05, 2013 1:52 PM

I Fooled Around and Fell in Love

Cue the Elvin Bishop [ video ]...

I smile whenever I see this kind of statement on a website's About page:

Erika Carlson was studying clinical psychology in 2011, when she wrote her first line of Python code. She fell in love with programming, decided to change paths, and is now a software developer at Pillar Technology.

I fell in love upon writing my first line of code, too.

Not everyone will have the same reaction Erika and I had, but it's good that we give people at least an opportunity to learn how to program. Knowing that someone might react this way focuses my mind on giving novice programmers a good enough experience that they can, if they are so inclined.

My teaching should never get in the way of true love.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

June 03, 2013 2:52 PM

Reflection on Zimmer's Open Letter and Student Questions

Carl Zimmer recently wrote an open letter to science students and teachers to address a disturbing trend: students cold-mailing him to ask for information. He stresses that he and his fellow science writers like to interact with students, but only after students have done some work on their own and have started to develop their own questions. The offending messages can be boiled down to the anti-pattern:

I have homework. I need information from you.

The actual messages can be a lot more entertaining. Be sure to check out at least the opening of his piece, which reprises a multi-day exchange with a random high school student.

I'm not an accomplished popular science writer like Zimmer, but as a CS professor at a public university I receive occasional messages of the sort Zimmer describes from high school students across our region. I try to help out as best I can, and the volume is not so large that I get burnt out trying to guide the student to a more fruitful exchange than "Tell me something" followed by "Here is some information you could have read for yourself".

Fortunately, most of the messages of this sort that reach my inbox come from students in my own courses. Well, it's unfortunate that I receive these messages at all, because they are often a symptom of laziness and presumptuousness. Most often, though, they are simply a sign of bad habits learned in their previous years of schooling. The fortunate part is that I have a chance to help students learn new, better habits of intellectual discipline and discourse.

My approach is a lot like the one Zimmer relates in his exchange with young Davis, so much so that a former student of mine forwarded me a link to Zimmer's piece and said "The exchange at the beginning reminded me of you."

But as a classroom teacher, I have an opportunity that internet celebrities don't: I get to see the question-askers in the classroom two or three times a week and in office hours whenever students avail themselves of the resource. I can also ask students to come see me outside of class for longer conversations.

Talking face-to-face can help students know for certain that I really do care about their curiosity and learning, even as I choose consciously not to fulfill their request until they have something specific to ask. They have to invest something in the conversation, too, and demonstrate their investment by having more to say than simply, "I don't get it."

(Talking face-to-face also helps me have a better idea of when a student has done his or her work and yet still is struggling to the point of not knowing what to say other than "I don't get it." Sometimes, I need to go farther than halfway to help a student in real need of help.)

Most students are open to the idea that college is different from their past experience and that it's time for them to start taking more control of their learning. They learn new habits and are able to participate in meaningful exchanges about material with which they have already engaged.

A few continue to struggle, never moving far beyond "Give me information". I don't know whether their previous schooling has driven them into such a deep rut that they can't get out, or whether even with different schooling they would have been poorly suited for university study. Fortunately, these students are quite few among our student body. Most really do just need a gentle push in the right direction.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 31, 2013 1:44 PM

Quotes of the Week, in Four Dimensions

Engineering.

Michael Bernstein, in A Generation Ago, A Thoroughly Modern Sampling:

The AI Memos are an extremely fertile ground for modern research. While it's true that what this group of pioneers thought was impossible then may be possible now, it's even clearer that some things we think are impossible now have been possible all along.

When I was in grad school, we read a lot of new and recent research papers. But the most amazing, most educational, and most inspiring stuff I read was old. That's often true today as well.

Science.

Financial Agile tweets:

"If it disagrees with experiment, it's wrong". Classic.

... with a link to The Scientific Method with Feynman, which has a wonderful ten-minute video of the physicist explaining how science works. Among its important points is that guessing is huge part of science. It's just that scientists have a way of telling which guesses are right and which are wrong.

Teaching.

James Boyk, in Six Words:

Like others of superlative gifts, he seemed to think the less gifted could do as well as he, if only they knew a few powerful specifics that could readily be conveyed. Sometimes he was right!

"He" is Leonid Hambro, who played with Victor Borge and P. D. Q. Bach but was also well-known as a teacher and composer. Among my best teachers have been some extraordinarily gifted people. I'm thankful for the time they tried to convey their insights to the likes of me.

Art.

Amanda Palmer, in a conference talk:

We can only connect the dots that we collect.

Palmer uses this sentence to explain in part why all art is about the artist, but it means something more general, too. You can build, guess, and teach only with the raw materials that you assemble in your mind and your world. So collect lots of dots. In this more prosaic sense, Palmer's sentence applies not only to art but also to engineering, science, and teaching.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

May 26, 2013 9:45 AM

Programming Magic and Business Skeuomorphism

Designer Craig Mod offers Marco Arment's The Magazine as an exemplar of Subcompact Publishing in the digital age: "No cruft, all substance. A shadow on the wall."; a minimal disruptor that capitalizes on the digital medium without tying itself down with the strictures of twentieth-century hardcopy technology.

After detailing the advantages of Arment's approach, Mod points out the primary disadvantage: you have to be able to write an iOS application. Which leads to this gem:

The fact that Marco -- a programmer -- launched one of the most 'digitally indigenous' contemporary tablet publications is indicative of two things:
  1. Programmers are today's magicians. In many industries this is obvious, but it's now becoming more obvious in publishing. Marco was able to make The Magazine happen quickly because he saw that Newsstand was underutilized and understood its capabilities. He knew this because he's a programmer. Newsstand wasn't announced at a publishing conference. It was announced at the WWDC.
  2. The publishing ecosystem is now primed for complete disruption.

If you are a non-programmer with ideas, don't think I just need a programmer; instead think, I need a technical co-founder. A lot of people think of programming as Other, as a separate world from what they do. Entrepreneurs such as Arment, and armies of young kids writing video games and apps for their friends, know instead that it is a tool they can use to explore their interests.

Mod offers a nice analogy from the design world to explain why entrenched industry leaders and even prospective entrepreneurs tend to fall into the trap of mimicking old technology in their new technologies: business skeuomorphism.

For example, designers "bring the mechanical camera shutter sound to digital cameras because it feels good" to users. In a similar way, a business can transfer a decision made under the constraints of one medium or market into a new medium or market in which the constraints no longer apply. Under new constraints, and with new opportunities, the decision is no longer a good one, let alone necessary or optimal.

As usual, I am thinking about how these ideas relate to the disruption of university education. In universities, as in the publishing industry, business skeuomorphism is rampant. What is the equivalent of the Honda N360 in education? Is it Udacity or Coursera? Enstitute? Or something simpler?


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

May 25, 2013 12:12 PM

Getting the Right Relationship with a Class of Students

Thinking back over the semester, both about my course and about conversations I've had with colleagues about theirs, I'm reminded of a short passage from Keith Johnstone's "Impro: Improvisation and the Theatre":

There seems no doubt that a group can make or break its members, and that it's more powerful than the individuals in it. A great group can propel its members forward so that they achieve amazing things. Many teachers don't seem to think that manipulating a group is their responsibility at all. If they're working with a destructive, bored group, they just blame the students for being 'dull', or uninterested. It's essential for the teacher to blame himself if the group isn't in a good state.

It is hard to predict when a group of students will come together on its own and propel its members forward to achieve amazing things. Sometimes it happens almost spontaneously, as if by magic. I doubt it's magic, though, as much as students with the right attitudes and with skills for working with others. Sometimes, we don't know our students as well as we think. In any case, we teachers love to be around when the lightning strikes.

Bad teachers too often blame their students, or external circumstances, when a class dynamic turns destructive. Good teachers know it's their job to mold the group. They pay attention to attitudes and interactions, and they work to steer the group toward cohesiveness and accomplishment. Great teachers deliver consistently, within the constraints of their environments.

I'd love to say that I aspire to greatness, but that's too lofty a goal. These days, I still have to work hard just to get a peek at good every now and then. Being aware that the group dynamic is part of the teacher's responsibility is a good first step.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 23, 2013 3:57 PM

Bad Examples Are Everywhere

... even in the Java class libraries.

Earlier today, @fogus joked:

The java.awt.Point class was only created because someone needed a 1st example to show how to make an object for a book they were writing.

My response was only half-joking:

And I use it as an example of how not to make a class.

If you have ever seen the Point class, you might understand why. Two public instance variables, seven methods for reading and writing the instance variables, and only one method (translate) that could conceivably be considered a behavior. But it's not; it's just a relative writer.

When this is the first class we show our students and ask them to use, we immediately handicap them with an image of objects as buckets of data and programs as manipulation of values. We may as well teach them C or Pascal.

This has long been a challenge for teaching OOP in CS1. If a class has simple enough syntax for the novice programmer to understand, it is generally a bad example of an object. If a class has interesting behavior, it is generally too complex for the novice programmer to understand.

This is one of the primary motivations for authors to create frameworks for their OOP/CS1 textbooks. One of the earliest such frameworks I remember was the Graphics Package (GP) library in Object-Oriented Programming in Pascal, by Connor, Niguidula, and van Dam. Similar approaches have been used in more recent books, but the common thread is an existing set of classes that allow users to use and create meaningful objects right away, even as they learn syntax.

a hadrosaurus, which once roamed the land with legacy CS profs

A lot of these frameworks, GP included, have point objects as egregious as Java's. But with these frameworks, the misleading Point class need not be the first thing students see, and when seen, it is used in a context that consists of rich objects interacting as objects should.

These frameworks create a new challenge for the legacy CS profs among us. We like to "begin with the fundamentals" and have students write programs "from scratch", so that they "understand the entire program" from the beginning. Because, you know, that's the way we learned to program.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 21, 2013 3:05 PM

Exercises in Exercises in Style

I just registered for Strange Loop 2013, which doesn't happen until this fall. This has become a popular conference, deservedly so, and it didn't seem like a good idea to wait to register and risk being shut out.

One of the talks I'm looking forward to is by Crista Lopes. I mentioned Crista in a blog entry from last year's Strange Loop, for a talk she gave at OOPSLA 2003 that made an analogy between programming language and natural language. This year, she will give a talk called Exercises in Style that draws inspiration from a literary exercise:

Back in the 1940s, a French writer called Raymond Queneau wrote an interesting book with the title Exercises in Style featuring 99 renditions of the exact same short story, each written in a different style. This talk will shamelessly do the same for a simple program. From monolithic to object-oriented to continuations to relational to publish/subscribe to monadic to aspect-oriented to map-reduce, and much more, you will get a tour through the richness of human computational thought by means of implementing one simple program in many different ways.

If you've been reading this blog for long, you can imagine how much I like this idea. I even checked Queneau's book out of the library and announced on Twitter my plan to read it before the conference. From the response I received, I gather a lot of conference attendees plan to do the same. You gotta love the audience Strange Loop cultivates.

I actually have a little experience with this idea of writing the same program in multiple styles, only on a much smaller scale. For most of the last twenty years, our students have learned traditional procedural programming in their first-year sequence and object-oriented programming in the third course. I taught the third course twice a year for many years. One of the things I often did early in the course was to look at the same program in two forms, one written in a procedural style and one written in OOP. I hoped that the contrast between the programs would help them see the contrast between how we think about programs in the two styles.

I've been teaching functional programming regularly for the last decade, after our students have seen procedural and OO styles in previous courses, but I've rarely done the "exercises in style" demo in this course. For one thing, it is a course on languages and interpreters, not a course on functional programming per se, so the focus is on getting to interpreters as soon as possible. We do talk about differences in the styles in terms of their concepts and the underlying differences (and similarities!) in their implementation. But I think about doing so every time I prep the next offering of the course.

Not doing "exercises in style" can be attractive, too. Small examples can mislead beginning students about what is important, or distract them with concepts they won't understand for a while. The wrong examples can damage their motivation to learn. In the procedural/object-oriented comparison, I have had reasonable success in our OOP course with a program for simple bank accounts and a small set of users. But I don't know how well this exercise would work for a larger and more diverse set of styles, at least not at a scale I could use in our courses.

I thought of this when @kaleidic tweeted, "I hope @cristalopes includes an array language among her variations." I do, too, but my next thought was, "Well, now Crista needs to use an example problem for which an array language is reasonably well-suited." If the problem is not well suited to array languages, the solution might look awkward, or verbose, or convoluted. A newcomer to array languages is left to wonder, "Is this a problem with array languages, or with the example?" Human nature being what it is, too many of us are prone to protect our own knowledge and assume that something is wrong with the new style.

An alternative approach is to get learners to suspend their disbelief for a while, learn some nuts and bolts, and then help them to solve bigger problems using the new style. My students usually struggle with this at first, but many of them eventually reach a point where they "get" the style. Solving a larger problem gives them a chance to learn the advantages and disadvantages of their new style, and retroactively learn more about the advantages and disadvantages of the styles they already know well. These trade-offs are the foundation of a really solid understanding of style.

I'm really intrigued by Queneau's idea. It seems that he uses a small example not to teach about each style in depth but rather to give us a taste. What does each style feel like in isolation? It is up to the aspiring writer to use this taste as a starting point, to figure out where each style might take you when used for a story of the writer's choosing.

That's a promising approach for programming styles, too, which is one of the reasons I am so looking forward to Crista's talk. As a teacher, I am a shameless thief of good ideas, so I am looking forward to seeing the example she uses, the way she solves it in the different styles, and the way she presents them to the crowd.

Another reason I'm looking forward to the talk is that I love programs, and this should be just plain fun.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

May 14, 2013 2:35 PM

An Unusual Scheme-ism This Semester

While grading final projects and final exams in my Programming Languages courses this week, I noticed my students using an unusual Scheme construction. Instead of writing:

(cons new-item result-of-recursive-call)

... a few troubled souls fell into the habit of writing:

(append (list new-item) result-of-recursive-call)

We have been using Scheme in this course for many years, and this may be the first semester that I've ever seen this. It is certainly the first time I saw it enough to notice it and wonder, "What's up with that?"

Of course, in previous semesters, a few students have always used a similar construction when putting new items at the back of the list:

(append result-of-recursive-call (list new-item))

This semester, though, a handful used (append ... (list ... all over the place. When I asked one of them about it after seeing it on an earlier homework assignment, he didn't have much to say, so we talked about the more direct solution a bit and moved on.

But after seeing it multiple times on the final exam, I have to wonder what is going on. Maybe they simply adopted this form as a mental shorthand, a one-size-fits-all tool that gave them one fewer thing to think about when writing code. Maybe, though, it masks a systematic error in thinking that I might address.

Out of curiosity, I ran a quick experiment to see what sort of time advantage (cons ... has over (append ... (list .... Surely, initializing the cdr field of a new list's cons cell to null, then asking append to walk there and set it to point to the other list wastes time. But how much?

In Racket, the penalty is actually quite small, in the ballpark of 40ms for every million executions. That's not surprising, given that append must examine only one cell before reaching the end of its first argument. This stands in contrast to the more general case of wrapping an O(n) append operation inside another O(n) operation, which my students encounter when learning how to implement more efficient algorithms using an accumulator variable.
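A quick experiment along those lines can be sketched in Racket roughly as follows. This is a reconstruction, not the code I actually ran: the list length and iteration count are arbitrary choices, and exact timings will vary by machine and Racket version.

```scheme
#lang racket

;; Compare (cons x lst) with (append (list x) lst) over many iterations.

(define lst (build-list 100 values))   ; stands in for result-of-recursive-call

(define (run-cons n)
  (for ([i (in-range n)])
    (cons 'new-item lst)))

(define (run-append n)
  (for ([i (in-range n)])
    (append (list 'new-item) lst)))

(time (run-cons 1000000))     ; time prints cpu/real/gc milliseconds
(time (run-append 1000000))   ; append walks its one-cell first argument each call
```

The key point the timings illustrate: append's cost here is bounded by the length of its *first* argument, which is a single cell, so the penalty is a constant per call rather than a walk over the whole result list.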

More important than any speed penalty, though, is the misunderstanding that enables or encourages a student to think about reassembling a structure of this sort with anything other than a straightforward call to cons. Seeing an anti-pattern of this sort in my students' code makes me want to uncover the pathology at play and root it out.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 10, 2013 4:03 PM

Using Language to Understand a Data Set

Today was our twice-annual undergraduate research presentation day. Every B.S. student must do an undergraduate research project and present the results publicly. For the last few years, we have pooled the presentations on the morning of the Friday in finals week, after all the exams are given and everyone has a chunk of time free to present. It also means that more students and professors can attend, which makes for a more engaging audience and a nice end to everyone's semester.

I worked with one undergraduate research student this spring. As I mentioned while considering the role of parsing in a compilers course, this student was looking for patterns in several years of professional basketball play-by-play data. His ultimate goal was to explore ways of measuring the impact of individual defensive performance in the NBA -- fairly typical Moneyball stuff applied to a skill that is not well measured or understood.

This project fell into my hands serendipitously. The student had approached a couple of other professors, who upon hearing the word "basketball" immediately pointed him to me. Of course, the project is really a data analytics project that just happens to involve a dataset from basketball, but... Fortunately, I am interested in both the approach and the domain!

As research sometimes does, this problem led the student to a new problem first. In order to analyze data in the way he wanted, he needed data of a particular sort. There is plenty of play-by-play data available publicly on the web, but it's mostly prepared for presentation in HTML. So he first had to collect the data by scraping the web, and then organize it into a data format amenable to analysis.

This student had taken my compiler course the last time around, and his ability to parse several files of similar but just-different-enough data proved to be invaluable. As presented on sites like nba.com, the data is nowhere near ready to be studied.

As the semester wore on, he and I came to realize that his project this semester wouldn't be the data analysis he originally intended to do. It was a substantial project simply to make sense of the data he had found.

As he presented his work today, I realized something further. He was using language to understand a data set.

He started by defining a grammar to model the data he found, so that he could parse it into a database. This involved recognizing categories of expression that were on the surface of the data, such as made and missed field goals, timeouts, and turnovers. When he ran this first version of his parser, he found unhandled entries and extended his grammar.

Then he looked at the semantics of the data and noticed discrepancies deeper in the data. The number of possessions his program observed in a game differed from the expected values, sometimes wildly and with no apparent pattern.

As we looked deeper, we realized that the surface syntax of the data often obscured some events that would extend or terminate a possession. A simple example is a missed FT, which sometimes ends a possession and sometimes not. It depends in part on the next event in the timeline.

To handle these cases, the student created new syntactic categories that enabled his parser to resolve such issues by recognizing composite events in the data. As he did this, his grammar grew, and his parser became better at building a more accurate semantic model of the game.
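The student's grammar isn't shown in this entry, but the flavor of those syntactic categories might look something like this BNF-ish sketch. All of the category names here are hypothetical, invented for illustration:

```
game        ::= event*
event       ::= shot | free-throw | rebound | turnover | timeout | ...
shot        ::= player ("makes" | "misses") shot-type

;; a composite category, added after noticing possession-count errors:
;; whether a missed free throw ends a possession depends on what follows it
ft-sequence ::= missed-free-throw rebound
```

Each pass over the data surfaces entries the grammar can't yet handle, and the grammar grows to cover them -- the same iterative loop described above.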

This turned out to be a semester-long project in its own right. He's still not done and intends to continue with this research after graduation. We were both a bit surprised at how much effort it took to corral the data, but in retrospect we should not have been too surprised. Data are collected and presented with many different purposes in mind. Having an accurate, deep model of the underlying phenomenon in question isn't always one of them.

I hope the student was pleased with his work and progress this semester. I was. In addition to its practical value toward solving a problem of mutual interest, it reminded me yet again of the value of language in understanding the world around us, and the remarkable value that the computational ideas we study in computer science have to offer. For some reason, it also reminded me, pleasantly, of the Racket Way. As I noted in that blog entry, this is really the essence of computer science.

Of course, if some NBA team were to give my student the data he needs in suitable form, he could dive into the open question of how better to measure individual defensive performance in basketball. He has some good ideas, and the CS and math skills needed to try them out.

Some NBA team should snatch this guy up.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 30, 2013 4:53 PM

Exceleration

A student stopped in for a chat late last week to discuss the code he was writing for a Programming Languages assignment. This was the sort of visit a professor enjoys most. The student had clearly put in plenty of time on his interpreter and had studied the code we had built in class. His code already worked. He wanted to talk about ways to make his code better.

Some students never reach this point before graduation. In Coders at Work, Bernie Cosell tells a story about leading teams of new hires at BBN:

I would get people -- bright, really good people, right out of college, tops of their classes -- on one of my projects. And they would know all about programming and I would give them some piece of the project to work on. And we would start crossing swords at our project-review meetings. They would say, "Why are you complaining about the fact that I have my global variables here, that I'm not doing this, that you don't like the way the subroutines are laid out? The program works."

They'd be stunned when I tell them, "I don't care that the program works. The fact that you're working here at all means that I expect you to be able to write programs that work. Writing programs that work is a skilled craft and you're good at it. Now, you have to learn how to program."

I always feel that we have done well by our students if we can get them to the point of caring about their craft before they leave us. Some students come to us already having this mindset, which makes for a very different undergraduate experience. Professors enjoy working with these students, too.

But what stood out to me most from this particular conversation was something the student said, something to this effect:

When we built the lexical addresser in class a few weeks ago, I didn't understand the idea and I couldn't write it. So I studied it over and over until I could write it myself and understand exactly why it worked. We haven't looked at lexical addressing since then, but the work I did has paid off every time we've written code to process programs in our little languages, including this assignment. And I write code more quickly on the exams now, too.

When he finished speaking, I could hardly contain myself. I wish I could bottle this attitude and give it to every student who ever thinks that easy material is an opportunity to take it easy in a course for a while. Or who thinks that the best response to difficult material is to wait for something easier to come along next chapter.

Both situations are opportunities to invest energy in the course. The returns on investment are deeper understanding of the material, sharper programming skills, and the ability to get stuff done.

This student is reaping now the benefits of an investment he made five weeks ago. It's a gift that will keep on giving long after this course is over.

I encourage students to approach their courses and jobs in this way, but the message doesn't always stick. As Clay Stone from City Slickers might say, I'm happy as a puppy with two peters whenever it does.

While walking this morning, I coined a word for this effect: exceleration. It's a portmanteau combining "excellence" and "acceleration", which fits this phenomenon well. As with compound interest and reinvested dividends, this sort of investment builds on itself over time. It accelerates learners on their path to mastering their craft.

Whatever you call it, that conversation made my week.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

April 25, 2013 4:03 PM

Toward a Course on Reading Code

Yesterday, I tweeted absent-mindedly:

It would be cool to teach a course called "Reading Code".

Reading code has been on my mind for a few months now, as I've watched my students read relatively small pieces of code in my Programming Languages course and as I've read a couple of small libraries while riding the exercise bike. Then I ran across John Regehr's short brainstorm on the topic, and something clicked. So I tweeted.

Reading code, or learning to do it, must be on the minds of a lot of people, because my tweet elicited quite a few questions and suggestions. It is an under-appreciated skill. Computer science programs rarely teach students how to do it, and then usually only implicitly, by having them hear a prof or other students talk about code they've read.

Several readers wanted to know what the course outline would be. I don't know. That's one of the things about Twitter or even a blog: it is easy to think out loud absent-mindedly without having much content in mind yet. It's also easier to express an interest in teaching a course than to design a good one.

Right now, I have only a few ideas about how I'd start. Several readers suggested Code Reading by Spinellis, which is the only textbook I know on the topic. It may be getting a little old these days, but many of the core techniques are still sound.

I was especially pleased that someone recommended Richard Gabriel's idea for an MFA in Software, in which reading plays a big role. I've used some of Dick's ideas in my courses before. Ironically, the last time I mentioned the MFA in Software idea in my blog was in the context of a "writing code" course, at the beginning of a previous iteration of Programming Languages!

That's particularly funny to me because someone replied to my tweet about teaching a course called "Reading Code" with:

... followed by a course "Writing Readable Code".

Anyone who has tried to grade thirty evolving language interpreters each week appreciates this under-appreciated skill.

Chris Demwell responded to my initial tweet with direct encouragement: Write the course, or at least an outline, and post it. I begged indulgence for lack of time as the school year ends and said that maybe I can take a stab this summer. Chris's next tweet attempted to pull me into the 2010s:

1. Write an outline. 2. Post on github. 3. Accept pull requests. Congrats, you're an editor!

The world has indeed changed. This I will do. Watch for more soon. In the meantime, feel free to e-mail me your suggestions. (That's an Old School pull request.)


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

April 21, 2013 10:25 AM

Catnip for Programmers

This morning, Maciej Ceglowski of Pinboard introduced me to the Matasano crypto challenges, a set of exercises created by Thomas Ptacek and his team as a tool for teaching programmers a little about cryptography, some of its challenges, and the need for more awareness of how easy it is to do it wrong. With the coming break from the grind of the academic year, I plan on giving them a try.

After having completed the exercises himself, Ceglowski observes:

Crypto is like catnip for programmers. It is hard to keep us away from it, because it's challenging and fun to play with. And programmers respond very badly to the insinuation that they're not clever enough to do something. We see the F-16 just sitting there, keys in the ignition, no one watching, lights blinking, ladder extended. And some infosec nerd is telling us we can't climb in there, even though we just want to taxi around a little and we've totally read the manual.

I've noticed this with a number of topics in computing. In addition to cryptography, data compression and sorting/searching are sirens to the best programmers among our students. "What do you mean we can't do better?"

For many undergrads, the idea of writing a compiler seems a mystery. Heck, I admit to my students that even after years of teaching the course I remain in awe of my language tools and the people who build them. This challenge keeps a steady if relatively small stream of programmers flowing into our "Translation of Programming Languages" project course.

One of the great things about all these challenges is that after we tackle them, we have not only the finished product in hand but also what we learn about the topic -- and ourselves -- along the way. Then we are ready for a bigger challenge, and another program to write.

For CS faculty, catnip topics are invaluable ways to draw more students into the spell of computing, and more deeply. We are always on the lookout.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 10, 2013 4:03 PM

Minor Events in the Revolution at Universities

This morning I ran across several articles that had me thinking yet again about the revolution I see happening in the universities (*).

First, there was this recent piece in the New York Times about software that grades essays. Such software is probably essential for MOOCs in many disciplines, but it would also be useful in large lecture sections of traditional courses at many universities. The software isn't perfect, and skeptics abound. But the creator of the EdX software discussed in the article says:

This is machine learning and there is a long way to go, but it's good enough and the upside is huge.

It's good enough, and the upside is huge. Entrenched players scoff. Classic disruption at work.

Then there was this piece from the Nieman Journalism Lab about an online Dutch news company that wants readers to subscribe to individual journalists. Is this really news in 2013? I read a lot of technical and non-technical material these days via RSS feeds from individual journalists and bloggers. Of course, that's not the model yet for traditional newspapers and magazines.

... but that's the news business. What about the revolution in universities? The Nieman Lab piece reminded me of an old article in Vanity Fair about Politico, a news site founded by a small group of well-known political journalists who left their traditional employers to start the company. They all had strong "personal brands" and journalistic credentials. Their readers followed them to their new medium. Which got me to thinking...

What would happen if the top 10% of the teachers at Stanford or Harvard or Williams College just walked out to start their own university?

Of course, in the time since that article was published, we have seen something akin to this, with the spin-off of companies like Coursera and Udacity. However, these new education companies are partnering with traditional universities and building off the brands of their partners. At this point in time, the brand of a great school still trumps the individual brands of most all its faculty. But one can imagine a bolder break from tradition.

What happens when technology gives a platform to a new kind of teacher who bypasses the academic mainstream to create and grow a personal brand? What happens when this new kind of teacher bands together with a few like-minded renegades to use the same technology to scale up to the size of a traditional university, or more?

That will never happen, or so many of us in the academy are saying. This sort of thinking is what makes the Dutch news company mentioned above seem like such a novelty in the world of journalism. Many journalists and media companies, though, now recognize the change that has happened around them.

Which leads to a final piece I read this morning, a short blog entry by Dave Winer about Ezra Klein's epiphany on how blogging and journalism are now part of a single fabric. Winer says:

It's tragic that it took a smart guy like Klein so long to understand such a basic structural truth about how news, his own profession, has been working for the last 15 years.

I hope we aren't saying the same thing about the majority of university professors fifteen or twenty years from now. As we see in computers that grade essays, sometimes a new idea is good enough, and the upside is huge. More and more people will experiment with good-enough ideas, and even ideas that aren't good enough yet, and as they do the chance of someone riding the upside of the wave to something really different increases. I don't think MOOCs are a long-term answer to any particular educational problem now or in the future, but they are one of the laboratories in which these experiments can be played out.

I also hope that fifteen or twenty years from now someone isn't saying about skeptical university professors what Winer says so colorfully about journalists skeptical of the revolution that has redefined their discipline while they worked in it:

The arrogance is impressive, but they're still wrong.

~~~~

(*) Nearly four years later, Revolution Out There -- and Maybe In Here remains one of my most visited blog entries, and one that elicits more reader comments than most. I think it struck a chord.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

April 07, 2013 2:23 PM

To Solve Or Not To Solve

This week I bumped into a familiar tension that arises whenever I assign a new homework problem to students.

Path 1. I assign a new problem before I solve it myself. The problem turns out to be too difficult, or includes a complication or two that I didn't anticipate. Students become frustrated, especially the weaker ones. Because of the distraction, most everyone misses the goal of the assignment.

Path 2. I solve a new problem before I assign it. I run into a couple of unexpected wrinkles, things that make the problem less straightforward than I had planned. "This will distract my students," I think, so I iron out a wrinkle here and eliminate a distraction there. Then I assign the problem to my students. The result feels antiseptic, unchallenging. Some students are bored with the problem, especially the stronger ones.

In this week's case, I followed the first path. I assigned a new problem in my Programming Languages course before solving it myself. When I sat down to whip up my solutions, I realized the problem held a couple of surprises for me. I had a lot of fun with those surprises and was happy with the code that resulted. But I also realized that my students will have to do more fancy thinking than I had planned on. Nothing too tricky, just a couple of non-obvious steps along the way to an obvious solution.

Will that defeat the point of assigning the problem? Will my stronger students be happy for the challenge? Will my weaker students be frustrated, or angry at their inability to solve a problem that looks so much like another they have already solved? Will the problem help students learn something new about the topic at hand?

I ultimately believe that students benefit strongly from the challenge of problems that have not been pre-digested for them by the instructor or textbook. When we take too much care in preparing assignments ahead of time, we rob students of the joy of solving a problem that has rough edges.

Joy, and the sense of accomplishment that comes from taking on a real challenge. Problems in the world typically have rough edges, or throw wrinkles at us when we aren't looking for them.

Affording students the joy of solving a real problem is perhaps especially important for the stronger students, who often have to live with a course aimed at the middle of the curve. But it's just as important for the rest of the class. Skill and confidence grow out of doing something worth doing, even if it takes a little help from the professor.

I continue to use both approaches when I create assignments, sometimes solving a new problem first and sometimes trusting my instinct. The blend keeps assignments from veering too far in one direction or the other, I think, which gives students some balance.

However, I am usually most happy when I let a new problem surprise us all. I try to keep these problems on track by paying closer attention to students as they begin to work on them. When we run into an unexpected rough edge, I try to intervene with just enough assistance to get them over the complexity, but not so much as to sterilize the problem.

Finding the right balance between too clean and too rough is a tough problem for a teacher to solve. It is a problem worth solving, and a source of both disappointment and joy.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

April 05, 2013 3:21 PM

The Role of Parsing in a Compilers Course

I teach compilers again this fall. I'm looking forward to summer, when I'll have a chance to write some code and play with some ideas for the course.

This morning I thought a bit about a topic that pops up every time I prep the course. The thoughts were prompted by a tweet from James Coglan, which said "Really wish this Compilers course weren't half about parsing. ... get on with semantics." The ellipsis is mine; James's tweet said something about using lex/yacc to get something up and running fast. Then, presumably, we could get on to the fun of semantics.

This is a challenge for my compilers course, too. I know I don't want to rush through scanning and parsing, yet I also wish I had more time for static analysis, optimization, and code generation. Even though I know the value of parsing, I wish I had equal time for a lot of other cool topics.

Geoff Wozniak's response expressed one of the reasons parsing still has such a large role in my compilers course, and so many others:

Parsing is like assembly language: it seems superfluous at the time, but provides deep understanding later. It's worth it.

That's part of what keeps me from de-emphasizing it in my course. Former students often report back to me that they have used their skill at writing parsers frequently in their careers, whether for parsing DSLs they whip up or for making sense of a mess of data they want to process.

A current student is doing an undergrad research project that involves finding patterns in several years of professional basketball play-by-play data, and his ability to parse several files of similar but just-different-enough data proved invaluable. Of course, he was a bit surprised that corralling the data took as much effort as it did. Kind of like how scanning and parsing are such a big part of a compiler project.

I see now that James has tweeted a retraction:

... ppl are RTing something I said about wishing the Compilers course would get done with parsing ASAP. Don't believe this any more.

I understand the change of opinion. After writing a compiler for a big language and learning the intricacies that are possible, it's easy to reach Geoff's position: a deep understanding comes from the experience.

That doesn't mean I don't wish my semester were twenty weeks instead of fifteen, so that I could go deeper on some other topics, too. I figure there will always be some tension in the design of the course for just that reason.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 30, 2013 8:43 AM

"It's a Good Course, But..."

Earlier this week I joined several other department heads to eat lunch with a bunch of high school teachers who were on campus for the Physics Olympics. The teachers were talking shop about the physics courses at their schools, and eventually the conversation turned to AP Physics. One of the teachers said, "It's a good course, but..."

A lot of these teachers would rather not offer AP Physics at all. One teacher described how in earlier days they were able to teach an advanced physics course of their own design. They had freedom to adapt to the interest of their students and to try out new ideas they encountered at conferences. Even though the advanced physics course had first-year physics as a prerequisite, they had plenty of students interested and able to take the second course.

The introduction of AP Physics created some problems. It's a good course, they all agreed, but it is yet another AP course for their students to take, and yet another AP exam for the students to prepare for. Most students can't or don't want to take all the AP courses, due to the heavier workload and often grueling pace. So in the end, they lose potential students who choose not to take the physics class.

Several of these teachers tried to make this case to heads of their divisions or to their principals, but to no avail.

This makes me sad. I'd like to see as many students taking science and math courses in high school as possible, and creating unnecessary bottlenecks hurts that effort.

There is a lot of cultural pressure these days to accelerate the work that HS students do. K-12 school districts and their administrators see the PR boon of offering more, and more advanced courses. State legislators are creating incentives for students to earn college credit while in high school, and funding for schools can reflect that. Parents love the idea of their children getting a head start on college, both because it might save money down the line and because they earn some vicarious pleasure in the achievement of their children.

On top of all this, the students themselves often face a lot of peer pressure from their friends and other fellow students to be doing and achieving more. I've seen that dynamic at work as my daughters have gone through high school.

Universities don't seem as keen about AP as they used to, but they send a mixed message to parents and students. On the one hand, many schools give weight in their admission decisions to the number of AP courses completed. This is especially true with more elite schools, which use this measure as a way to demonstrate their selectivity. Yet many of those same schools are reluctant to give full credit to students who pass the AP exam, at least as major credit, and require students to take their intro course anyway.

This reluctance is well-founded. We don't see any students who have taken AP Computer Science, so I can't comment on that exam, but I've talked with several Math faculty here about their experiences with calculus. They say that AP Calculus teaches a lot of good material, but the rush to cover required calculus content often leaves students with weak algebra skills. They manage to succeed in the course despite these weaknesses, but when they reach more advanced university courses -- even Calc II -- these weaknesses come back to haunt them.

As a parent of current and recent high school students, I have observed the student experience. AP courses try to prepare students for the "college experience" and as a result cover a lot of material. The students see them as grueling experiences, even when they enjoy the course content.

That concerns me a bit. For students who know they want to be math or science majors, these courses are welcome challenges. For the rest of the students, who take the courses primarily to earn college credit or to explore the topic, these courses are so grueling that they dampen the fun of learning.

Call me old-fashioned, but I think of high school as a time to learn about a lot of different things, to sample broadly from all areas of study. Sure, students should build up the skills necessary to function in the workplace and go to college, but the emphasis should be on creating a broadly educated citizen, not training a miniature college student. I'd rather students get excited about learning physics, or math, or computer science, so that they will want to dive deeper when they get to college.

A more relaxed, more flexible calculus class or physics course might attract more students than a grueling AP course. This is particularly important at a time when everyone is trying to increase interest in STEM majors.

My daughters have had a lot of great teachers, both in and out of their AP courses. I wish some of those teachers had had more freedom to spark student interest in the topic, rather than student and teacher alike facing the added pressure of taking the AP exam, earning college credits, and affecting college admission decisions.

It's a good course, but feel the thrill first.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

March 22, 2013 9:17 AM

Honest Answers: Learning APL

After eighteen printed pages showing the wonders of APL in A Glimpse of Heaven, Bernard Legrand encourages programmers to give the language a serious look. But he cautions APL enthusiasts not to oversell the ease of learning the language:

Beyond knowledge of the basic elements, correct APL usage assumes knowledge of methods for organising data, and ways specific to APL, of solving problems. That cannot be learnt in a hurry, in APL or any other language.

Legrand is generous in saying that learning APL takes the same amount of time as learning any other language. In my experience, both as a learner of languages and as a teacher of programmers, languages and programming styles that are quite different from one's experience take longer to learn than more familiar topics. APL is one of those languages that requires us to develop entirely new ways of thinking about data and process, so it will take most people longer to learn than yet another C-style imperative language or OO knock-off.

But don't be impatient. Wanting to move too quickly is a barrier to learning and performing at all scales, and too often leads us to give up too soon. If you give up on APL too soon, or on functional programming, or OOP, you will never get to glimpse the heaven that experienced programmers see.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

March 08, 2013 3:50 PM

Honest Answers: Debugging

We have all been there:

Somehow, at some point in every serious programming project, it always comes down to the last option: stare at the code until you figure it out. I wish I had a better answer, but I don't. Anyway, it builds character.

This is, of course, the last resort. We need to teach students better ways to debug before they have to fall back on what looks a lot like wishful thinking. Fortunately, John Regehr lists this approach as the last resort in his lecture on How to Debug. Before he tells students to fall back to the place we all have to fall back to occasionally, he outlines an explicit, evidence-driven process for finding errors in a program.

I like that Regehr includes this advice for what to do after you find a bug: step back and figure out what error in thinking led to the bug.

An important part of learning from a mistake is diagnosing why you made it, and then taking steps wherever possible to make it difficult or impossible to make the same mistake again. This may involve changing your development process, creating a new tool, modifying an existing tool, learning a new technique, or some other act. But it requires an act. Learning rarely just happens.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

March 07, 2013 3:31 PM

A Programming Koan

Student: "I didn't have time to write 150 lines of code for the homework."

Master: "That's fine. It requires only 50."

Student: "Which 50?"

I have lived this story several times recently, as the homework in my Programming Languages course has become more challenging. A few students do not complete the assignment because they do not spend enough time on the course, either in practice or performance. But most students do spend enough time, both in practice and on the assignment. Indeed, they spend much more time on the assignment than I intend.

When I see their code, I know why. They have written long solutions: code with unnecessary cases, unnecessary special cases, and unnecessary helper functions. And duplication -- lots and lots of duplication. They run out of time to write the ten lines they need to solve the last problem on the set because they spent all their time writing thirty lines on each of the preceding problems, where ten would have done quite nicely.

Don't let anyone fool you. Students are creative. The trick is to help them harness their creativity for good. The opposite of good here is not evil, but bad code -- and too much code.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 28, 2013 2:52 PM

The Power of a Good Abstract

Someone tweeted a link to Philip Greenspun's M.S. thesis yesterday. This is how you grab your reader's attention:

A revolution in earthmoving, a $100 billion industry, can be achieved with three components: the GPS location system, sensors and computers in earthmoving vehicles, and SITE CONTROLLER, a central computer system that maintains design data and directs operations. The first two components are widely available; I built SITE CONTROLLER to complete the triangle and describe it here.

Now I have to read the rest of the thesis.

You could do worse than use Greenspun's first two sentences as a template for your next abstract:

A revolution in <major industry or research area> can be achieved with <n> components: <component-1>, <component-2>, ... and <component-n>. The first <n-1> components are widely available. I built <program name> to meet the final need and describe it here.

I am adding this template to my toolbox of writing patterns, alongside Kent Beck's four-sentence abstract (scroll down to Kent's name), which generalizes the idea of one startling sentence that arrests the reader. I also like good advice on how to write concise, incisive thesis statements, such as that in Matt Might's Advice for PhD Thesis Proposals and Olin Shivers's classic Dissertation Advice.

As with any template or pattern, overuse can turn a good idea into a cliché. If readers repeatedly see the same cookie-cutter format, it begins to look stale and will cause the reader to lose interest. So play with variations on the essential theme: I have solved an important problem. This is my solution.

If you don't have a great abstract, try again. Think hard about your own work. Why is this problem important? What is the big win from my solution? That's a key piece of advice in Might's advice for graduate students: state clearly and unambiguously what you intend to achieve.

Indeed, approaching your research in a "test-driven" way makes a lot of sense. Before embarking on a project, try to write the startling abstract that will open the paper or dissertation you write when you have succeeded. If you can't identify the problem as truly important, then why start at all? Maybe you should pick something more valuable to work on, something that matters enough that you can write a startling abstract for the result. That's a key piece of advice shared by Richard Hamming in his You and Your Research.

And whatever you do, don't oversell a minor problem or a weak solution with an abstract that promises too much. Readers will be disappointed at best and angry at worst. If you oversell even a little bit too many times, you will become like the boy who cried wolf. No one will believe your startling claim even when it's on the mark.

Greenspun's startling abstract ends as strongly as it begins. Of course, it helps if you can close with a legitimate appeal to ameliorating poverty around the world:

This area is exciting because so much of the infrastructure is in place. A small effort by computer scientists could cut the cost of earthmoving in half, enabling poor countries to build roads and rich countries to clean up hazardous waste.

I'm not sure adding another automated refactoring to Eclipse or creating another database library can quite rise to the level of empowering the world's poor. But then, you may have a different audience.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns, Teaching and Learning

February 27, 2013 11:52 AM

Programming, Writing, and Clear Thinking

This Fortune Management article describes a technique Jeff Bezos uses in meetings of his executive team: everyone begins by "quietly absorbing ... six-page printed memos in total silence for as long as 30 minutes".

There is a good reason, Bezos knows, for an emphasis on reading and the written word:

There is no way to write a six-page, narratively structured memo and not have clear thinking.

This is certainly true for programming, that unique form of writing that drives the digital world. To write a well-structured, six-page computer program to perform a task, you have to be able to think clearly about your topic.

Alas, the converse is not true, at least not without learning some specific skills and practicing a lot. But then again, that makes it just like writing narratives.

My Programming Languages students this semester are learning that, for functional programming in Scheme, the size limit is somewhat south of six pages. More along the lines of six lines.

That's a good thing if your goal is clear thinking. Work hard, clarify your thoughts, and produce a small function that moves you closer to your goal. It's a bad thing if your goal is to get done quickly.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 20, 2013 3:32 PM

A Lesson about Learning from a Self-Aware Teacher

In a reminiscence on his experience as a student, John Cook writes:

I enjoyed learning about it as a student and I enjoyed teaching it later. (Or more accurately, I enjoyed being exposed to it as a student and really learning it later when I had to teach it.)

It is a commonplace for anyone who has taught that we learn a lot more about any topic when we teach it -- even a topic in which we are acknowledged experts. Between organizing material for instruction and interacting with people as they learn, we learn an awful lot ourselves.

There is, however, a hidden gem in John's comment that is not so commonly talked about: "I enjoyed being exposed to it as a student...".

As teachers, we do well to remember that our students aren't really learning something when they take our courses, especially when the course is their first encounter with the material. We are merely exposing them to the topic, giving them the lay of the land and a little vocabulary. The course is an opportunity to engage with the material, perhaps again. If we don't keep this in mind, we may deceive ourselves with unrealistic expectations about what students will know and be able to do at the end of the course.

A second advantage of remembering this truth is that we may be on guard to create opportunities to deepen their exposure, through homework and projects and readings that pull them in. It is through our students' own efforts that learning takes place, and that our courses succeed.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 18, 2013 12:59 PM

Code Duplication as a Hint to Think Differently

Last week, one of my Programming Languages students sent me a note saying that his homework solution worked correctly but that he was bothered by some duplicated code.

I was so happy.

Any student who has me for class for very long hears a lot about the dangers of duplication for maintaining code, and also that duplication is often a sign of poor design. Whenever I teach OOP or functional programming, we learn ways to design code that satisfy the DRY principle and ways to eliminate duplication via refactoring when it does sneak in.

I sent the student an answer, along with hearty congratulations for recognizing the duplication and wanting to eliminate it. My advice: make the two arms of the if expression parallel first, then factor the common code out of the choice.

When I sat down to blog the solution, I had a sense of deja vu... Hadn't I written this up before? Indeed I had, a couple of years ago: Increasing Duplication to Eliminate Duplication. Even in the small world of my own teaching, it seems there is nothing new under the sun.

Still, there was a slightly different feel to the way I talked about this in class later that day. The question had come earlier in the semester this time, so the code involved was even simpler. Instead of processing a vector or a nested list of symbols, we were processing a flat list of symbols. And, instead of applying an arbitrary test to the list items, we were simply counting occurrences of a particular symbol, s.

The duplication occurred in the recursive case, where the procedure handles a pair:

    (if (eq? s (car los))
        (+ 1 (count s (cdr los)))      ; <---
        (count s (cdr los)))           ; <---

Then we make the two sub-cases more parallel:

    (if (eq? s (car los))
        (+ 1 (count s (cdr los)))      ; <---
        (+ 0 (count s (cdr los))))     ; <---

And then use distributivity to push the choice down a level:

    (+ (if (eq? s (car los)) 1 0)
       (count s (cdr los)))            ; <--- just once!

This time, I made a point of showing the students that not only does this solution eliminate the duplication, it more closely follows the guideline to follow the shape of the data:

When defining a program to process an inductively-defined data type, the structure of the program should follow the structure of the data.

This guideline helps many programmers begin to write recursive programs in a functional style, rather than an imperative style.

Note that in the first code snippet above, the if expression chooses between two different solutions, depending on whether we see the symbol s in the first part of the pair or not. That's imperative thinking.

But look at the list-of-symbols data type:

    <list-of-symbols> ::= ()
                        | (<symbol> . <list-of-symbols>)

How many occurrences of s are in a pair? Obviously, the number of s's found in the car of the list plus the number of s's found in the cdr of the list. If we design our solution to match the code to the data type, then the addition operation should be at the top to begin:

    (+ ; number of s's found in the car
       ; number of s's found in the cdr )

If we define the answer for the problem in terms of the data type, we never create the duplication-by-if in the first place. We think about solving the subproblems for the car and the cdr, fill in the blanks, and arrive immediately at the refactored code snippet above.
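Assembled into a single procedure, the data-directed solution is short. Here is a complete sketch, using the same names s and los as the snippets above and adding the base case for the empty list, which the grammar's first alternative tells us must return 0:

```scheme
;; count : symbol x list-of-symbols -> number
;; The structure of the code follows the structure of the data:
;; a list of symbols is either () or a pair.
(define count
  (lambda (s los)
    (if (null? los)
        0                               ; ()
        (+ (if (eq? s (car los)) 1 0)   ; s's found in the car
           (count s (cdr los))))))      ; s's found in the cdr
```

For example, (count 'a '(b a c a)) evaluates to 2. Note that the if that remains chooses between two values, not two computations -- the recursive structure comes entirely from the data type.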

I have been trying to help my students begin to "think functionally" sooner this semester. There is a lot of room for improvement yet in my approach. I'm glad this student asked his question so early in the semester, as it gave me another chance to model "follow the data" thinking. In any case, his thinking was on the right track.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

February 17, 2013 12:16 PM

The Disruption of Education: B.F. Skinner, MOOCs, and SkillShare

Here are three articles, all different, but with a connection to the future of education.

•   Matthew Howell, Teaching Programming

Howell is a software developer who decided to start teaching programming on the side. He offers an on-line course through SkillShare that introduces non-programmers to the basic concepts of computer programming, illustrated using JavaScript running in a browser. This article describes some of his reasons for teaching the course and shares a few things he has learned. One was:

What is the ideal class size? Over the year, I've taught classes that ranged in size from a single person to as many as ten. Through that experience, I've settled on five as my ideal.

Anyone who has taught intro programming in a high school or university is probably thinking, um, yeah, that would be great! I once taught an intermediate programming section with fifty or so people, though most of my programming courses have ranged from fifteen to thirty-five students. All other things being equal, smaller is better. Helping people learn to write and make things almost always benefits from one-on-one time and time for small groups to critique design together.

Class size is, of course, one of the key problems we face in education these days, both K-12 and university. For a lot of teaching, n = 5 is just about perfect. For upper-division project courses, I prefer four groups of four students, for a total of sixteen. But even at that size, the costs a university incurs in offering such sections are rising a lot faster than its revenues.

With MOOCs all the rage, Howell is teaching at the other end of the spectrum. I expect the future of teaching to see a lot of activity at both scales. Those of us teaching in the middle face bleaker prospects.

•   Mike Caulfield, B. F. Skinner on Teaching Machines (1954)

Caulfield links to this video of B.F. Skinner describing a study on the optimal conditions for self-instruction using "teaching machines" in 1954. Caulfield points out that, while these days people like to look down on Skinner's behaviorist view of learning, he understood education better than many of his critics, and that others are unwittingly re-inventing many of his ideas.

For example:

[Skinner] understands that it is not the *machine* that teaches, but the person that writes the teaching program. And he is better informed than almost the entire current educational press pool in that he states clearly that a "teaching machine" is really just a new kind of textbook. It's what a textbook looks like in an age where we write programs instead of paragraphs.

That's a great crystallizing line by Caulfield: A "teaching machine" is what a textbook looks like in an age where we write programs instead of paragraphs.

Caulfield reminds us that Skinner said these things in 1954 and cautions us to stop asking "Why will this work?" about on-line education. That question presupposes that it will. Instead, he suggests we ask ourselves, "Why will this work this time around?" What has changed since 1954, or even 1994, that makes it possible this time?

This is a rightly skeptical stance. But it is wise to be asking the question, rather than presupposing -- as so many educators these days do -- that this is just another recursion of the "technology revolution" that never quite seems to revolutionize education after all.

•   Clayton Christensen in Why Apple, Tesla, VCs, academia may die

Christensen didn't write this piece, but reporter Cromwell Schubarth quotes him heavily throughout on how disruption may be coming to several companies and industries of interest to his Silicon Valley readership.

First, Christensen reminds young entrepreneurs that disruption usually comes from below, not from above:

If a newcomer thinks it can win by competing at the high end, "the incumbents will always kill you".

If they come in at the bottom of the market and offer something that at first is not as good, the legacy companies won't feel threatened until too late, after the newcomers have gained a foothold in the market.

We see this happening in higher education now. Yet most of my colleagues here on the faculty and in administration are taking the position that leaves legacy institutions most vulnerable to overthrow from below. "Coursera [or whoever] can't possibly do what we do", they say. "Let's keep doing what we do best, only better." That will work, until it doesn't.

Says Christensen:

But now online learning brings to higher education this technological core, and people who are very complacent are in deep trouble. The fact that everybody was trying to move upmarket and make their university better and better and better drove prices of education up to where they are today.

We all want to get better. It's a natural desire. My university understands that its so-called core competency lies in the niche between the research university and the liberal arts college, so we want to optimize in that space. As we seek to improve, we aspire to be, in our own way, like the best schools in their niches. As Christensen pointed out in The Innovator's Dilemma, this is precisely the trend that kills an institution when it meets a disruptive technology.

Later in the article, Christensen talks about how many schools are getting involved in online learning, sometimes investing significant resources, but almost always in service of the existing business model. Yet other business models are being born, models that newcomers are willing -- and sometimes forced -- to adopt.

One or more of these new models may be capable of toppling even the most successful institutions. Christensen describes one such candidate, a just-in-time education model in which students learn something, go off to use it, and then come back only when they need to learn what they need to know in order to take their next steps.

This sort of "learn and use", on-the-job learning, whether online or in person, is a very different way of doing things from school as we know it. It is not especially compatible with the way most universities are organized to educate people. It is, however, plenty compatible with on-line delivery and thus offers newcomers to the market the pebble they may use to bring down the university.

~~~~

The massively open on-line course is one form the newcomers are taking. The smaller, more intimate offering enabled by the likes of SkillShare is another. It may well be impossible for legacy institutions caught in the middle to fend off challenges from both directions.

As Caulfield suggests, though, we should be skeptical. We have seen claims about technology upending schools before. But we should adopt the healthy skepticism of the scientist, not the reactionary skepticism of the complacent or the scared. The technological playing field has changed. What didn't work in 1954 or 1974 or 1994 may well work this time.

Will it? Christensen thinks so:

Fifteen years from now more than half of the universities will be in bankruptcy, including the state schools. In the end, I am excited to see that happen.

I fear that universities like mine are at the greatest risk of disruption, should the wave that Christensen predicts come. I don't know that many university faculty are excited to see it happen. I just hope they aren't too surprised if it does.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

February 12, 2013 2:53 PM

Student Wisdom on Monad Tutorials

After class today, a few of us were discussing the market for functional programmers. Talk turned to Clojure and Scala. A student who claims to understand monads said:

To understand monad tutorials, you really have to understand monads first.

Priceless. The topic of today's class was mutual recursion. I think we are missing a base case here.

I don't know whether this is a problem with monads, a problem with the writers of monad tutorials, or a problem with the rest of us. If it is true, then it seems a lot of people are unclear on the purpose of a tutorial.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

February 10, 2013 11:18 AM

Meaningful Interaction and On-Line Education

After tweeting a jewel from Clay Shirky's latest article, I read the counterpoint article by Maria Bustillos, Venture Capital's Massive, Terrible Idea For The Future Of College, and an earlier piece by Darryl Tippens in the Chronicle of Higher Education, Technology Has Its Place: Behind a Caring Teacher. I share these writers' love of education and am sympathetic to many of their concerns about the future of the traditional university. In the end, though, I think that economic and technological forces will unbundle university education whether we like it or not, and our best response is not simply to lament the loss. It is to figure out how best to preserve the values of education in the future, not to mention its value to future citizens.

While reading Bustillos and Tippens, I thought about how the lab sciences (such as physics, biology, and chemistry) are in many ways at a bigger disadvantage in an on-line world than disciplines that traffic primarily in ideas, such as the humanities. Lab exercises are essential to learning science, but they require equipment, consumable supplies, and dedicated space that are not typically available to us in our homes. Some experiments are dangerous enough that we don't want students trying them without the supervision of trained personnel.

Humanities courses have it relatively easier. Face-to-face conversation is, of course, a huge part of the educational experience there. But the sharing of ideas, and the negotiation of shared understanding, can be conducted in a number of ways that are amenable to on-line communication. Reading and writing have long played a central role in the growth of knowledge, alongside teaching in a classroom and conversation with like-minded individuals in close personal settings.

I soon realized something. Bustillos and Tippens, like so many others, seem to assume that collaboration and meaningful interaction cannot happen on-line.

Bustillos puts it harshly, and inaccurately:

MOOCs are an essentially authoritarian structure; a one-way process in which the student is a passive recipient required to do nothing except "learn."

Tippens expresses the sentiment in a more uplifting way, quoting Andrew Delbanco:

Learning is a collaborative rather than a solitary process.

On-line education does not have to be passive any more than a classroom has to be passive. Nor must it be solitary; being alone a lot of the time does not always mean doing alone.

A few faculty in my department have begun to create on-line versions of their courses. In these initial efforts, interaction among students and teacher has been paramount. Chat rooms, e-mail, wikis, and discussion boards all provide avenues for students to interact with the instructor and among themselves. We are still working at a small scale and primarily with students on-campus, so we have had the safety valve of face-to-face office hours available. Yet students often prefer to interact off hours, after work or over the weekend, and so the on-line channels have proved to be the most popular.

Those in the software world have seen how collaboration can flourish on-line. A lot of the code that makes our world go is developed and maintained by large, distributed communities whose only opportunities to collaborate are on-line. These developers may be solitary in the sense that they work in a different room from their compatriots, but they are not solitary in the sense of being lonesome, desolate, or secluded. They interact as a matter of course. Dave Humphrey has been using this model and its supporting technology as part of his teaching at Seneca College for a few years now. It's exciting.

My own experience with on-line interaction goes back to the 1980s, when I went to graduate school and discovered Usenet. Over the next few years, I made many good friends, some of whom I see more often in-person than I see most friends from my high school and college years. Some, I have never met in person, yet I consider them good friends. Usenet enabled me to interact with people on matters of purely personal interest, such as basketball and chess, but also on matters of academic value.

In particular, I was able to discuss AI, my area of study, with researchers from around the world. I learned a lot from them, and those forums gave me a chance to sharpen my ability to express ideas. The time scale was between the immediate conversation of the classroom and the glacial exchange of conference and journal papers. These on-line conversations gave me time to reflect before responding, while still receiving feedback in a timely fashion. They were invaluable.

Young people today grow up in a world of on-line interaction. Most of their interactions on-line are not deep, to be sure, but some are. And more could be, if someone could show them the way. That's the educator's job. The key is that these youth know that on-line technology allows them to be active, to create, and to learn. Telling them that on-line learning must be passive or solitary will fall on deaf ears.

Over twenty years of teaching university courses has taught me how important face-to-face interaction with students can be. How well experiments in on-line education address the need for interpersonal communication will go a long way to determining whether they succeed as education. But assuming that collaboration and meaningful interaction cannot happen on-line is surely a losing proposition.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 03, 2013 11:10 AM

Faulkner Teaches How to Study

novelist William Faulkner, dressed for work

From this Paris Review interview with novelist William Faulkner:

INTERVIEWER

Some people say they can't understand your writing, even after they read it two or three times. What approach would you suggest for them?

FAULKNER

Read it four times.

The first three times through the book are sunk cost. At this moment, you don't understand. What should you do? Read it again.

I'm not suggesting you keep doing the same failing things over and over. (You know what Einstein said about insanity.) If you read the full interview, you'll see that Faulkner isn't suggesting that, either. He's suggesting you get back to work.

Studying computer science is different from reading literature. We can approach our study perhaps more analytically than the novel reader. And we can write code. As an instructor, I try to have a stable of ideas that students can try when they are having trouble grasping a new concept or understanding a reading, such as:

  • Assemble a list of specific questions to ask your prof.
  • Talk to a buddy who seems to understand what you don't.
  • Type in the code from the paper character by character, thinking about it as you do.
  • Draw a picture.
  • Try to explain the parts you do understand to another student.
  • Focus on one paragraph, and work backward from there to the ideas it presumes you already know.
  • Write your own program.

One thing that doesn't work very well is being passive. Often, students come to my office and say, "I don't get it." They don't bring much to the session. But the best learning is not passive; it's active. Do something. Something new, or just more.

Faulkner is quite matter-of-fact about creating and reading literature. If it isn't right, work to make it better. Technique? Method? Sure, whatever you need. Just do the work.

This may seem like silly advice. Aren't we all working hard enough already? Not all of us, and not all the time. I sometimes find that when I'm struggling most, I've stopped working hard. I get used to understanding things quickly, and then suddenly I don't. Time to read it again.

I empathize with many of my students. College is a shock to them. Things came easily in high school, and suddenly they don't. These students mean well but seem genuinely confused about what they should do next. "Why don't I understand this already?"

Sometimes our impatience is born from such experience. But as Bill Evans reminds us, some problems are too big to conquer immediately. He suggests that we accept this up front and enjoy the whole trip. That's good advice.

Faulkner shrugs his shoulders and tells us to get back to work.

~~~~

PHOTO. William Faulkner, dressed for work. Source: The Centered Librarian.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

January 24, 2013 4:32 PM

Real-Life Examples of Quotation in Scheme

The new semester is fully underway, and I'm already enjoying Programming Languages. My Tuesday session this week felt like a hodgepodge of topics, including Scheme definitions and conditionals, and didn't inspire my students much. Today's session on pairs and lists seemed to go much more smoothly, at least from my side of the classroom.

One thing that has been different the first two weeks this time around has been several questions about the quote character in Scheme, which is shorthand for the special form quote.

The purpose of the quote is to tell the interpreter to take its argument literally. When the argument is a list, say, '(* 2 3), quotation prevents the interpreter from evaluating the list as a Scheme procedure call. When the argument is a symbol, say, 'a, the quote lets the interpreter know not to treat the a as an identifier, looking up the value bound to that name in the current environment. Instead, it is treated as the literal symbol a. Most of our students have not yet worked in languages where symbols are first-class data values, so this idea takes some getting used to.
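The distinction can be sketched in a toy evaluator. Here is one in Python, with expressions as nested lists and symbols as plain strings -- an analogy of my own for illustration, not real Scheme:

```python
# A toy evaluator illustrating Scheme's quote. Expressions are nested
# Python lists; symbols are strings. This is a sketch, not a Scheme.

def evaluate(expr, env):
    """Evaluate a tiny Scheme-like expression against an environment."""
    if isinstance(expr, (int, float)):
        return expr                      # numbers are self-evaluating
    if isinstance(expr, str):
        return env[expr]                 # a symbol names a binding
    if expr[0] == "quote":
        return expr[1]                   # quote: return the datum unevaluated
    op = evaluate(expr[0], env)          # otherwise: a procedure call
    args = [evaluate(a, env) for a in expr[1:]]
    return op(*args)

env = {"*": lambda a, b: a * b, "a": 42}

print(evaluate(["*", 2, 3], env))              # evaluated: 6
print(evaluate(["quote", ["*", 2, 3]], env))   # quoted: ['*', 2, 3]
print(evaluate("a", env))                      # identifier: 42
print(evaluate(["quote", "a"], env))           # literal symbol: a
```

The single special case for quote is the whole story: its argument comes back untouched, while every other expression is evaluated.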

In the course of talking about quotation with them, I decided to relate this idea to an example of quotation from real life. The first thing that came to mind at that instant was the distinction between these two sentences:

Lincoln was disappointing.
"Lincoln" was disappointing.

In the former, Lincoln is a name to be evaluated. Depending on the context, it could refer to the 16th president of the United States, the capital of Nebraska, or some other object in the world. (The sentence doesn't have to be true, of course!)

In the latter, quoting Lincoln makes it a title. I intended for this "literal" reference to the word Lincoln to evoke the current feature film of that name.

Almost immediately I began to second-guess my example. The quoted Lincoln is still a name for something -- a film, or a book, or some such -- and so still needs to be "dereferenced" to retrieve the object signified. It's just that we treat titles differently than other names.

So it's close to what I wanted to convey, but it could mislead students in a dangerous way.

The canonical real-world example of quotation is to quote a word so that we treat the utterance as the word itself. Consider:

Creativity is overused.
"Creativity" is overused.

In the former, creativity is a name to be evaluated. It signifies an abstract concept, a bundle of ideas revolving around creation, originality, art, and ingenuity. We might say creativity is overused in a context where people should be following the rules but are instead blazing their own trails.

In the latter, the quoted creativity signifies the word itself, taken literally. We might say "creativity" is overused to suggest an author improve a piece of writing by choosing a near-synonym such as "cleverness" or "originality", or by rephrasing a sentence so that the abstract concept is recast as the verb in an active statement.

This example stays more faithful to the use of quote in Scheme, where an expression is taken literally, with no evaluation of any kind needed.

I like giving examples of how programming concepts exist in other parts of our lives and world. Even when they are not perfect matches, they can sometimes help a student's mind click on the idea as it works in a programming language or style.

I like it better when I use better examples!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 18, 2013 2:42 PM

Alive with Infinite Possibilities

the PARC 5-key Chord Keyboard, courtesy the Buxton collection

Engelbart's Violin tells the interesting story of Douglas Engelbart's chorded keyboard, or "chorder", an input device intended as a supplement to the traditional keyboard. Engelbart was part of a generation that saw computing as a universe of unlimited possibilities, and more than many others he showed us glimpses of what it could be.

I grew up in an age when an unadorned BASIC interpreter was standard equipment on any computer, and with so little software available to us, we all wrote programs to make the machine do our bidding. In a narrower way, we felt the sense of unlimited possibilities that drove Engelbart, Sutherland, and the generations that came before us. If only we all had vision as deep.

Unfortunately, not many teenagers get to have that kind of experience anymore. BASIC became VB.Net, a corporate language for a corporate world. The good news is that languages like Python and even JavaScript make programming accessible to more people again, but the ethos of anyone learning to program on his or her own at home seems to have died off.

Engelbart's Violin uses strong language to judge the current state of computing, with some of its strongest passages lamenting the "cruel discrepancy" between the experience of a creative child learning to program and the world of professional programming:

When you are a teenager, alone with a (programmable) computer, the universe is alive with infinite possibilities. You are a god. Master of all you survey. Then you go to school, major in "Computer Science", graduate -- and off to the salt mines with you, where you will stitch silk purses out of sow's ears in some braindead language, building on the braindead systems created by your predecessors, for the rest of your working life. There will be little room for serious, deep creativity. You will be constrained by the will of your master (whether the proverbial "pointy-haired boss", or lemming-hordes of fickle startup customers) and by the limitations of the many poorly-designed systems you will use once you no longer have an unconstrained choice of task and medium.

Ouch. We who teach CS at the university find ourselves trapped between the needs of a world that employs most of our graduates and the beauty that computing offers. Alas, what Alan Kay said about Engelbart applies more broadly: "Engelbart, for better or for worse, was trying to make a violin.... [M]ost people don't want to learn the violin." I'm heartened to see so many people, including my own colleagues, working so hard to bring the ethos and joy of programming back to children, using Scratch, media computation, and web programming.

This week, I began a journey with thirty or so undergraduate CS students, who over the next four months will learn Scheme and -- I hope -- get a glimpse of the infinite possibilities that extend beyond their first jobs, or even their last. At the very least, I hope I don't shut any more doors on them.

~~~~

PHOTO. The PARC 5-key Chord Keyboard, from the Buxton collection at Microsoft Research.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 04, 2013 5:54 PM

Impatience, A Barrier At All Scales

One of the barriers to learning that Dan Heisman sees in chessplayers is finding the patience to play slower. When playing any game, a person has to play slowly enough to discern the consequences of moves. The chess world is complicated by the fact that games in tournament settings are timed, with a limit on the time a player can take for a certain number of moves. Over-the-board chess is played at a variety of time limits, historically ranging from five minutes for the entire game (lightning chess) to forty moves in two and a half hours (world championship matches). Different time controls lead to different kinds of games.

Improvement at "serious" chess -- slow chess -- requires playing slower, at longer time controls. You have to practice thinking about moves and plans at a deeper level. Just as we have to train our bodies to run long distances, we have to train our chess brains to work at longer time controls.

This requires both stamina and patience. Sometimes, our brains are capable of thinking for long periods about a position, but our psyche wants to push, to move faster. The barrier here is impatience "in the small", at the scale of an individual game.

We see the same thing in novice programmers, who think they should be able to write a complicated program as quickly as they read a web comic or watch a YouTube video. Read the problem description; write the code. One of the important parts of most introductory programming courses is helping students learn patience at the level of writing a single program.

Another important kind of patience plays a role in the large, at learning scale. Some people want to get good fast: learn some syntax, write a few programs, and -- bam! -- be an expert. Peter Norvig has written the canonical treatment of long-term patience in learning to program.

jazz pianist Bill Evans, with Miles Davis

Of course, some talented people do get good fast, or at least they seem to become good faster than we do. That can be frustrating, especially when we are struggling. But the fact is, most of us have to take time to get good at anything.

Even the most accomplished artists know that. I'm reminded of this comment from Bill Evans, one of the greatest jazz pianists of the second half of the 20th century, in The Universal Mind of Bill Evans:

"Most people just don't realize the immensity of the problem," Evans says, "and either because they can't conquer immediately they think they haven't got the ability, or they're so impatient to conquer it that they never do see it through. But if you do understand the problem, then I think you can enjoy your whole trip through."

Learning to write software well is an immense task. The most successful programmers recognize this early and enjoy the trip. This kind of patience, over the long term, makes it a lot easier to take on the barriers that inevitably appear along the way.

~~~~

PHOTO. Jazz pianist Bill Evans with Miles Davis, courtesy Photos of musicians at Tumblr.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

January 03, 2013 1:36 PM

Breaking Down Barriers to Learning

Or: What a CS Student -- or Teacher -- Can Learn from a Chess Coach

All teachers, whatever their discipline, encounter many of the same challenges. Chess coach Dan Heisman once wrote an instructive article called Breaking Down Barriers in which he discussed a split that he saw among his students:

  • players who are frozen by barriers to improvement, or even actively erect barriers to improvement
  • players who work to eliminate barriers, whatever that might require

I once wrote a short entry, Some People Get Stuff Done, about a similar phenomenon I have noticed over many years of teaching: some students find a way to get things done: to write the program, to solve the problem, to pass the course. These are usually the same students who find -- and make -- ways to get better at their craft. When they encounter barriers to their learning, they find a way to go over, around, or through them.

These are also the students who tend to succeed after they graduate and leave the shelter of a school system that often does more to help them succeed than they realize. The business world isn't always so forgiving.

The starting point for students who break down barriers to learning is a particular mindset: an attitude that they can learn. This mindset is what enables them to persevere when they encounter a roadblock.

A positive attitude isn't sufficient on its own. Believing that you can defeat a better player over the chessboard doesn't mean you will. You still have to do it. But, as Heisman says, a defeatist attitude can be a self-fulfilling prophecy during a game.

The same is true for learning computer science, or any other academic discipline. Positive attitude isn't the endpoint. It is the starting line. It is what allows you to continue working when things get difficult. The hard work is what gets you over, around, or through the barriers.

The good news is that hard work is more important than innate ability. Dick Gabriel is fond of saying that talent doesn't determine how good you get; it determines how fast you get good. How hard you work determines how good you get.

So, the right mindset is useful, and working hard is essential. But even these are not enough. You must also do the right kind of work. Students who don't realize this can demotivate themselves quickly. They throw themselves into their work, spend a lot of time and energy doing counterproductive things... and don't get better. They figure they "don't have what it takes" and move on to something else.

As a teacher, few things make me sadder than to see a student who wants to learn and is willing to work burn him- or herself out by spinning wheels with unhelpful, unproductive, uninformed study. When these students are in my class, I wonder, "Why didn't they come get help?" Or, if they did, "Why didn't they follow my advice?"

Some practices support learning better than others. There is a skill to learning, a discipline and a way of working that succeeds better than others. Often, the students who do best in my courses are not the most gifted students but rather the ones who have learned how to attend class, how to read, and how to study. Many of these students figured out these skills early on in their school careers, or got lucky. It doesn't matter which. In any case, these skills are multipliers that enable these students to accelerate in every course they take. Knowledge and skill accumulate, and pretty soon these students look like something apart from their struggling colleagues.

But they aren't. They are doing things any student can do. "But I'll never catch up to those students, no matter how hard I work." Maybe, and maybe not. That doesn't matter, either. This is about your learning. You have to start from where you are, and go forward from there.

A good teacher or coach really can help students, by helping them get over incidental barriers and helping them learn how to get over the essential barriers in a productive way. One of the most important things a teacher can do is to provide feedback loops of both short and long duration, and then help students identify mistakes and minimize their repetition. This is obviously invaluable in playing chess, where pattern recognition and speed are fundamental capabilities. But pattern recognition and speed are fundamental capabilities in any kind of learning.

Some people these days like to downplay the importance of a human teacher or coach in learning to develop software. "We have the compiler, which gives us all the feedback we need any time of day." Yes, indeed. We have access to tools that our predecessors could only have dreamed of, and these tools empower us as learners. But are they always enough?

Chess players have something akin to our compiler: the chess computer. When I got my first chess computer in high school, I was ecstatic. I could play any time, anywhere, against a player that could be set to any level from beatable novice to untouchable master. I played a lot of games. A lot. And I learned a lot, too, as I did any time I played a lot of games and thought about them. But there were times when I didn't understand why something worked, or didn't. In those cases, I consulted the writings of a good teacher, or asked a human coach for some help. They helped me get over the obstacle faster than I could on my own.

That's another role that our teachers can play: they can help us to understand why.

Some Barriers

In his article, Heisman talks about some of the common barriers that chess students face and how to overcome them. Some seem specific to the game or competitive activities, but remarkably all of them apply equally well to students of computing or software development.

The major difference is that the barriers outlined are ones encountered by people who do most of their learning on their own, not in the classroom. Chess players don't usually go to school full time. If they have a teacher at all, it's more like piano lessons than a university program. The student meets with the teacher once a week (or less) and then studies, practices, and plays a lot in between sessions.

A lot of people these days believe that learning to create software is or should be more like this model than the university model, too. Movements like Software Apprenticeship and Codecademy are all about enabling individuals to take control of their own learning. But even in a university setting, learning works best when students study, practice, and program a lot in between class sessions or meetings with the professor.

Heisman discusses ten barriers. See if you can map them onto learning CS, or how to write software:

  1. Finding Good Practice and Feedback at a Club
  2. Worrying About Your Rating
  3. Competition Too Good at Tournaments
  4. Some Players Unfriendly on the Internet
  5. Players Only Want to Play Fast on the Internet
  6. Homework is Not Fun
  7. Finding Patience to Play Slower
  8. Don't Have the Time to Study or Play Enough
  9. Not That Talented
  10. Find Competitive Chess Daunting

It's not that hard, is it? ("Some players unfriendly on the internet"? We invented that one!)

A few of these stand out in my experience as a teacher and student. Consider #2. Chess players have ratings that reflect the level of their play. When you perform better than expected at a tournament, your rating goes up. When you perform worse than expected, your rating goes down.

In the university, this corresponds to grades and grade-point average. Some students are paralyzed by the idea that their GPA might go down. So they take easier courses, or seek out professors who don't grade as hard as others. The negative outcome of this approach is that the courses don't challenge the student enough, and it is the challenge that pushes them to learn. Protecting your GPA in this way can cause you to learn less.

Ironically, this barrier most often affects students who have historically done well in school. They are the ones used to having high GPAs, and too often they have invested way too much of their self-worth in their grades. Students who expect to do well and to have "ratings" that confirm their status face an extra psychological barrier when the material becomes more challenging. This is one of those situations in which another person -- even a teacher! -- can be of enormous value by providing emotional support.

"Homework isn't fun" is a universal barrier to learning. There are many strategies for trying to get through the drudgery of work, and a good teacher tries them all. But, in the end, it is work. You just have to get over it. The good news is that there is a self-reinforcing cycle between doing the work and enjoying the work. Hugh MacLeod captures it well:

I love what I do, because I'm good at what I do, because...

Hard work is also scary. We might fail. But if we keep working, eventually we can succeed. What can keep us from doing the work?

Time is a limiting factor. Students often tell me, "I don't have the time..." to study, or write more code, or come in to get help. This can be a real challenge for students who have to work thirty hours a week to pay the bills, or who have families to care for.

I ask my students to be honest with themselves. Do you not have the time, or do you not make the time? Sometimes it's a bad habit, frittering away time on video games or socializing. Sometimes it's a habit that uses time inefficiently or ineffectively. These are habits you can change -- if you want to.

A student came to me just after we took our midterm exam last semester. He had done poorly and was worried about his grade. He asked if there was any chance left that he could succeed in the course. I said "yes", contingent on the answer to this question: What are you willing to change in your life to make success possible? Clearly, what he was doing wasn't enough. Either he was not devoting enough time to the course, or he was putting in enough time but not doing the right things. Was he willing to change what needed to be changed, if only for the eight weeks left in the course? He was honest with himself and dropped the course. A couple of other students made changes to how they worked on the course and recovered quite nicely.

Ultimately, if you want to succeed at something, you make time to do the work.

Conclusion

Twenty-five years ago, Rolling Stone panned the re-release of rocker John Mellencamp's debut album, Chestnut Street Incident. In the ten years since its original release, Mellencamp had become a star. His work in the intervening years made the quality of his debut look all the worse. It was cocky and klutzy, less than the sum of his musical influences. "Back then," the reviewer wrote, Mellencamp's "ambition definitely exceeded his grasp". The album wasn't very good, and the review said so, harshly.

Mellencamp wouldn't have disagreed. In interviews, he has often talked about how bad a songwriter he was when he first started in the business. On top of that, he faced obstacles similar to those facing many other young artists, including promoters and handlers more interested in music industry fashion and short-term profit than art. They even made him take a stage name, "Johnny Cougar", in the interest of marketability.

But he didn't settle for his initial condition. He continued to learn, honing his songwriting skills and learning to manage his own affairs. Along the way, he learned the sort of humility that many artists confident in their art have. Eventually, his grasp matched his ambition. He succeeded commercially and achieved a small measure of critical acceptance.

I must say that I have always liked Chestnut Street Incident, in part for its raw, desperate ambition. Here was a kid from small-town Indiana who wanted to grow beyond his roots and become something more. The lyrics are not profound, and the emotion is not complicated. I've always admired Mellencamp for his attitude about work and craft, and his willingness to try and to fail.

To learn is to break down barriers. Whether you are learning to play chess or to write software, you will encounter obstacles. Breaking them down is a matter of mindset, effort, and specific skills -- all of which are within reach.

Even if you aren't a chess player, you may well enjoy reading Heisman's article. CS students and teachers alike can benefit from its lessons.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 28, 2012 10:03 AM

Translating Code Gibberish to Human-Speak

Following an old link to Ridiculous Fish's Unix shell fish, I recently stumbled upon the delightful cdecl, a service that translates C declarations, however inscrutable, into plain English (and vice versa). As this introductory post says,

Every C declaration will be as an open book to you! Your coworkers' scruffy beards and suspenders will be nigh useless!

The site even provides permalinks so that you can share translations of your thorniest C casts with friends and family.

These pages are more than three years old, so I'm surely telling you something you already know. How did I just find this?

I don't program in C much these days, so cdecl itself is of use to me only as humorous diversion. But it occurs to me that simple tools like this could be useful in a pedagogical setting. Next semester, my students will be learning Scheme and functional programming style. The language doesn't have much syntax, but it does have all those parentheses. Whatever I say or do, they disorient many of my students for a while. Some of them will look at even simple code such as

     (let ((x (square 4))
           (y 7))
       (+ x y))

... and feel lost. We spend time in class learning how to read code, and talk about the semantics of such expressions, which helps. But in a pinch, wouldn't it be nice for a student to hit a button and have that code translated into something more immediately comprehensible? Perhaps:

Let x be the square of 4 and y be 7 in the sum of x and y.

This might be a nice learning tool for students as they struggle with a language that seems to them -- at least early on -- to be gibberish on a par with char (*(*(* const x[3])())[5])(int).

Some Scheme masters might well say, "But the syntax and semantics of a let are straightforward. You don't really need this tool." At one level, this is true. Unfortunately, it ignores the cognitive and psychological challenges that most people face when they learn something that is sufficiently unfamiliar to them.

Actually, I think we can use the straightforwardness of the translation as a vehicle to help students learn more than just how a let expression works. I have a deeper motive.

Learning Scheme and functional programming are only a part of the course. Its main purpose is to help students understand programming languages more generally, and how they are processed by interpreters and compilers.

When we look at the let expression above, we can see that translating it into the English expression is not only straightforward, it is 100% mechanical. If it's a mechanical process, then we can write a program to do it for us! Following a BNF description of the expression's syntax, we can write an interpreter that exposes the semantics of the expression.
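As a quick proof of concept, here is such a translator sketched in Python, with the Scheme expression encoded as nested lists. The operator-to-phrase table is invented for this example, and only single-body let expressions are handled:

```python
# A sketch of the mechanical translation from a Scheme let expression
# to English. Expressions are nested Python lists; the phrase table
# below is illustrative, not part of any real tool.

PHRASES = {"+": "the sum of", "*": "the product of", "square": "the square of"}

def to_english(expr):
    """Render a tiny subset of Scheme as an English phrase."""
    if not isinstance(expr, list):
        return str(expr)                 # numbers and symbols: as-is
    if expr[0] == "let":
        bindings = " and ".join(f"{name} be {to_english(init)}"
                                for name, init in expr[1])
        return f"Let {bindings} in {to_english(expr[2])}."
    op, args = expr[0], expr[1:]         # an application: phrase + operands
    return f"{PHRASES[op]} " + " and ".join(to_english(a) for a in args)

expr = ["let", [["x", ["square", 4]], ["y", 7]], ["+", "x", "y"]]
print(to_english(expr))
# Let x be the square of 4 and y be 7 in the sum of x and y.
```

A real version would work from the BNF for let and recurse over arbitrary forms, but even this sketch shows how mechanical the translation is -- which is exactly the point.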

In many ways, that is the essence of this course.

At this point, this is only a brainstorm, perhaps fueled by holiday cooking and several days away from the office. I don't know yet how much I will do with this in class next term, but there is some promise here.

Of course, we can imagine using a cdecl-like tool to help beginners learn other languages, too. Perhaps there are elements of writing OO code in Java that confuse students enough to make a simple translator useful. Surely public static void main(String[] args) deserves some special treatment! Ruby is complex enough that it might require dozens of little translators to do it justice. Unfortunately, it might take Matz's inside knowledge to write them.

(The idea of translating inscrutable code into language understandable by humans is not limited to computer code, of course. There is a popular movement to write laws and other legal code in Plain English. This movement is occasionally championed by legislators -- especially in election years. The U.S. Securities and Exchange Commission has its own Plain English Initiative and Plain English Handbook. At seventy-seven pages, the SEC handbook is roughly the same size as the R6RS description of Scheme.)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 12, 2012 4:18 PM

Be a Driver, Not a Passenger

Some people say that programming isn't for everyone, just as knowing how to tinker under the hood of one's car isn't for everyone. Some people design and build cars; other people fix them; and the rest of us use them as high-level tools.

Douglas Rushkoff explains why this analogy is wrong:

Programming a computer is not like being the mechanic of an automobile. We're not looking at the difference between a mechanic and a driver, but between a driver and a passenger. If you don't know how to drive the car, you are forever dependent on your driver to take you where you want to go. You're even dependent on that driver to tell you when a place exists.

This is CS Education week, "a highly distributed celebration of the impact of computing and the need for computer science education". As a part of the festivities, Rushkoff was scheduled to address members of Congress and their staffers today about "the value of digital literacy". The passage quoted above is one of ten points he planned to make in his address.

As good as the other nine points are -- and several are very good -- I think the distinction between driver and passenger is the key, the essential idea for folks to understand about computing. If you can't program, you are not a driver; you are a passenger on someone else's trip. They get to decide where you go. You may want to invent a new place entirely, but you don't have the tools of invention. Worse yet, you may not even have the tools you need to imagine the new place. The world is as it is presented to you.

Don't just go along for the ride. Drive.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

December 10, 2012 2:54 PM

Brief Flashes of Understanding, Fully Awake

The programmers and teachers among you surely know this feeling well:

As I drift back to sleep, I can't help thinking that it's a wonderful thing to be right about the world. To weigh the evidence, always incomplete, and correctly intuit the whole, to see the world in a grain of sand, to recognize its beauty, its simplicity, its truth. It's as close as we get to God in this life, and we reside in the glow of such brief flashes of understanding, fully awake, sometimes, for two or three seconds, at peace with our existence. And then we go back to sleep.

Or tackle the next requirement.

(The passage is from Richard Russo's Straight Man, an enjoyable send-up of modern man in an academic life.)


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

November 30, 2012 3:49 PM

Passing Out the Final Exam on Day One

I recently ran across an old blog posting called Students learn what they need, not what is assigned, in which Ted Dunning described a different sort of "flipped course" than is usually meant: He gave his students the final exam on Day 1, passed out the raw materials for the course, and told them to "get to work". They decided what they needed to learn, and when, and asked for instruction and guidance on their own schedule.

Dunning was happy with the results and concluded that...

... these students could learn vastly more than was expected of them if they just wanted to.

When students like what they are doing, they can surprise most everyone with what they will do to learn. Doing something cool like building a robot (as Dunning's students did) can be all the motivation some students need.

I'm sometimes surprised by just what catches my students' fancy. A few weeks ago, I asked my sophomore- and junior-level OOP class to build the infrastructure for a Twitter-like app. It engaged them like only graphical apps usually do. They've really dug into the specs to figure out what they mean. Many of them don't use Twitter, which has been good, because it frees them of too many preconceived limitations on where they can take their program.

They are asking good questions, too, about design: Should this object talk to that one? The way I divided up the task led to code that feels fragile; is there a better way? It's so nice not to still be answering Java questions. I suspect that some are still encountering problems at the language level, but they are solving them on their own and spending more time thinking about the program at a higher level.

I made this a multi-part project. They submitted Iteration 1 last weekend, will submit Iteration 2 tomorrow, and will work on Iteration 3 next week. That's a crucial element, I think, in getting students to begin taking their designs more seriously. It matters how easy it is to change the code, because they have to change it now -- and tomorrow!

The point of Dunning's blog is that students have to discover the need to know something before they are really interested in learning it. This is especially true if the learning process is difficult or tedious. You can apply this idea to a lot of software development, and even more broadly to CS.

I'm not sure when I'll try the give-the-final-exam-first strategy. My compiler course already sort of works that way, since we assign the term project upfront and then go about learning what we need to build the compiler. But I don't make my students request lectures; I still lay the course out in advance and take only occasional detours.

I think I will go at least that far next semester in my programming languages course, too: show them a language on day one and explain that our goal is to build an interpreter for it by the end of the semester, along with a few variations that explore the range of possibilities that programming languages offer. That may create a different focus in my mind as I go through the semester. I'm curious to see.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 28, 2012 6:34 PM

Converting Lecture Notes into an Active Website

... in which the author seeks pointers to interactive Scheme materials on-line.

Last summer, I fiddled around a bit with Scribble, a program for writing documentation in (and for) Racket. I considered using it to write the lecture notes and website for my fall OOP course, but for a variety of reasons set it aside.

the icon for Slideshow

In the spring I'll be teaching Programming Languages again, and using Racket with my students. This seems like the perfect time to dive in and use Scribble and Slideshow to create all my course materials. This will create a synergy between what I do in class and how I prep, which will be good for me. Using Racket tools will also set a good example for my students.

After seeing The Racket Way, Matthew Flatt's talk at StrangeLoop, I am inspired to do more than simply use Racket tools to create text and slides and web pages. I'd like to re-immerse myself in a world where everything is a program, or nearly so. This would set an even more important example for my students, and perhaps help them to see more clearly that they don't ever have to settle for the programs, the tools, or the languages that people give them. That is the Computer Science way as well as the Racket way.

I've also been inspired recently by the idea of an interactive textbook à la Miller and Ranum. I have a pretty good set of lecture notes for Programming Languages, but the class website should be more than a 21st-century rendition of a 19th-century presentation. I think that using Scribble and Slideshow is a step in the right direction.

So, a request: I am looking for examples of people using the Racket presentation tools to create web pages that have embedded Scheme REPLs, perhaps even a code stepper of the sort Miller and Ranum use for Python. Any pointers you might have are welcome.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 26, 2012 3:24 PM

Quotes of the Day: Constraints on Programming

Obligation as constraint

Edward Yang has discovered the Three Bears pattern. He calls it "extremist programming".

When learning a new principle, try to apply it everywhere. That way, you'll learn more quickly where it does and doesn't work well, even if your initial intuitions about it are wrong.

Actually, you don't learn in spite of your initial intuitions being wrong. You learn because your initial intuitions were wrong. That's when learning happens best.

(I mention Three Bears every so often, such as in Bright Lines in Learning and Doing, and whenever I discuss limiting usage of language features or primitive data values.)

Blindness as constraint

In an interview I linked to in my previous entry, Brian Eno and Ha-Joon Chang talk about the illusion of freedom. Whenever you talk about freedom, as in a "free market" or "free jazz",

... what you really mean is "constrained by rules that we've stopped thinking about".

Free jazz isn't entirely free, because you are constrained by what your muscles can do. Free markets aren't entirely free, because there are limits we simply choose not to talk about. Perhaps we once did talk about them and have chosen not to any more. Perhaps we never talked about them and don't even recognize that they are present.

I can't help but think of computer science faculty who claim we shouldn't be teaching OO programming in the first course, or any other "paradigm"; we should just teach basic programming first. They may be right about not teaching OOP first, but not because their approach is paradigm-free. It isn't.

(I mention constraints as a source of freedom every so often, including the ways in which patterns free students to create and the way creativity needs to be developed.)


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

November 17, 2012 12:23 PM

Why a CS Major Might Minor in Anthropology

Paul Klipp wrote a nice piece recently on emic and etic approaches to explaining team behavior. He explains what emic and etic approaches are and then shows how they apply to the consultant's job. For example:

Let's look at an example closer to home for us software folks. You're an agile coach, arriving in a new environment with a mission from management to "make this team more agile". If you, like so many consultants in most every field, favor an etic approach, you will begin by doing a gap analysis between the behaviors and artifacts that you see and those with which you are most familiar. That's useful, and practically inevitable. The next natural step, however, may be less helpful. That is to judge the gaps between what this team is doing and what you consider to be normal as wrong.... By deciding, as a consultant or coach, to now attempt to prepare an emic description of the team's behaviors, you force yourself to set aside your preconceptions and engage in meaningful conversations with the team in order to understand how they see themselves. Now you have two tools in your kit, where you might before have had one, and more tools prepares you for more situations.

When I speak to HS students and their parents, and when I advise freshmen, I suggest that they consider picking up a minor or a second major. I tell them that it almost doesn't matter which other discipline they choose. College is a good time to broaden oneself, to enjoy learning for its own sake. Some minors and second majors may seem more directly relevant to a CS grad's career interests, but you never know what domain or company you will end up working in. You never know when having studied a seemingly unrelated discipline will turn out to be useful.

Many students are surprised when I recommend social sciences such as psychology, sociology, and anthropology as great partners for CS. Their parents are, too. Understanding people, both individually and in groups, is important in any profession, but it is perhaps more important for CS grads than for many others. We build software -- for people. We teach new languages and techniques -- to people. We contract out our services to organizations -- of people. We introduce new practices and methodologies to organizations -- of people. Ethnography may be more important to a software consultant's success than any set of business classes.

I had my first experience with this when I was a graduate student working in the area of knowledge-based systems. We built systems that aimed to capture the knowledge of human experts, often teams of experts. We found that they relied a lot on tacit knowledge, both in their individual expertise and in the fabric of their teams. It wasn't until I read some papers from John McDermott's research group at Carnegie Mellon that I realized we were all engaged in ethnographic studies. It would have been so useful to have had some background in anthropology!


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

November 15, 2012 4:04 PM

Teaching Students to Read and Study in a New Way

Mark Guzdial's post How students use an electronic book reports on the research paper "Performance and Use Evaluation of an Electronic Book for Introductory Python Programming" [ pdf ]. In this paper, Alvarado et al. evaluate how students used the interactive textbook How to Think Like a Computer Scientist by Ranum and Miller in an intro CS course. The textbook integrates traditional text with embedded video, "active" examples using an embedded Python interpreter, and empirical examples using a code stepper in the style of a debugger.

The researchers were surprised to find how little some students used the book's interactive features:

One possible explanation for the less-than-anticipated use of the unique features may be student study skills. The survey results tend to suggest that students "study" by "reading". Few students mention coding or tracing programs as a way of "studying" computer science.

I am not using an interactive textbook in my course this semester, but I have encountered the implicit connection in many students' minds between studying and reading. It caught me off-guard, too.

After lengthy searching and some thought, I decided to teach my sophomore-level OOP course without a required text. I gave students links to two on-line books they could use as Python references, but neither covers the programming principles and techniques that are at the heart of the course. In lieu of a traditional text, I have been giving my students notes for each session, written up carefully in a style that resembles a textbook, and source code -- lots and lots of source code.

Realizing that this would be an unusual way for students to study for a CS class, at least compared to their first-year courses, I have been pretty consistent in encouraging them to work this way. Daily I suggest that they unpack the code, read it, compile it, and tinker with it. The session notes often include little exercises they can do to test or extend their understanding of a topic we have covered in class. In later sessions, I often refer back to an example or use it as the basis for something new.

I figured that, without a textbook to bog them down, they would use my session notes as a map and spend most of their time in the code spelunking, learning to read and write code, and seeing the ideas we encounter in class alive in the code.

a snapshot of Pousse cells in two dimensions

Like the results reported in the Alvarado paper, my experiences have been mixed, and in many ways not what I expected. Some students read very little, and many of those who do read the lecture notes spend relatively little time playing with the code. They will spend plenty of time on our homework assignments, but little or no time on code for the purposes of studying. My data is anecdotal, based on conversations with the subset of students who visit office hours and e-mail exchanges with students who ask questions late at night. But performance on the midterm exam and on some of the programming assignments is consistent with my inference.

OO programs are the literature of this course. Textbooks are like commentaries and (really long) Cliff Notes. If indeed the goal is to get students to read and write code, how should we proceed? I have been imagining an even more extreme approach:

  • no textbook, only a language reference
  • no detailed lecture notes, only cursory summaries of what we did in class
  • code as a reading assignment before each session
  • every day in class, students do tasks related to the assigned reading -- engaging, fun tasks, but tasks they can't or wouldn't want to do without having studied the assigned code

A decade or so ago, I taught a course that mixed topics in user interfaces and professional ethics using a similar approach. It didn't provide magic results, but I did notice that once students got used to the unusual rhythm of the course they generally bought in to the approach. The new element here is the emphasis on code as the primary literature to read and study.

Teaching a course in a way that subverts student expectations and experience creates a new pedagogical need: teaching new study skills and helping students develop new work habits. Alvarado et al. recognize that this applies to using a radically different sort of textbook, too:

Might students have learned more if we encouraged them to use codelens more? We may need to teach students new study skills to take advantage of new learning resources and opportunities.

...

Another interesting step would be to add some meta-instruction. Can we teach students new study skills, to take advantage of the unique resources of the book? New media may demand a change in how students use the media.

I think those of us who teach at the university level underestimate how important meta-level instruction of this sort is to most students. We tend to assume that students will figure it out on their own. That's a dangerous assumption to make, at least for a discipline that tends to lose too many good students on the way to graduation.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 03, 2012 11:17 AM

When "What" Questions Presuppose "How"

John Cook wrote about times in mathematics when maybe you don't need to do what you were asked to do. As one example, he used remainder from division. In many cases, you don't need to do division, because you can find the answer using a different, often simpler, method.

We see a variation of John's theme in programming, too. Sometimes, a client will ask for a result in a way that presupposes the method that will be used to produce it. For example, "Use a stack to evaluate these nested expressions." We professors do this to students a lot, because they want the students to learn the particular technique specified. But you see subtle versions of this kind of request more often than you might expect outside the classroom.

An important part of learning to design software is learning to tease apart the subtle conflation of interface and implementation in the code we write. Students who learn OO programming after a traditional data structures course usually "get" the idea of data abstraction, yet still approach large problems in ways that let implementations leak out of their abstractions in the form of method names and return values. Kent Beck talked about how this problem afflicts even experienced programmers in his blog entry Naming From the Outside In.

Primitive Obsession is another symptom of conflating what we need with how we produce it. For beginners, it's natural to use base types to implement almost any behavior. Hey, the extreme programming principle You Ain't Gonna Need It encourages even us more experienced developers not to create abstractions too soon, until we know we need them and in what form. The convenience offered by hashes, featured so prominently in the scripting languages that many of us use these days, makes it easy to program for a long time without having to code a collection of any sort.

But learning to model domain objects as objects -- interfaces that do not presuppose implementation -- is one of the powerful stepping stones on the way to writing supple code, extendible and adaptable in the face of reasonable changes in the spec.
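
The contrast can be shown in a few lines of Python, the language of my OOP course. The `Grade` example here is entirely hypothetical, invented to illustrate the smell:

```python
# A hypothetical illustration of Primitive Obsession versus a domain object.

# Client code that leans on primitives leaks the implementation:
# every caller must know the dictionary keys and repeat the formula.
grade_primitive = {'points': 87, 'max': 100}
passed = grade_primitive['points'] / grade_primitive['max'] >= 0.6

# A small class instead names the behavior clients actually need,
# without promising how the grade is stored inside:
class Grade:
    def __init__(self, points, max_points):
        self._points = points
        self._max = max_points

    def is_passing(self, cutoff=0.6):
        """What callers ask for -- not how we compute it."""
        return self._points / self._max >= cutoff

print(Grade(87, 100).is_passing())   # True
```

The interface (`is_passing`) would survive a change to letter grades or weighted categories; the dictionary-and-formula version would not.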


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

October 30, 2012 4:22 PM

Mathematical Formulas, The Great Gatsby, and Small Programs

... or: Why Less Code is Better

In Don't Kill Math, Evan Miller defends analytic methods in the sciences against Bret Victor's "visions for computer-assisted creativity and scientific understanding". (You can see some of my reactions to Victor's vision in a piece I wrote about his StrangeLoop talk.)

Miller writes:

For the practicing scientist, the chief virtue of analytic methods can be summed up in a single word: clarity. An equation describing a quantity of interest conveys what is important in determining that quantity and what is not at all important.

He goes on to look at examples such as the universal law of gravitation and shows that a single formula gives even a person with "minimal education in physics" an economical distillation of what matters. The clarity provided by a good analytic solution affords the reader two more crucial benefits: confident understanding and memorable insights.

Writer Peter Turchi describes a related phenomenon in fiction writing, in his essay You and I Know, Order is Everything. A story can pull us forward by omitting details and thus creating in the reader a desire to learn more. Referring to a particularly strategic paragraph, he writes:

That first sentence created a desire to know certain information: What is this significant event? ... We still don't have an answer, but the context for the question is becoming increasingly clear -- so while we're eager to have those initial questions answered, we're content to wait a little longer, because we're getting what seems to be important information. By the third paragraph, ... we think we've got a clue; but by that time the focus of the narrative is no longer the simple fact of what's going on, but the [larger setting of the story]. The story has shifted our attention from a minor mystery to a more significant one. On some level or another nearly every successful story works this way, leading us from one mystery to another, like stepping stones across a river.

In a good story, eventually...

... we recognize that the narrator was telling us more than we could understand, withholding information but also drawing our attention to the very thing we should be looking at.

In two very different contexts, we see the same forces at play. The quality of a description follows from the balance it strikes between what is in the description and what is left out.

To me, this is another example of how a liberal education can benefit students majoring in both the sciences and the humanities [ 1 | 2 ]. We can learn about many common themes and patterns of life from both traditions. Neither is primary. A student can encounter the idea first in the sciences, or first in the humanities, whichever interests the student more. But apprehending a beautiful pattern in multiple domains of discourse can reinforce the idea and make it more salient in the preferred domain. This also broadens our imaginations, allowing us to see more patterns and more subtlety in the patterns we already know.

So: a good description, a good story, depends in some part on the clarity attendant in how it conveys what is important and what is not important. What are the implications of this pattern for programming? A computer program is, after all, a description: an executable description of a domain or phenomenon.

I think this pattern gives us insight into why less code is usually better than more code. Given two functionally equivalent programs of different lengths, we generally prefer the shorter program because it contains only what matters. The excess code found in the longer program obscures from the reader what is essential. Furthermore, as with Miller's concise formulas, a short program offers its reader the gift of more confident understanding and the opportunity for memorable insights.

What is not in a program can tell us a lot, too. One of the hallmarks of expert programmers is their ability to see the negative space in a design and program and learn from it. My students, who are generally novice programmers, struggle with this. They are still learning what matters and how to write descriptions at all, let alone concise ones. They are still learning how to focus their programming tools, and on what.
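
A tiny, hypothetical Python pair makes the point: two functionally equivalent functions, where the shorter one contains only what matters.

```python
# Two functionally equivalent ways to total the even numbers in a list.

# The longer version buries the idea under indexing and bookkeeping:
def total_evens_long(numbers):
    total = 0
    for i in range(len(numbers)):
        n = numbers[i]
        if n % 2 == 0:
            total = total + n
    return total

# The shorter version says only what is essential:
# "sum the numbers that are even".
def total_evens(numbers):
    return sum(n for n in numbers if n % 2 == 0)

print(total_evens([1, 2, 3, 4]))   # 6
```

The two behave identically, but the short one leaves nothing in view except the condition and the operation -- the program-level analogue of Miller's concise formula.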


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

October 19, 2012 3:08 PM

Computer Programming, Education Reform, and Changing Our Schools

Seymour Papert

You almost can't go wrong by revisiting Seymour Papert's work every so often. This morning I read Why School Reform Is Impossible, which reminds us that reform and change are different things. When people try to "reform" education by injecting a new idea from outside, schools seem to assimilate the reform into their own structure, which from the perspective of the reformer blunts or rejects the intended reform. Yet schools and our education system do change over time, evolving as the students, culture, and other environmental factors change.

As people such as Papert and Alan Kay have long argued, a big part of the problem in school reform involving computers is that we misunderstand what a computer is:

If you ask, "Which is not like the other two?" in the list "educational movie, textbook, computer", it is pretty obvious from my perspective that the answer must be "computer."

... not "textbook", which is how most people answer, including many people who want to introduce more computers into the classroom. Textbooks and movies are devices for receiving content that someone else made. Computers are for creating content. It just so happens that we can use them to communicate ideas in new ways, too.

This misunderstanding leads people to push computers for the wrong reasons, or at least for reasons that miss their game-changing power. We sometimes hear that "programming is the new Latin". Papert reminds us that the reasons we used to teach Latin in schools changed over time:

In recent times, Latin was taught in schools because it was supposed to be good for the development of general cognitive skills. Further back, it was taught because it was the language in which all scholarly knowledge was expressed, and I have suggested that computational language could come to play a similar role in relation to quite extensive areas of knowledge.

If programming is the new Latin, it's not Latin class, circa 1960, in which Latin taught us to be rigorous students. It's Latin class, circa 1860 or 1760 or 1560, in which Latin was the language of scholarly activity. As we watch computing become a central part of the language of science, communication, and even the arts and humanities, we will realize that students need to learn to read and write code because -- without that skill -- they are left out of the future.

No child left behind, indeed.

In this essay, Papert gives a short version of his discussion in Mindstorms of why we teach the quadratic equation of the parabola to every school child. He argues that its inclusion in the curriculum has more to do with its suitability to the medium of the day -- pencil and paper -- than to intrinsic importance. I'm not too sure that's true; knowing how parabolas and ellipses work is pretty important for understanding the physical world. But it is certainly true that how and when we introduce parabolas to students can change when we have a computer and a programming language at hand.

Even at the university we encounter this collision of old and new. Every student here must take a course in "quantitative reasoning" before graduating. For years, that was considered to be "a math course" by students and advisors alike. A few years ago, the CS department introduced a new course into the area, in which students can explore a lot of the same quantitative issues using computation rather than pencil and paper. With software tools for modeling and simulation, many students can approach and even begin to solve complex problems much more quickly than they could working by hand. And it's a lot more fun, too.

To make this work, of course, students have to learn a new programming language and practice using it in meaningful ways. Papert likens it to learning a natural language like French. You need to speak it and read it. He says we would need "the analog of a diverse collection of books written in French and access to French-speaking people".

the Scratch logo cat

The Scratch community is taking a shot at this. The Scratch website offers not only a way to download the Scratch environment and a way to view tutorials on creating with Scratch. It also offers -- front and center, the entire page, really -- links to shared projects and galleries. This gives students a chance first to be inspired by other kids and then to download and read the actual Scratch programs that enticed them. It's a great model.

The key is to help everyone see that computers are not like textbooks and televisions and movie projectors. As Mitch Resnick has said:

Computers for most people are black boxes. I believe kids should understand objects are "smart" not because they're just smart, but because someone programmed them to be smart.

What's most important ... is that young children start to develop a relationship with the computer where they feel they're in control. We don't want kids to see the computer as something where they just browse and click. We want them to see digital technologies as something they can use to express themselves.

Don't just play with other people's products. Make your own.

Changes in the world's use of computing may do more to cause schools to evolve in a new direction than anyone's educational reforms ever could. Teaching children that they can be creators and not simply consumers is a subversive first step.

~~~~

IMAGE 1: Seymour Papert at the OLPC offices in Cambridge, Massachusetts, in 2006. Source: Wikimedia Commons License: Creative Commons Attribution-Share Alike 2.0.

IMAGE 2: The Scratch logo. Source: Wikimedia Commons License: Creative Commons Attribution-Share Alike 2.0.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 17, 2012 3:32 PM

What Our Students Think of Us

Following a recommendation on Twitter, I recently read Am I A Product Of The Institutions I Attended?, the text of a speech by Amitabha Bagchi. Bagchi is a novelist who studied computer science in school. It is a reflection on what we learn in and out of school, which isn't always what our schools intend. He closes the paper with some thoughts on being a teacher.

... as all of us who have been teachers for even a short while know, all we can do is give people an opportunity to learn. And if they don't learn, we can give them another opportunity, and another.

Students learn on their schedules, not ours. All we can do is to keep providing opportunities, so that when they are ready, an opportunity awaits them.

This passage:

Like so many other teachers I spend a lot of time thinking about my students, and, also like many other teachers, I don't spend enough time thinking about what they think of me.

... launches a discussion that touched a chord in me. As a high school student, Bagchi realized that students see their teacher as a figure of authority and decorum no matter the reality on any given day. The teacher may be young, or inexperienced, or emotionally out of sorts. But to them, the teacher is The Teacher.

So there you are, you poor teacher, frozen in eternal adulthood, even on those days when you wish you could just curl into a fetal position and suck your thumb instead of having to stand up and talk for an hour to a room full of young people who are looking at you, or at least should be looking at you. Sometimes in the nitty-gritty of the syllabus, the announcements about exams and homework, the clearing of the last class's doubts, you forget about the current that emerges from your body and flows out into the class. You forget what you mean to them.

It's wise to step back occasionally and remember what your students mean to you, and you to them. Long after the details of any homework assignment or midterm exam have left our minds, these relationships remain.

(And, as important as these relationships are, they are not the most important relationship in the classroom.)


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

October 16, 2012 4:45 PM

The Parable of the OO Programming Student

As The Master was setting out on a journey, a young man ran up, knelt down before him, and asked him, "Good teacher, what must I do to inherit the eternal bliss of OO?"

The Master answered him, "Why do you call me good? No OO programmer is good but The Creator alone.

"You know the commandments: "'An object should have only a single responsibility.'

"'Software entities should be open for extension, but closed for modification.'

"'Objects should be replaceable with instances of their subtypes without altering the correctness of that program.'

"'Tell, don't ask.'

"'You shall not indulge in primitive obsession.'

"'All state is private.'"

The young man replied and said to Him, "Teacher, all of these I have observed from my youth when first I learned to program."

The Master, looking at him, loved him and said to him, "You are lacking in one thing. Go, surrender all primitive types, and renounce all control structures. Write all code as messages passed between encapsulated objects, with extreme late-binding of all things. Then will you have treasure in Heaven; then come, follow me."

At that statement the young man's face fell, and he went away sad, for he possessed many data structures and algorithms.

The Master looked around and said to his disciples, "How hard it is for those who have a wealth of procedural programming experience to enter the kingdom of OO."

... with apologies to The Gospel of Mark.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

October 05, 2012 4:02 PM

Ready for a Weekend

After a week in which I left the classroom one day feeling like a pretty good teacher and another day feeling like I'll never get this teaching thing right, I was looking for some inspiration.

While looking at a really simple blog this morning, I got to thinking about writing some code for fun this weekend, something I don't get to do very often these days.

Then came an e-mail exchange with a former student now thinking about computing in another field. He has interesting thoughts about how computing can reach people doing real work in other disciplines, and how computing itself needs to change in order to be relevant in a changing world. It was just what I needed. Some days, the student is the teacher. Other days, it's the former student.

This all brought to mind a passage from Mark Edmondson's Why Read?:

True teachers of literature become teachers because their lives have been changed by reading. They want to offer others the same chance to be transformed. "What we have loved, / Others will love," says Wordsworth in The Prelude, "and we will teach them how."

That's how I feel about programming. I hope that most days my students can sense that in the classroom. It's good to know that at least occasionally a student is transformed, and just as often I am transformed again.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 30, 2012 12:45 PM

StrangeLoop 8: Reactions to Bret Victor's Visible Programming

The last talk I attended at StrangeLoop 2012 was Bret Victor's Visible Programming. He has since posted an extended version of his presentation, as a multimedia essay titled Learnable Programming. You really should read his essay and play the video in which he demonstrates the implementation of his ideas. It is quite impressive, and worthy of the discussion his ideas have engendered over the last few months.

In this entry, I give only a high-level summary of the idea, react to only one of his claims, and discuss only one of his design principles in any detail. This entry grew much longer than I originally intended. If you would like to skip most of my reaction, jump to the mini-essay that is the heart of this entry, Programming By Reacting, in the REPL.

~~~~

Programmers often discuss their productivity as at least a partial result of the programming environments they use. Victor thinks this is dangerously wrong. It implies, he says, that the difficulty with programming is that we aren't doing it fast enough.

But speed is not the problem. The problem is that our programming environments don't help us to think. We do all of our programming in our minds, then we dump our ideas into code via the editor.

Our environments should do more. They should be our external imagination. They should help us see how our programs work as we are writing them.

This is an attractive guiding principle for designing tools to help programmers. Victor elaborates this principle into a set of five design principles for an environment:

  • read the vocabulary -- what do these words mean?
  • follow the flow -- what happens when?
  • see the state -- what is the computer thinking?
  • create by reacting -- start somewhere, then sculpt
  • create by abstracting -- start concrete, then generalize

Victor's talk then discussed each design principle in detail and showed how one might implement the idea using JavaScript and Processing.js in a web browser. The demo was cool enough that the StrangeLoop crowd broke into applause at least twice during the talk. Read the essay.

~~~~

As I watched the talk, I found myself reacting in a way I had not expected. So many people have spoken so highly of this work. The crowd was applauding! Why was I not as enamored? I was impressed, for sure, and I was thinking about ways to use these ideas to improve my teaching. But I wasn't falling head over heels in love.

A Strong Claim

First, I was taken aback by a particular claim that Victor made at the beginning of his talk as one of the justifications for this work:

If a programmer cannot see what a program is doing, she can't understand it.

Unless he means this metaphorically, seeing "in the mind's eye", then it is simply wrong. We do understand things we don't see in physical form. We learn many things without seeing them in physical form. During my doctoral study, I took several courses in philosophy, and only rarely did we have recourse to images of the ideas we were studying. We held ideas in our head, expressed in words, and manipulated them there.

We did externalize ideas, both as a way to learn them and think about them. But we tended to use stories, not pictures. By speaking an idea, or writing it down, and sharing it with others, we could work with them.

So, my discomfort with one of Victor's axioms accounted for some of my unexpected reaction. Professional programmers can and do manipulate ideas abstractly. Visualization can help, but when is it necessary, or even most helpful?

Learning Versus Doing

This leads to a second element of my concern. I think I had a misconception about Victor's work. His talk and its title, "Visible Programming", led me to think his ideas are aimed primarily at working programmers, that we need to make programs visible for all programmers.

The title of his essay, "Learnable Programming", puts his claims into a different context. We need to make programs visible for people who are learning to program. This seems a much more reasonable position on its face. It also lets me see the axiom that bothered me so much in a more sympathetic light: If a novice programmer cannot see what a program is doing, then she may not be able to understand it.

Seeing how a program works is a big part of learning to program. A few years ago, I wrote about "biction" and the power of drawing a picture of what code does. I often find that if I require a student to draw a picture of what his code is doing before he can ask me for debugging help, he will answer his own question before getting to me.

The first time a student experiences this can be a powerful experience. Many students begin to think of programming in a different way when they realize the power of thinking about their programs using tools other than code. Visible programming environments can play a role in helping students think about their programs, outside their code and outside their heads.

I am left puzzling over two thoughts:

  • How much of the value my students see in pictures comes not from seeing the program work but from drawing the picture themselves -- the act of reflecting on the program? If our tools visualize the code for them, will we see the same learning effect that we see when they draw their own pictures?

  • Certainly Victor's visible programming tools can help learners. How much will they help programmers once they become experts? Ben Shneiderman's Designing the User Interface taught me that novices and experts have different needs, and that it's often difficult to know what works well for experts until we run experiments.

Mark Guzdial has written a more detailed analysis of Victor's essay from the perspective of a computer science educator. As always, Mark's ideas are worth reading.

Programming By Reacting, in the REPL

My favorite parts of this talk were the sections on creating by reacting and abstracting. Programmers, Victor says, don't work like other creators. Painters don't stare at a blank canvas, think hard, create a painting in their minds, and then start painting the picture they know they want to create. Sculptors don't stare at a block of stone, envision in their mind's eye the statue they intend to make, and then reproduce that vision in stone. They start creating, and react, both to the work of art they are creating and to the materials they are using.

Programmers, Victor says, should be able to do the same thing -- if only our programming environments helped us.

As a teacher, I think this is an area ripe for improvement in how we help students learn to program. Students open up their text editor or IDE, stare at that blank screen, and are terrified. What do I do now? A lot of my work over the last fifteen to twenty years has been in trying to find ways to help students get started, to help them to overcome the fear of the blank screen.

My approaches haven't been through visualization, but through other ways to think about programs and how we grow them. Elementary patterns can give students tools for thinking about problems and growing their code at a scale larger than characters or language keywords. An agile approach can help them start small, add one feature at a time, proceed in confidence with working tests, and refactor to make their code better as they go along. Adding Victor-style environment support for the code students write in CS1 and CS2 would surely help as well.

However, as I listened to Victor describe support for creating by reacting, and then abstracting variables and functions out of concrete examples, I realized something. Programmers don't typically write code in an environment with data visualizations of the sort Victor proposes, but we do program in the style that such visualizations enable.

We do it in the REPL!

A simple, interactive computer programming environment enables programmers to create by reacting.

  • They write short snippets of code that describe how a new feature will work.
  • They test the code immediately, seeing concrete results from concrete examples.
  • They react to the results, shaping their code in response to what the code and its output tell them.
  • They then abstract working behaviors into functions that can be used to implement another level of functionality.

Programmers from the Lisp and Smalltalk communities, and from the rest of the dynamic programming world, will recognize this style of programming. It's what we do, a form of creating by reacting, from concrete examples in the interaction pane to code in the definitions pane.
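
This cycle can be sketched in any language with an interactive prompt. Here is a minimal Python illustration of the create-by-reacting, then-abstract rhythm; the snippet and function names are my own invention, not from Victor's talk:

```python
# Step 1: try a concrete snippet at the prompt and inspect the result.
words = "the quick brown fox".split()
lengths = [len(w) for w in words]        # concrete output: [3, 5, 5, 3]

# Step 2: react -- the concrete result looks right, so we capture the
# behavior in a function we can reuse.
def word_lengths(sentence):
    return [len(w) for w in sentence.split()]

# Step 3: abstract further -- the working pieces become building blocks
# for the next level of functionality.
def longest_word(sentence):
    return max(sentence.split(), key=len)

print(word_lengths("the quick brown fox"))   # [3, 5, 5, 3]
print(longest_word("the quick brown fox"))   # quick
```

The interaction pane gives immediate, concrete feedback at each step; only after the programmer trusts the behavior does it move into the definitions pane as an abstraction.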

In the agile software development world, test-first development encourages a similar style of programming, from concrete examples in the test case to minimal code in the application class. Test-driven design stimulates an even more consciously reactive style of programming, in which the programmer reacts both to the evolving program and to the programmer's evolving understanding of it.
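
A minimal sketch of that test-first style, with an invented Account class for illustration: the concrete example is written as a test before the code exists, and the class then grows just enough to make it pass.

```python
# The class grows only as far as the concrete example demands.
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

def test_deposit_increases_balance():
    account = Account()             # concrete example in the test...
    account.deposit(100)
    assert account.balance == 100   # ...drives the minimal code above

test_deposit_increases_balance()
```

Each new test is a concrete example the programmer reacts to, shaping the application code in response.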

The result is something similar to Victor's goal for programmers as they create abstractions:

The learner always gets the experience of interactively controlling the lower-level details, understanding them, developing trust in them, before handing off that control to an abstraction and moving to a higher level of control.

It seems that Victor would like to provide even more support for novices than these tools can, down to visualizing what the program does as they type each line of code. IDEs with autocomplete are perhaps the closest analog in our current arsenal. Perhaps we can do more, not only for novices but also for professionals.

~~~~

I love the idea that our environments could do more for us, to be our external imaginations.

Like many programmers, though, as I watched this talk, I occasionally wondered, "Sure, this works great if you're creating art in Processing. What about when I'm writing a compiler? What should my editor do then?"

Victor anticipated this question and pre-emptively answered it. Rather than asking, "How does this scale to what I do?", we should turn the question inside out: these are the design requirements for a good environment; how do we change programming to fit?

I doubt such a dogmatic turn will convince skeptics with serious doubts about this approach.

I do think, though, that we can reformulate the original question in a way that focuses on helping "real" programmers. What does a non-graphical programmer need in an external imagination? What kind of feedback -- frequent, even in-the-moment -- would be most helpful to, say, a compiler writer? How could our REPLs provide even more support for creating, reacting, and abstracting?

These questions are worth asking, whatever one thinks of Victor's particular proposal. Programmers should be grateful for his causing us to ask them.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

September 20, 2012 8:09 PM

Computer Science is a Liberal Art

Over the summer, I gave a talk as part of a one-day conference on the STEM disciplines for area K-12, community college, and university advisors. They were interested in, among other things, the kind of classes that CS students take at the university and the kind of jobs they get when they graduate.

In the course of talking about how some of the courses our students take (say, algorithms and the theory of computing) seem rather disconnected from many of the jobs they get (say, web programmer and business analyst), I claimed that the more abstract courses prepare students to understand the parts of the computing world that never change, and the ones that do. The specific programming languages or development stack they use after they graduate to build financial reporting software may change occasionally, but the foundation they get as a CS major prepares them to understand what comes next and to adapt quickly.

In this respect, I said, a university CS education is not job training. Computer Science is a liberal art.

This is certainly true when you compare university CS education with what students get at a community college. Students who come out of a community college networking program often possess specific marketable skills at a level we are hard-pressed to meet in a university program. We bank our program's value on how well it prepares students for a career in which networking infrastructure changes multiple times and our grads are asked to work at the intersection of networks and other areas of computing, some of which may not exist yet.

It is also true relative to the industries they enter after graduation. A CS education provides a set of basic skills and, more important, several ways to think about problems and formulate solutions. Again, students who come out of a targeted industry or 2-year college training program in, say, web dev, often have "shovel ready" skills that are valuable in industry and thus highly marketable. We bank our program's value on how well it prepares students for a career in which ASP turns to JSP turns to PHP turns to JavaScript. Our students should be prepared to ramp up quickly and have a shovel in their hands producing value soon.

And, yes, students in a CS program must learn to write code. That's a basic skill. I often hear people comment that computer science programs do not prepare students well for careers in software development. I'm not sure that's true, at least at schools like mine. We can't get away with teaching all theory and abstraction; our students have to get jobs. We don't try to teach them everything they need to know to be good software developers, or even many particular somethings. That should and will come on the job. I want my students to be prepared for whatever they encounter. If their company decides to go deep with Scala, I'd like my former students to be ready to go with them.

In a comment on John Cook's timely blog entry How long will there be computer science departments?, Daniel Lemire suggests that we emulate the model of medical education, in which doctors serve several years in residency, working closely with experienced doctors and learning the profession deeply. I agree. Remember, though, that aspiring doctors go to school for many years before they start residency. In school, they study biology, chemistry, anatomy, and physiology -- the basic science at the foundation of their profession. That study prepares them to understand medicine at a much deeper level than they otherwise might. That's the role CS should play for software developers.

(Lemire also smartly points out that programmers have the ability to do residency almost any time they like, by joining an open source project. I love to read about how Dave Humphrey and people like him bring open-source apprenticeship directly into the undergrad CS experience and wonder how we might do something similar here.)

So, my claim that Computer Science is a liberal arts program for software developers may be crazy, but it's not entirely crazy. I am willing to go even further. I think it's reasonable to consider Computer Science as part of the liberal arts for everyone.

I'm certainly not the first person to say this. In 2010, Doug Baldwin and Alyce Brady wrote a guest editors' introduction to a special issue of the ACM Transactions on Computing Education called Computer Science in the Liberal Arts. In it, they say:

In late Roman and early medieval times, seven fields of study, rooted in classical Greek learning, became canonized as the "artes liberales" [Wagner 1983], a phrase denoting the knowledge and intellectual skills appropriate for citizens free from the need to labor at the behest of others. Such citizens had ample leisure time in which to pursue their own interests, but were also (ideally) civic, economic, or moral leaders of society.

...

[Today] people ... are increasingly thinking in terms of the processes by which things happen and the information that describes those processes and their results -- as a computer scientist would put it, in terms of algorithms and data. This transformation is evident in the explosion of activity in computational branches of the natural and social sciences, in recent attention to "business processes," in emerging interest in "digital humanities," etc. As the transformation proceeds, an adequate education for any aspect of life demands some acquaintance with such fundamental computer science concepts as algorithms, information, and the capabilities and limitations of both.

The real value in a traditional Liberal Arts education is in helping us find better ways to live, to expose us to the best thoughts of men and women in hopes that we choose a way to live, rather than have history or accident choose a way to live for us. Computer science, like mathematics, can play a valuable role in helping students connect with their best aspirations. In this sense, I am comfortable at least entertaining the idea that CS is one of the modern liberal arts.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

September 16, 2012 9:17 AM

An Advantage To Needing to Ask For Help

A week or so ago I tweeted about a student who came by the office for some help with a programming assignment. When told that he'd be hearing this explanation again in class later in the day, he said, that's okay, "that way I get to learn it better".

This episode came to mind again when I wrote this paragraph in my entry yesterday about learning to talk about their programs, the ideas they embody, and the reasons they wrote the code they did:

To learn such skills, students need practice. The professor needs to ask open-ended questions in class and out. Asking good questions outside of class is especially important because it's in the hallway, the lab, and office hours where students find themselves one on one with the professor and have to answer the questions themselves. They can't fall back on the relative anonymity of even a small class to avoid the hard work of forming a thought and trying to say it out loud.

This is another good reason for students to go to office hours and otherwise to engage with the professor outside of class: not only do they get answers to their questions, they also get more and better practice talking about problems and solutions than students who don't.

This offers an unexpected advantage to the student who doesn't quite "get it" yet over the student who just barely gets it: The former might well come in to get help. The latter probably won't. The former gets more interaction and more individualized practice. The latter gets neither.

A lot of professors encourage all students to talk to them about their programs. Obviously, students doing poorly can benefit from help. Students at the top of the curve are ones who often can benefit from going beyond what they see in class, going farther or deeper. But perhaps we should work even harder to encourage an unlikely group of students to come see us: the ones who are doing just well enough. They don't stand out in any way that makes the prof seek them out, and that may place them at risk of losing ground to other students.

That's a tough sell, though. It's human nature not to seek help when we don't think we need it. If we seem to understand the material and are doing okay on the assignments, do we really need help? There are also pragmatic issues like time management. Students who are doing okay in my course are likely to focus their efforts on other courses, where they feel like they need more work and help.

So, it becomes even more important for the professor to engage all of the students in a course, both as a group and as individuals, in reflective thinking, speaking, and writing. Otherwise, some students who need practice with these skills might not seek it out on their own.

Most of us know that there is an advantage to being willing to ask questions. Rarely do we think about how there might be an advantage to needing to ask questions.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 15, 2012 10:04 AM

Express Yourself

One of my CS colleagues gives only multiple-choice and true/false tests. He swears by how well scores on them track with performance on programming assignments, which he views as a valuable form of consistency. And, boy, are they easy to grade.

I'm not a fan. I can't recall the last time I used them. If I ever did, it was surely for a narrow purpose, say, to test quickly the student's familiarity with vocabulary. "Choose one" questions are too limiting to be used wholesale, because they test only recognition, not recall. They certainly can't test the synthetic skills we need as programmers and solvers of problems, and those are the most important part of computer science.

My course this semester has brought this idea to the forefront of my mind again. I've helped students a lot on their first two programming assignments, as they learn a little Java and revisit the concept of modularity they learned in their Data Structures course. I've been struck by how often they cannot talk about their code.

When I ask "What is this statement doing?" or "Why are you <x>?", many struggle. They want to answer in terms of the code they have written, either reading it back to me in English or telling me the code that surrounds it. This is true for almost any value of <x>. This week, "assigning a value to this variable" and "using an if statement" were common topics.

Most of these students had a reason that they wrote the code they did, even if it was merely a reaction to a specific behavior they observed in the compiler or executable. But even in the best cases the idea is usually not clear in their minds. Unclear ideas are nebulous and rambling. This is why the students have so much trouble trying to express them at all, let alone concisely.

Perhaps it's not surprising that students at this level struggle with this problem. Most are coming directly out of their first-year courses, where they learned a single programming language and used it to build data structures and small programs. They are often able to solve the tasks we set before them in these courses without having to understand their own programs at a deep level. The tasks are relatively simple, and the programs are small enough that bits of code cobbled together from examples they've seen get the job done.

It takes practice to learn how to think at a higher level, to express ideas about solutions and code without speaking in terms of code, or even the solutions they have settled on.

To learn such skills, students need practice. The professor needs to ask open-ended questions in class and out. Asking good questions outside of class is especially important because it's in the hallway, the lab, and office hours where students find themselves one on one with the professor and have to answer the questions themselves. They can't fall back on the relative anonymity of even a small class to avoid the hard work of forming a thought and trying to say it out loud.

They also need practice in the form of homework questions, quizzes, and even tests. Each of these exacts an increasing level of accountability, which creates an increasing sense of urgency for their efforts. One of my favorite experiences as a teacher is when students visit after a course is over and tell me about the pride they felt while taking the final exam and finally -- finally! -- feeling like they could express the thoughts they had in their minds. That's a proud moment for the teacher, too.

The first four weeks of this course reminds me that one of the essential goals for the intermediate computing course is to strengthen the skills needed to reason about ideas and express them clearly. When students make the move to a second programming language and a second programming style, they have to come to grips with the fact that programming is not just about Python or Ada anymore. In fact, it never was.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 11, 2012 3:47 PM

The Kind of Teacher We All Want To Have

In an old Geek of the Week interview, the interviewer asks Donald Knuth what gets him started writing. Knuth gives an answer that puts his writing in the context of teaching someone who is reading the book because she wants to, not because she must. His answer culminates in:

Instead of trying to impress the reader with what I know, I try to explain why the things I've learned impress me.

That's not only the kind of teacher we all want to have, it's the kind of teacher we all want to be.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 05, 2012 5:24 PM

Living with the Masters

I sometimes feel guilty that most of what I write here describes connections between teaching or software development and what I see in other parts of the world. These connections are valuable to me, though, and writing them down is valuable in another way.

I'm certainly not alone. In Why Read?, Mark Edmondson argues for the value of reading great literature and trying on the authors' view of the world. Doing so enables us to better understand our own view of the world. It also gives us the raw material out of which to change our worldview, or build a new one, when we encounter better ideas. In the chapter "Discipline", Edmondson writes:

The kind of reading that I have been describing here -- the individual quest for what truth a work reveals -- is fit for virtually all significant forms of creation. We can seek vital options in any number of places. They may be found for this or that individual in painting, in music, in sculpture, in the arts of furniture making or gardening. Thoreau felt he could derive a substantial wisdom by tending his bean field. He aspired to "know beans". He hoed for sustenance, as he tells us, but he also hoed in search of tropes, comparisons between what happened in the garden and what happened elsewhere in the world. In his bean field, Thoreau sought ways to turn language -- and life -- away from old stabilities.

I hope that some of my tropes are valuable to you.

The way Edmondson writes of literature and the liberal arts applies to the world of software in much more direct ways, too. First, there is the research literature of computing and software development. One can seek truth in the work of Alan Kay, David Ungar, Ward Cunningham, or Kent Beck. One can find vital options in the life's work of Robert Floyd, Peter Landin, or Alan Turing; Herbert Simon, Marvin Minsky, or John McCarthy. I spent much of my time in grad school immersed in the writings and work of B. Chandrasekaran, which affected my view of intelligence in both humans and machines.

Each of these people offers a particular view into a particular part of the computing world. Trying out their worldviews can help us articulate our own worldviews better, and in the process of living their truths we sometimes find important new truths for ourselves.

We in computing need not limit ourselves to the study of research papers and books. As Edmondson says the individual quest for the truth revealed in a work "is fit for virtually all significant forms of creation". Software is a significant form of creation, one not available to our ancestors even sixty years ago. Live inside any non-trivial piece of software for a while, especially one that has withstood the buffets of human desire over a period of time, and you will encounter truth -- truths you find there, and truths you create for yourself. A few months trying on Smalltalk and its peculiar view of the world taught me OOP and a whole lot more.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

September 04, 2012 4:31 PM

The Trust Between Student and Teacher

or, "Where's Your Github Repo?"

The last couple of class sessions have taught me a lot about what my students know and what they are ready to learn about programming. As a result, I'm adjusting my expectations and plans for the course. Teachers have to be agile.

A couple of my students are outliers. They have a lot of programming experience, sometimes with relatively exotic languages like Clojure. I hope that their time with me this semester is as useful to them as it is to less experienced students. Even the more advanced students usually have a lot to learn about building big systems, and especially about OOP.

After an interesting conversation with one of the more advanced students today after class, I was reminded of the role that trust plays in learning. A student has to trust that his or her professor knows enough of the right stuff to teach the class.

Most students trust their professors implicitly, based on their academic degrees and their employment at a university. (*) That makes teaching much easier. If I as a teacher start every new topic or even every course having to establish my credibility, or build trust from scratch, progress is slow.

Once the course gets going, every interaction between professor and student either reinforces that implicit trust or diminishes it. That's one of the most important features of every class day, and one we professors don't always think about explicitly as we prepare.

Working with more advanced students can be a challenge, especially in a course aimed at less experienced students. Much of what we talk about in class is at a more basic level than the one on which the advanced students are already working. "Why listen to a prof talk about the design of a method when I've already written thousands of lines of code in a complex language?"

That's a fair question. It must be tough for some students at that level to accord the professor the same level of implicit trust that a beginner does. This wasn't a problem for me when I was among the more advanced students in the class. I found it easy to trust my professors' expertise. That's how I was raised. But not all students have that initial disposition.

I've learned over the years to empathize more in this regard with the experienced students in my classes and to make efforts to earn and maintain their trust. Without that, I have little chance of helping them to learn anything in my course.

Of course, we also live increasingly in a world in which the tools of our trade give us a vehicle for establishing trust. I love it when students or prospective students ask me, "What are you working on in your spare time?" I'm waiting for the day when our upper-division students routinely ask their profs, "Where's your Github repo?"

~~~~

(*) The department head in me is acutely aware that the department also puts its credibility on the line every time a professor walks into the classroom. That is a big challenge even for departments with strong faculties.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 01, 2012 10:18 AM

Making Assumptions

a dartboard

Patrick Honner has been writing a series of blog posts reviewing problems from the June 2012 New York State Math Regents exams. A recent entry considered a problem in which students were asked to compute the probability that a dart hits the bull's eye on a dartboard. This question requires the student to make a specific assumption: "that every point on the target is equally likely to be hit". Honner writes:

... It's not necessarily bad that we make such assumptions: refining and simplifying problems so they can be more easily analyzed is a crucial part of mathematical modeling and problem solving.

What's unfortunate is that, in practice, students are kept outside this decision-making process: how and why we make such assumptions isn't emphasized, which is a shame, because exploring such assumptions is a fundamental mathematical process.

The same kinds of assumptions are built into even the most realistic problems that we set before our students. But discussing assumptions is an essential part of doing math. Which assumptions are reasonable? Which are necessary? What is the effect of a particular assumption on the meaning of the problem, on the value of the answer we will obtain? This kind of reasoning is, in many ways, the real math in a problem. Once we have a formula or two, we are down to crunching numbers. That's arithmetic.
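Under the exam's implicit assumption -- that every point on the target is equally likely to be hit -- the bull's-eye question reduces to a ratio of areas. Here is a minimal sketch of that reasoning as a program, with made-up board dimensions for illustration:

```python
import math

def hit_probability(target_radius, region_radius):
    """P(dart lands in a circular region) under the uniformity
    assumption: the region's area divided by the target's area."""
    return (math.pi * region_radius ** 2) / (math.pi * target_radius ** 2)

# A hypothetical board: 9-inch target with a 1-inch bull's eye.
p = hit_probability(9, 1)   # (1/9)^2 = 1/81
```

Note how the uniformity assumption does all the work: drop it (say, because players aim at the bull's eye), and the area ratio tells us nothing.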

Computer science teachers face the same risks when we pose problems to our students, including programming problems. Discovering the boundaries of a problem and dealing with the messy details that live on the fringe are an essential part of making software. When we create assignments that can be neatly solved in a week or two, we hide "a fundamental computing process" from our students. We also rob them of a lot of fun.

As Honner says, though, making assumptions is not necessarily bad. In the context of teaching a course, some assumptions are necessary. Sometimes, we need to focus our students' attention on a specific new skill to be learned or honed. Tidying up the boundaries of a problem brings that skill into greater relief and eliminates what are at the moment unnecessary distractions.

It is important, though, for a computing curriculum to offer students increasing opportunities to confront the assumptions we make and begin to make assumptions for themselves. That level of modeling is also a specific skill to be learned and honed. It also can make class more fun for the professor, if a lot messier when it comes time to evaluate student work and assign grades.

Even when we have to make assumptions prior to assigning a problem, discussing them explicitly with students can open their eyes to the rest of the complexity in making software. Besides, some students already sense or know that we are hiding details from them, and having the discussion is a way to honor their knowledge -- and earn their respect.

So, the next time you assign a problem, ask yourself: What assumptions have I made in simplifying this problem? Are they necessary? If not, can I loosen them? If yes, can my students benefit from discussing them?

And be prepared... If you leave a few messy assumptions lying around a problem for your students to confront and make on their own, some students will be unhappy with you. As Honner says, we teachers spend a lot of time training students to make implicit assumptions unthinkingly. In some ways, we are too successful for our own good.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 31, 2012 3:22 PM

Two Weeks Along the Road to OOP

The month has flown by, preparing for and now teaching our "intermediate computing" course. Add to that a strange and unusual set of administrative issues, and I've found no time to blog. I did, however, manage to post what has become my most-retweeted tweet ever:

I wish I had enough money to run Oracle instead of Postgres. I'd still run Postgres, but I'd have a lot of cash.

That's an adaptation of a tweet originated by @petdance and retweeted my way by @logosity. I polished it up, sent it off, and -- it took off for the sky. It's been fun watching its ebb and flow, as it reaches new sub-networks of people. From this experience I must learn at least one lesson: a lot of people are tired of sending money to Oracle.

The first two weeks of my course have led the students a few small steps toward object-oriented programming. I am letting the course evolve, with a few guiding ideas but no hard-and-fast plan. I'll write about the course's structure after I have a better view of it. For now, I can summarize the first four class sessions:

  1. Run a simple "memo pad" app, trying to identify behavior (functions) and state (persistent data). Discuss how different groupings of the functions and data might help us to localize change.
  2. Look at the code for the app. Discuss the organization of the functions and data. See a couple of basic design patterns, in particular the separation of model and view.
  3. Study the code in greater detail, with a focus on the high-level structure of an OO program in Java.
  4. Study the code in greater detail, with a focus on the lower-level structure of classes and methods in Java.

The reason we can spend so much time talking about a simple program is that students come to the course without (necessarily) knowing any Java. Most come with knowledge of Python or Ada, and their experiences with such different languages create an interesting space in which to encounter Java. Our goal this semester is for students to learn their second language as much as possible, rather than having me "teach" it to them. I'm trying to expose them to a little more of the language each day, as we learn about design in parallel. This approach works reasonably well with Scheme and functional programming in a programming languages course. I'll have to see how well it works for Java and OOP, and adjust accordingly.

Next week we will begin to create things: classes, then small systems of classes. Homework 1 has them implementing a simple array-based class to an interface. It will be our first experience with polymorphic objects, though I plan to save that jargon for later in the course.
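The idea behind that homework -- one interface, many possible implementations -- is the seed of polymorphism. Here is a minimal sketch in Python (the assignment itself is in Java, and the names here are hypothetical, not the actual homework spec):

```python
from abc import ABC, abstractmethod

class Bag(ABC):
    """A hypothetical interface, like the one in Homework 1."""
    @abstractmethod
    def add(self, item): ...
    @abstractmethod
    def size(self): ...

class ArrayBag(Bag):
    """A simple array-backed implementation of the interface."""
    def __init__(self):
        self._items = []
    def add(self, item):
        self._items.append(item)
    def size(self):
        return len(self._items)

def fill(bag, items):
    # Polymorphism in action: this code works with *any* Bag,
    # array-backed or otherwise, because it talks to the interface.
    for item in items:
        bag.add(item)

b = ArrayBag()
fill(b, [1, 2, 3])
```

A client like `fill` never needs to change when we later swap in a linked implementation -- which is the whole point of programming to an interface.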

Finally, this is the new world of education: my students are sending me links to on-line sites and videos that have helped them learn programming. They want me to check them out and share them with the other students. Today I received a link to The New Boston, which has among its 2500+ videos eighty-seven beginning Java and fifty-nine intermediate Java titles. Perhaps we'll come to a time when I can out-source all instruction on specific languages and focus class time on higher-level issues of design and programming...


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

August 27, 2012 12:53 PM

My Lack of Common Sense

In high school, I worked after school doing light custodial work for a local parochial school for a couple of years. One summer, a retired guy volunteered to lead me and a couple of other kids doing maintenance projects at the school and church.

One afternoon, he found me trying to loosen the lid on a paint can using one of my building keys. Yeah, that was stupid. He looked at me as if I were an alien, got a screwdriver, and opened the can.

Later that summer, I overheard him talking to the secretary. She asked how I was working out, and he said something to the effect of "nice kid, but he has no common sense".

That stung. He was right, of course, but no one likes to be thought of as not capable, or not very smart. Especially someone who likes to think of himself as someone who knows stuff.

I still remember that eavesdropped conversation after all these years. I knew just what he meant at the time, and I still do. For many years I wondered, what was wrong with me?

It's true that I didn't have much common sense as a handyman back then. To be honest, I probably still don't. I didn't have much experience doing such projects before I took that job. It's not something I learned from my dad. I'd never seen a bent key before, at least not a sturdy house key or car key, and I guess it didn't occur to me that one could bend.

The A student in me wondered why I hadn't deduced the error of my ways from first principles. As with the story of Zog, it was immediately obvious as soon as it was pointed out to me. Explanation-based learning is for real.

Over time, though, I have learned to cut myself some slack. Voltaire was right: Common sense is not so common. These days, people often say that to mean there are far too many people like me who don't have the sense to come in out of the rain. But, as the folks at Wikipedia recognize, that sentence can mean something else even more important. Common sense isn't shared when people have not had the same experiences, or have not learned it some other way.

Maybe there are still some things that most of us can count on as common, by virtue of living in a shared culture. But I think we generally overestimate how much of any given person's knowledge is like that. With an increasingly diverse culture, common experience and common cultural exposure are even harder to come by.

That gentleman and secretary probably forgot about their conversation within minutes, but the memory of his comment still stings a little. I don't think I'd erase the memory, though, even if I could. Every so often, it reminds me not to expect my students to have too much common sense about programs or proofs or programming languages or computers.

Maybe they just haven't had the right experiences yet. It's my job to help them learn.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

August 13, 2012 3:56 PM

Lessons from Unix for OO Design

Pike and Kernighan's Program Design in the UNIX Environment includes several ideas I would like for my students to learn in my Intermediate Computing course this fall. Among them:

... providing the function in a separate program makes convenient options ... easier to invent, because it isolates the problem as well as the solution.

In OO, objects are the packages that create possibilities for us. The beauty of this lesson is the justification: because a class isolates the problem as well as the solution.

This solution affects no other programs, but can be used with all of them.

This is one of the great advantages of polymorphic objects.

The key to problem-solving on the UNIX system is to identify the right primitive operations and to put them at the right place.

Methods should live in the objects whose data they manipulate. One of the hard lessons for novice OO programmers coming from a procedural background is putting methods with the thing, not a faux actor.
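A minimal sketch of that lesson, using a hypothetical `Account` class (Python here for brevity; the course itself uses Java). The procedural habit is a free function that reaches into another thing's data; the OO habit puts the method where the data lives:

```python
# Procedural habit: a free function manipulates the object's data
# from the outside.
def withdraw_from(account, amount):
    if amount <= account.balance:
        account.balance -= amount

# OO habit: the method lives in the object whose data it manipulates.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount <= self.balance:
            self.balance -= amount

a = Account(100)
a.withdraw(30)   # the object enforces its own rules
```

With the method inside the class, the overdraft rule has exactly one home, and no client code can forget to apply it.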

UNIX programs tend to solve general problems rather than special cases.

Objects that are too specific should be rare, at least for beginning programmers. Specificity in interface often indicates that implementation detail is leaking out.

Merely adding features does not make it easier for users to do things -- it just makes the manual thicker.

Keep objects small and focused. A big interface is often evidence of an object waiting to be born.

~~~~

In many ways, The Unix Way is contrary to object-oriented programming. Or so many of my Linux friends tell me. But I'm quite comfortable with the parallels found in these quotes, because they are more about good design in general than about Unix or OOP themselves.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 09, 2012 1:36 PM

Sentences to Ponder

In Why Read?, Mark Edmundson writes:

A language, Wittgenstein thought, is a way of life. A new language, whether we learn it from a historian, a poet, a painter, or a composer of music, is potentially a new way to live.

Or from a programmer.

In computing, we sometimes speak of Perlis languages, after one of Alan Perlis's best-known epigrams: A language that doesn't affect the way you think about programming is not worth knowing. A programming language can change how we think about our craft. I hope to change how my students think about programming this fall, when I teach them an object-oriented language.

But for those of us who spend our days and nights turning ideas into programs, a way of thinking is akin to a way of life. That is why the wider scope of Wittgenstein's assertion strikes me as so appropriate for programmers.

Of course, I also think that programmers should follow Edmundson's advice and learn new languages from historians, writers, and artists. Learning new ways to think and live isn't just for humanities majors.

(By the way, I'm enjoying reading Why Read? so far. I read Edmundson's Teacher many years ago and recommend it highly.)


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

August 08, 2012 1:50 PM

Examples First, Names Last

Earlier this week, I reviewed a draft chapter from a book a friend is writing, which included a short section on aspect-oriented programming. The section used common AOP jargon: "cross cutting", "advice", and "point cut". I know enough about AOP to follow his text, but I figured that many of his readers -- young software developers from a variety of backgrounds -- would not. On his use of "cross cutting", I commented:

Your ... example helps to make this section concrete, but I bet you could come up with a way of explaining the idea behind AOP in a few sentences that would be (1) clear to readers and (2) not use "cross cutting". Then you could introduce the term as the name of something they already understand.

This may remind you of the famous passage from Richard Feynman about learning names and understanding things. (It is also available on a popular video clip.) Given that I was reviewing a chapter for a book of software patterns, it also brought back memories of advice that Ralph Johnson gave many years ago on the patterns discussion list. Most people, he said, learn best from concrete examples. As a result, we should write software patterns in such a way that we lead with a good example or two and only then talk about the general case. In pattern style, he called this idea "Concrete Before Abstract".
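In that spirit, here is one concrete-before-abstract sketch of AOP's vocabulary, using a hypothetical logging concern in Python. The wrapper plays the role of "advice", the set of functions we choose to wrap plays the role of a "point cut", and logging itself is the concern that "cross cuts" both functions without touching their bodies:

```python
import functools

log = []

def logged(fn):
    """'Advice': extra behavior woven in around a function call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        log.append(f"calling {fn.__name__}")
        return fn(*args, **kwargs)
    return wrapper

# The decorated functions stand in for a 'point cut': the logging
# concern cuts across both without appearing in either body.
@logged
def deposit(balance, amount):
    return balance + amount

@logged
def withdraw(balance, amount):
    return balance - amount

balance = withdraw(deposit(100, 50), 30)   # log records both calls
```

Once a reader sees this, the jargon names something they already understand: real AOP tools such as AspectJ generalize the same move, selecting join points by pattern rather than by hand-placed decorators.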

I try to follow this advice in my teaching, though I am not dogmatic about it. There is a lot of value in mixing up how we organize class sessions and lectures. First, different students connect better with some approaches than others, so variety increases the chances of connecting with everyone a few times each semester. Second, variety helps to keep students interested, and being interested is a key ingredient in learning.

Still, I have a preference for approaches that get students thinking about real code as early as possible. Starting off by talking about polymorphism and its theoretical forms is a lot less effective at getting the idea across to undergrads than showing students a well-chosen example or two of how plugging a new object into an application makes it easier to extend and modify programs.

So, right now, I have "Concrete Before Abstract" firmly in mind as I prepare to teach object-oriented programming to our sophomores this fall.

Classes start in twelve days. I figured I'd be blogging more by now about my preparations, but I have been rethinking nearly everything about the way I teach the course. That has left my mind more muddled than settled for long stretches. Still, my blog is my outboard brain, so I should be rethinking more in writing.

I did have one crazy idea last night. My wife learned Scratch at a workshop this summer and was talking about her plans to use it as a teaching tool in class this fall. It occurred to me that implementing Scratch would be a fun exercise for my class. We'll be learning Java and a little graphics programming as a part of the course, and conceptually Scratch is not too many steps from the pinball game construction kit in Budd's Understanding Object-Oriented Programming with Java, the textbook I have used many times in the course. I'm guessing that Budd's example was inspired by Bill Budge's game for Electronic Arts, Pinball Construction Set. (Unfortunately, Budd's text is now almost as out of date as that 1983 game.)

Here is an image of a game constructed using the pinball kit and Java's AWT graphics framework:

a pinball game constructed using a simple game kit

The graphical ideas needed to implement Scratch are a bit more complex, including at least:

  • The items on the canvas must be clickable and respond to messages.
  • Items must be able to "snap" together to create units of program. This could happen when a container item such as a choice or loop comes into contact with an item it is to contain.

The latter is an extension of collision-detecting behavior that students would be familiar with from earlier "ball world" examples. The former is something we occasionally do in class anyway; it's awfully handy to be able to reconfigure the playing field after seeing how the game behaves with the ball in play. The biggest change would be that the game items are little pieces of program that know how to "interpret" themselves.
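The snap-on-contact behavior can be sketched with ordinary axis-aligned bounding boxes, just like the collision tests in the ball-world examples. This is a hypothetical sketch in Python (the course would do it in Java with AWT), with invented classes and coordinates:

```python
class Item:
    """A hypothetical draggable item with an axis-aligned bounding box."""
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h

def overlaps(a, b):
    """Standard bounding-box collision test."""
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h)

def snap(block, container):
    """If a dragged block touches a container item, align it
    directly beneath the container, as Scratch blocks do."""
    if overlaps(block, container):
        block.x, block.y = container.x, container.y + container.h
        return True
    return False

loop = Item(0, 0, 100, 40)    # a container item, e.g., a loop block
stmt = Item(90, 30, 80, 20)   # a statement block dragged nearby
snapped = snap(stmt, loop)    # stmt jumps to (0, 40), under the loop
```

The interesting design question for students is where `snap` should live: in the block, in the container, or in the canvas that holds them both.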

As always, the utility of a possible teaching idea lies in the details of implementing it. I'll give it a quick go over the next week to see if it's something I think students would be able to handle, either as a programming assignment or as an example we build and discuss in class.

I'm pretty excited by the prospect, though. If this works out, it will give me a nice way to sneak basic language processing into the course in a fun way. CS students should see and think about languages and how programs are processed throughout their undergrad years, not only in theory courses and programming languages courses.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

August 03, 2012 3:23 PM

How Should We Teach Algebra in 2012?

Earlier this week, Dan Meyer took to task a New York Times opinion piece from the weekend, Is Algebra Necessary?:

The interesting question isn't, "Should every high school graduate in the US have to take Algebra?" Our world is increasingly automated and programmed and if you want any kind of active participation in that world, you're going to need to understand variable representation and manipulation. That's Algebra. Without it, you'll still be able to clothe and feed yourself, but that's a pretty low bar for an education. The more interesting question is, "How should we define Algebra in 2012 and how should we teach it?" Those questions don't even seem to be on Hacker's radar.

"Variable representation and manipulation" is a big part of programming, too. The connection between algebra and programming isn't accidental. Matthias Felleisen won the ACM's Outstanding Educator Award in 2010 for his long-term TeachScheme! project, which has now evolved into Program by Design. In his SIGCSE 2011 keynote address, Felleisen talked about the importance of a smooth progression of teaching languages. Another thing he said in that talk stuck with me. While talking about the programming that students learned, he argued that this material could be taught in high school right now, without displacing as much material as most people think. Why? Because "This is algebra."

Algebra in 2012 still rests fundamentally on variable representation and manipulation. How should we teach it? I agree with Felleisen. Programming.
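One way to make the connection concrete: an algebra student's "solve for x" is a programmer's function. A tiny hypothetical sketch:

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c for x: subtract b from both sides,
    then divide by a -- the same manipulation students do by hand."""
    return (c - b) / a

# 3x + 5 = 20  =>  x = 5
x = solve_linear(3, 5, 20)
```

The function *is* the generalized solution: the variables represent whole families of problems, which is exactly the "variable representation and manipulation" Meyer describes.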


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

July 23, 2012 3:14 PM

Letting Go of Old Strengths

Ward Cunningham commented on what it's like to be "an old guy who's still a programmer" in his recent Dr. Dobb's interview:

A lot of people think that you can't be old and be good, and that's not true. You just have to be willing to let go of the strengths that you had a year ago and get some new strengths this year. Because it does change fast, and if you're not willing to do that, then you're not really able to be a programmer.

That made me think of the last comment I made in my posts on JRubyConf:

There is a lot of stuff I don't know. I won't run out of things to read and learn and do for a long, long time.

This is an ongoing theme in the life of a programmer, in the life of a teacher, and the life of an academic: the choice we make each day between keeping up and settling down. Keeping up is a lot more fun, but it's work. If you aren't comfortable giving up what you were awesome at yesterday, it's even more painful. I've been lucky mostly to enjoy learning new stuff more than I've enjoyed knowing the old stuff. May you be so lucky.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

July 16, 2012 3:02 PM

Refactoring Everywhere: In Code and In Text

Charlie Stross is a sci-fi writer. Some of my friends have recommended his fiction, but I've not read any. In Writing a novel in Scrivener: lessons learned, he, well, describes what he has learned writing novels using Scrivener, an app for writers well known in the Mac OS X world.

I've used it before on several novels, notably ones where the plot got so gnarly and tangled up that I badly needed a tool for refactoring plot strands, but the novel I've finished, "Neptune's Brood", is the first one that was written from start to finish in Scrivener...

... It doesn't completely replace the word processor in my workflow, but it relegates it to a markup and proofing tool rather than being a central element of the process of creating a book. And that's about as major a change as the author's job has undergone since WYSIWYG word processing came along in the late 80s....

My suspicion is that if this sort of tool spreads, the long-term result may be better structured novels with fewer dangling plot threads and internal inconsistencies. But time will tell.

Stross's lessons don't all revolve around refactoring, but being able to manage and manipulate the structure of the evolving novel seems central to his satisfaction.

I've read a lot of novels that seemed like they could have used a little refactoring. I always figured it was just me.

The experience of writing anything in long form can probably be improved by a good refactoring tool. I know I find myself doing some pretty large refactorings when I'm working on the set of lecture notes for a course.

Programmers and computer scientists have the advantage of being more comfortable writing text in code, using tools such as LaTeX and Scribble, or homegrown systems. My sense, though, is that fewer programmers use tools like this, at least at full power, than might benefit from doing so.

Like Stross, I have a predisposition against using tools with proprietary data formats. I've never lost data stored in plaintext to version creep or application obsolescence. I do use apps such as VoodooPad for specific tasks, though I am keenly aware of the exit strategy (export to text or RTFD) and the pain trade-off at exit (the more VoodooPad docs I create, the more docs I have to remember to export before losing access to the app). One of the things I like most about MacJournal is that it's nothing but a veneer over a set of Unix directories and RTF documents. The flip side is that it can't do for me nearly what Scrivener can do.

Thinking about a prose writing tool that supports refactoring raises an obvious question: what sort of refactoring operations might it provide automatically? Some of the standard code refactorings might have natural analogues in writing, such as Extract Chapter or Inline Digression.

Thinking about automated support for refactoring raises another obvious question, the importance of which is surely as clear to novelists as to software developers: Where are the unit tests? How will we know we haven't broken the story?

I'm not being facetious. The biggest fear I have when I refactor a module of a course I teach is that I will break something somewhere down the line in the course. Your advice is welcome!


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

July 14, 2012 11:01 AM

"Most Happiness Comes From Friction"

Last time, I mentioned again the value in having students learn broadly across the sciences and humanities, including computer science. This is a challenge going in both directions. Most students like to concentrate on one area, for a lot of different reasons. Computer science looks intimidating to students in other majors, perhaps especially to the humanities-inclined.

There is hope. Earlier this year, the Harvard Magazine ran The Frisson of Friction, an essay by Sarah Zhang, a non-CS student who decided to take CS 50, Harvard's intro to computer science. Zhang tells the story of finding a thorny, semicolon-induced bug in a program (an extension for Google's Chrome browser) on the eve of her 21st birthday. Eventually, she succeeded. In retrospect, she writes:

Plenty of people could have coded the same extension more elegantly and in less time. I will never be as good a programmer as -- to set the standard absurdly high -- Mark Zuckerberg. But accomplishments can be measured in terms relative to ourselves, rather than to others. Rather than sticking to what we're already good at as the surest path to résumé-worthy achievements, we should see the value in novel challenges. How else will we discover possibilities that lie just beyond the visible horizon?

... Even the best birthday cake is no substitute for the deep satisfaction of accomplishing what we had previously deemed impossible -- whether it's writing a program or writing a play.

The essay addresses some of the issues that keep students from seeking out novel challenges, such as fear of low grades and fear of looking foolish. At places like Harvard, students who are used to succeeding find themselves boxed in by their friends' expectations, and their own, but those feelings are familiar to students at any school. Then you have advisors who subtly discourage venturing too far from the comfortable, out of their own unfamiliarity and fear. This is a social issue as big as any pedagogical challenge we face in trying to make introductory computer science more accessible to more people.

With work, we can help students feel the deep satisfaction that Zhang experienced. Overcoming challenges often leads to that feeling. She quotes a passage about programmers in Silicon Valley, who thrive on such challenges: "Most happiness probably comes from friction." Much satisfaction and happiness come out of the friction inherent in making things. Writing prose and writing programs share this characteristic.

Sharing the deep satisfaction of computer science is a problem with many facets. Those of us who know the satisfaction know it's a problem worth solving.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

July 05, 2012 2:48 PM

The Value of a Liberal Education, Advertising Edition

A few days ago, I mentioned James Webb Young's A Technique for Producing Ideas. It turns out that Young was in advertising. He writes:

The construction of an advertisement is the construction of a new pattern in this kaleidoscopic world in which we live. The more of the elements of that world which are stored away in that pattern-making machine, the mind, the more the chances are increased for the production of new and striking combinations, or ideas. Advertising students who get restless about the "practical" value of general college subjects might consider this.

Computer Science students, too.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

July 03, 2012 3:28 PM

A Little Zen, A Little Course Prep

I listened to about 3/4 of Zen and the Art of Motorcycle Maintenance on a long-ish drive recently. It's been a while since I've read the whole book, but I listen to it on tape once a year or so. It always gets my mind in the mood to think about learning to read, write, and debug programs.

This fall, I will be teaching our third course for the first time since I became head seven years ago. In that time, we changed the name of the course from "Object-Oriented Programming" to "Intermediate Computing". In many ways, the new name is an improvement. We want students in this course to learn a number of skills and tools in the service of writing larger programs. At a fundamental level, though, OOP remains the centerpiece of everything we do in the course.

As I listened to Pirsig make his way across the Great Plains, a few ideas stood out as I prepare to teach one of my favorite courses:

The importance of making your own thing, not just imitating others. This is always a challenge in programming courses, but for most people it is essential if we hope for students to maximize their learning. It underlies several other parts of Pirsig's zen and art, such as caring about our artifacts, and the desire to go beyond what something is to what it means.

The value of reading code, both good and bad. Even after only one year of programming, most students have begun to develop a nose for which is which, and nearly all have enough experience that they can figure out the difference with minimal interference from the instructor. If we can get them thinking about what features of a program make it good or bad, we can move on to the more important question: How can we write good programs? If we can get students to think about this, then they can see the "rules" we teach them for what they really are: guidelines, heuristics that point us in the direction of good code. They can learn the rules with an end in mind, and not as an end in themselves.

The value of grounding abstractions in everyday life. When we can ground our classwork in students' own experiences, they are better prepared to learn from it. Note that this may well involve undermining their naive ideas about how something works, or turning a conventional wisdom from their first year on its head. The key is to make what they see and do matter to them.

One idea remains fuzzy in my head but won't let me go. While defining the analytic method, Pirsig talks briefly about the difference between analysis by composition and analysis by function. Given that this course teaches object-oriented programming in Java, there are so many ways in which this distinction could matter: composition and inheritance, instance variables and methods, state and behavior. I'm not sure whether there is anything particularly useful in Pirsig's philosophical discussion of this, so I'll think some more about it.

I'm also thinking a bit about a non-Zen idea for the course: Mark Guzdial's method of worked examples and self-explanation. My courses usually include a few worked examples, but Mark has taken the idea to another level. More important, he pairs it with an explicit step in which students explain examples to themselves and others. This draws on results from research in CS education showing that learning and retention are improved when students explain something in their own words. I think this could be especially valuable in a course that asks students to learn a new style of writing code.

One final problem is on my mind right now, a more practical matter: a textbook for the course. When I last taught this course, I used Tim Budd's Understanding Object-Oriented Programming with Java. I have written in the past that I don't like textbooks much, but I always liked this book. I liked the previous multi-language incarnation of the book even more. Unfortunately, one of the purposes of this course is to have students learn Java reasonably well.

Also unfortunate is that Budd's OOP/Java book is now twelve years old. A lot has happened in the Java world in the meantime. Besides, as I found while looking for a compiler textbook last fall, the current asking price of over $120 seems steep -- especially for a CS textbook published in 2000!

So I persist in my quest. I'd love to find something that looks like it is from this century, perhaps even reflecting the impending evolution of the textbook we've all been anticipating. Short of that, I'm looking for a modern treatment of both OO principles and Java.

Of course, I'm a guy who still listens to books on tape, so take my sense of what's modern with a grain of salt.

As always, any pointers and suggestions are appreciated.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

June 26, 2012 4:23 PM

Adventures in Advising

Student brings me a proposed schedule for next semester.

Me: "Are you happy with this schedule?"

Student: "If I weren't, why would I have made it?"

All I can think is, "Boy, are you gonna have fun as a programmer."


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

June 20, 2012 4:54 PM

Becoming Engrossed

Moneyball author Michael Lewis gave the graduation address at his alma mater, Princeton, this spring. Unlike so many others, his address is short and to the point. He wants us to remember the role that luck and accident play in our lives, and not to assume that every time life presents us with a gift we deserve it. It's worth a quick read.

The teacher in me was struck by a line about something else. It appears in the background story that describes Lewis's own good fortune. As an undergrad, Lewis wrote his senior thesis under the direction of archaeologist William Childs, about whom he says:

God knows what Professor Childs actually thought of [my thesis], but he helped me to become engrossed.

"He helped me to become engrossed." What a fine compliment to pay a teacher.

It's not easy to help students become engrossed in a project, a topic, or a discipline. It requires skill. I think I'm pretty good at working with students who are already engrossed, but then so are a lot of people, I imagine. These students make it easy for us.

I want to get better at helping students become engrossed, to help light the new fire. I try all the time, and every once in a while I succeed. I'd like to be more reliable at it.

Whatever else goes into this skill, I'm pretty sure that connecting with students and their interests is usually a good first step, and that being curious myself is a good next step. I also think it's good for students to see me engrossed with a problem. In fact, being engrossed is almost certainly more important than being engrossing.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 19, 2012 3:04 PM

Basic Arithmetic, APL-Style, and Confident Problem Solvers

After writing last week about a cool array manipulation idiom, motivated by APL, I ran across another reference to "APL style" computation yesterday while catching up with weekend traffic on the Fundamentals of New Computing mailing list. And it was cool, too.

Consider the sort of multi-digit addition problem that we all spend a lot of time practicing as children:

        365
     +  366
     ------

The technique requires converting two-digit sums, such as 6 + 5 = 11 in the rightmost column, into a units digit and carrying the tens digit into the next column to the left. The process is straightforward but creates problems for many students. That's not too surprising, because there is a lot going on in a small space.

David Leibs described a technique, which he says he learned from something Kenneth Iverson wrote, that approaches the task of carrying somewhat differently. It takes advantage of the fact that a multi-digit number is a vector of digits times a corresponding vector of powers of ten.

First, we "spread the digits out" and add them, with no concern for overflow:

        3   6   5
     +  3   6   6
     ------------
        6  12  11

Then we normalize the result by shifting carries from right to left, "in fine APL style".

        6  12  11
        6  13   1
        7   3   1

According to Leibs, Iverson believed that this two-step approach was easier for people to get right. I don't know if he had any empirical evidence for the claim, but I can imagine why it might be true. The two-step approach separates into independent operations the tasks of addition and carrying, which are conflated in the conventional approach. Programmers call this separation of concerns, and it makes software easier to get right, too.
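The two-step approach translates directly into code. Here is a minimal Python sketch of the idea; the function names `add` and `normalize` are mine, not Iverson's. Element-wise summation handles the addition, and a separate right-to-left pass handles all the carrying.

```python
def normalize(digits):
    """Shift carries from right to left until every entry is a single digit."""
    digits = digits[:]                       # don't mutate the caller's list
    for i in range(len(digits) - 1, 0, -1):
        carry, digits[i] = divmod(digits[i], 10)
        digits[i - 1] += carry
    while digits[0] >= 10:                   # the leftmost entry may overflow, too
        carry, digits[0] = divmod(digits[0], 10)
        digits.insert(0, carry)
    return digits

def add(a, b):
    """Add two equal-length digit vectors: sum first, carry later."""
    spread = [x + y for x, y in zip(a, b)]   # [6, 12, 11] for 365 + 366
    return normalize(spread)

print(add([3, 6, 5], [3, 6, 6]))  # [7, 3, 1], i.e., 731
```

Notice how cleanly the separation of concerns shows up in the code: `add` knows nothing about carrying, and `normalize` knows nothing about addition.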

Multiplication can be handled in a conceptually similar way. First, we compute an outer product by building a digit-by-digit times table for the digits:

     +---+---------+
     |   |  3  6  6|
     +---+---------+
     | 3 |  9 18 18|
     | 6 | 18 36 36|
     | 5 | 15 30 30|
     +---+---------+

This is straightforward, simply an application of the basic facts that students memorize when they first learn multiplication.

Then we sum the diagonals running southwest to northeast, again with no concern for carrying:

     (9) (18+18) (18+36+15) (36+30) (30)
      9      36         69      66   30

In the traditional column-based approach, we do this implicitly when we add staggered columns of digits, only we have to worry about the carries at the same time -- and now the carry digit may be something other than one!

Finally, we normalize the resulting vector right to left, just as we did for addition:

         9  36  69  66  30
         9  36  69  69   0
         9  36  75   9   0
         9  43   5   9   0
        13   3   5   9   0
     1   3   3   5   9   0

Again, the three components of the solution are separated into independent tasks, enabling the student to focus on one task at a time, using for each a single, relatively straightforward operator.
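Multiplication follows the same pattern in code. Here is a Python sketch, again with illustrative names of my own choosing; the `normalize` carry pass is the same one that finishes the addition algorithm above.

```python
def normalize(digits):
    """Shift carries from right to left until every entry is a single digit."""
    digits = digits[:]
    for i in range(len(digits) - 1, 0, -1):
        carry, digits[i] = divmod(digits[i], 10)
        digits[i - 1] += carry
    while digits[0] >= 10:
        carry, digits[0] = divmod(digits[0], 10)
        digits.insert(0, carry)
    return digits

def multiply(a, b):
    """Multiply two digit vectors: outer product, diagonal sums, then carries."""
    table = [[x * y for y in b] for x in a]      # digit-by-digit times table
    diagonals = [0] * (len(a) + len(b) - 1)      # one slot per SW-NE diagonal
    for i, row in enumerate(table):
        for j, product in enumerate(row):
            diagonals[i + j] += product          # cell (i, j) lies on diagonal i+j
    return normalize(diagonals)

print(multiply([3, 6, 5], [3, 6, 6]))  # [1, 3, 3, 5, 9, 0], i.e., 133590
```

Each of the three steps in the hand technique is one clearly delimited phase of the function, which is exactly the point of the exercise.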

(Does this approach remind some of you of Cannon's algorithm for matrix multiplication in a two-dimensional mesh architecture?)

Of course, Iverson's APL was designed around vector operations such as these, so it includes operators that make implementing such algorithms as straightforward as the calculate-by-hand technique. Three or four Greek symbols and, voilà, you have a working program. If you are Dave Ungar, you are well on your way to a compiler!

the cover of High-Speed Math Self-Taught, by Lester Meyers

I have a great fondness for alternative ways to do arithmetic. One of the favorite things I ever got from my dad was a worn copy of Lester Meyers's High-Speed Math Self-Taught. I don't know how many hours I spent studying that book, practicing its techniques, and developing my own shortcuts. Many of these techniques have the same feel as the vector-based approaches to addition and multiplication: they seem to involve more steps, but the steps are simpler and easier to get right.

A good example of this I remember learning from High-Speed Math Self-Taught is a shortcut for multiplying a number by 12.5: first multiply by 100, then divide by 8. How can a multiplication and a division be faster than a single multiplication? Well, multiplying by 100 is trivial: just add two zeros to the number, or shift the decimal point two places to the right. The division that remains involves a single-digit divisor, which is much easier than multiplying by a three-digit number in the conventional way. The three-digit number even has its own decimal point, which complicates matters further!

To this day, I use shortcuts that Meyers taught me whenever I'm updating the balance in my checkbook register, calculating a tip in a restaurant, or doing any arithmetic that comes my way. Many people avoid such problems, but I seek them out, because I have fun playing with the numbers.

I am able to have fun in part because I don't have to worry too much about getting a wrong answer. The alternative technique allows me to work not only faster but also more accurately. Being able to work quickly and accurately is a great source of confidence. That's one reason I like the idea of teaching students alternative techniques that separate concerns and thus offer hope for making fewer mistakes. Confident students tend to learn and work faster, and they tend to enjoy learning more than students who are handcuffed by fear.

I don't know if anyone has tried teaching Iverson's APL-style basic arithmetic to children to see if it helps them learn faster or solve problems more accurately. Even if not, it is both a great demonstration of separation of concerns and a solid example of how thinking about a problem differently opens the door to a new kind of solution. That's a useful thing for programmers to learn.

~~~~

Postscript. If anyone has a pointer to a paper or book in which Iverson talks about this approach to arithmetic, I would love to hear from you.

IMAGE: the cover of Meyers's High-Speed Math Self-Taught, 1960. Source: OpenLibrary.org.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

May 30, 2012 4:11 PM

Learning About Classroom Teaching from Teaching On-Line

In his blog entry Recording a Class at Udacity, John Regehr offers some initial impressions on the process. For example, he liked the bird's eye view he had of the course as a whole over the compressed production schedule:

Recording for 8-12 hours a day was intense and left me fried, but on the other hand this has advantages. When spreading course prep across an entire semester, it's sometimes hard to see the big picture and there are often some unfortunate compromises due to work travel and paper deadlines.

But the following lesson stood out to me, due to my own experience learning how to teach:

... it became clear that designing good programming quizzes is one of the keys to turning lecture material into actual learning.

I think this is also true of traditional classroom teaching!

In the classroom, though, there are so many ways for us to fool ourselves. We tell a good story and feel good about it. Students seem to be paying attention, nodding their heads knowingly at what seem to be appropriate moments. That makes us feel good, too. Surely they are learning what we are teaching. Right?

In all honesty, we don't know. But we all feel good about the lecture, so we leave the room thinking learning has taken place.

On-line teaching has the advantage of not providing the same cues. Students may be sitting in their dorm rooms nodding their heads enthusiastically, or not. We may be connecting with everyone, or not. We can't see any of that, so it becomes necessary to punctuate our lecture -- now a sequence of mini-lectures -- and demos with quizzes. And the data speak truth.

The folks at Udacity have figured out that they can improve student learning by integrating listening and doing. Hurray!

Regehr suggests:

Tight integration between listening and hacking is one of the reasons that online learning will -- in some cases -- end up being superior to sitting in class.

I'll suggest something else: We should be doing that in our classrooms, too.

Rather than lecturing for fifty minutes and assuming (or hoping) that students are learning only by listening, we should spend time designing good programming activities and quizzes that lead students through the material and then integrate these tightly into a cycle of listening and doing.

This is an example of how teaching on-line may help some instructors become better classroom teachers. The constraints it imposes on teacher-student interaction cause us to pay closer attention to what is really happening with our students. That's a good thing.

As more courses move on-line, I think that we will all be re-learning and re-discovering many pedagogical patterns, instantiated in a new teaching environment. If that helps us to improve our classroom teaching, too, all the better.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 15, 2012 3:22 PM

What Teachers Make

Last night I attended my daughter's high school orchestra concert. (She plays violin.) Early on I found myself watching the conductor rather than the performers. He was really into the performance, as many conductors are. He's a good teacher and gets pretty good sound out of a few dozen teenagers. Surely he must be proud of their performance, and at least a little proud of his own.

Maybe it's just the end of another academic year, but my next thought was, "This concert will be over in an hour." My mind flashed to a movie from the 1990s, Mr. Holland's Opus. What does the conductor feel like when it's over? Is there a sense of emptiness? What does he think about, knowing that he'll be doing this all again next year, just as he did last year? The faces will change, and maybe the musical selections, but the rest will be eerily familiar.

Then it occurred to me: This is the plight of every teacher. It is mine.

Sometimes I envy people who make things for a living. They create something that people see and use. In the case of software, they may have the opportunity to grow their handiwork, to sustain it. It's tangible. It lasts, at least for a while.

Teachers live in a different world. I think about my own situation, teaching one class a semester, usually in our upper division. Every couple of years, I see a new group of students. I have each of them in class once or twice, maybe even a few times. Then May comes, and they graduate.

To the extent that I create anything, it resides in someone else. In this way, being a teacher is less like being a writer or a creator and more like being a gardener. We help prepare others to make and do.

Like gardeners, we plant seeds. Some fall on good soil and flourish. Some fall on rocks and die. Sometimes, you don't even know which is which; you find out only years later. I have been surprised in both ways, more often pleasantly than not.

Sure, we build things, too. We CS profs write software. We university profs build research programs. These are tangible products, and they last for a while.

(We administrators create documents and spreadsheets. Let's not go there.)

But these products are not our primary task, at least not at schools like mine. It is teaching. We help students exercise their minds and grow their skills. If we are lucky, we change them in ways that go beyond our particular disciplines.

There is a different sort of rhythm to being a teacher than to being a maker. You need to be able to delay gratification, while enjoying the connections you make with students and ideas. That's one reason it's not so easy for just anyone to be a teacher, at least not for an entire career. My daughter's orchestra teacher seems to have that rhythm. I have been finding this rhythm over the course of my career, without quite losing the desire also to make things I can touch and use and share.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

May 14, 2012 3:26 PM

Lessons Learned from Another Iteration of the Compiler Course

I am putting the wrap on spring semester, so that I can get down to summer duties and prep for fall teaching. Here are a few lessons I learned this spring.

•  A while back, I wrote briefly about re-learning the value of a small source language for the course. If I want to add a construct or concept, then I need also to subtract a corresponding load from the language. In order to include imperative features, I need to eliminate recursive functions, or perhaps eliminate functions altogether. Eliminating recursion may be sufficient, as branching to a function is not much more complex than branching in loops. It is the stack of activation records that seems to slow down most students.

•  Using a free on-line textbook worked out okay. The main problem was that this particular book contained less implementation detail than books we have used in the past, such as Louden, and that hurt the students. We used Louden's TM assembly language and simulator, and the support I gave them for that stage of the compiler in particular was insufficient. The VM and assembly language themselves are simple enough, but students wanted more detailed examples of a code generator than I gave them.

•  I need to teach code generation better. I felt that way as the end of the course approached, and then several students suggested as much in our final session review. This is the most salient lesson I take from this iteration of the course.

I'm not sure at this moment if I need only to do a better job of explaining the process or if I need a different approach to the task more generally. That's something I'll need to think about between now and next time. I do think that I need to show them how to implement function calls in a bit more detail. Perhaps we could spend more time in class with statically-allocated activation records, and then let the students extend those ideas for a run-time stack and recursion.

•  For the first time ever, a few students suggested that I require something simpler than a table-driven parser. Of course, I can address several issues with parsing and code generation by using scaffolding: parser generators, code-generation frameworks and the like. But I still prefer that students write a compiler from scratch, even if only a modest one. There is something powerful in making something from scratch. A table-driven parser is a nice blend of simplicity (in algorithm) and complexity (in data) for learning how compilers really work.

I realize that I have to draw the abstraction line somewhere, and even after several offerings of the course I'm still willing to draw it there. To make that work as well as possible, I may have to improve other parts of the course.

•  Another student suggestion that seems spot-on is that, as we learn each stage of the compiler, we take some time to focus on specific design decisions that the teams will have to make. This will allow them, as they said in their write-ups, "to make informed decisions". I do try to introduce key implementation decisions that they face and offer advice on how to proceed. Clearly I can do better. One way, I think, is to connect more directly with the programming styles they are working in.

~~~~

As usual, the students recognized some of the same shortcomings of the course that I noticed and suggested a couple more that had not occurred to me. I'm always glad I ask for their feedback, both open and anonymous. They are an indispensable source of information about the course.

Writing your first compiler is a big challenge. I can't help but recall something writer Samuel Delany said when asked "if it was fun" to write a set of novellas on the order of Eliot's The Waste Land, Pound's The Cantos, and Joyce's Ulysses:

No, not at all. ... But at least now, when somebody asks, "I wonder if Joyce could have done all the things he's supposed to have done in Ulysses," I can answer, "Yes, he could have. I know, because I tried it myself. It's possible."

Whatever other virtues there are in learning to write a compiler, it is valuable for computer science students to take on big challenges and know that it is possible to meet the challenge, because they have tried it themselves.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

May 08, 2012 3:22 PM

Quality and Quantity, Thoroughbred Edition

I'll Have Another was not highly sought after as a yearling, when he was purchased for the relatively small sum of $11,000.

On Saturday, I'll Have Another rallied down the stretch to win the 2012 Kentucky Derby, passing Bodemeister, one of the race favorites that had led impressively from the gate. Afterward, a television commentator asked the horse's trainer, "What did you and the owner see in the horse way back that made you want to buy it?" The trainer's answer was unusually honest. He said something to this effect:

We buy a lot of horses. Some work out, and some don't. There is a lot of luck involved. You do the right things and see what happens.

This is as good an example as I've heard in a while of the relationship between quantity and quality, which my memory often connects with stories from the book Art and Fear. People are way too fond of mythologizing successes and then romanticizing the processes that lead to them. In most vocations and most avocations, the best way to succeed is to do the right things, to work hard, be unlucky a lot, and occasionally get lucky.

This mindset does not diminish the value of hard work and good practices. No, it exalts their value. What it diminishes is our sense of control over outcomes in a complex world. Do your best and you will get better. Just keep in mind that we often have a lot less control over success and failure than our mythology tends to tell us.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

May 07, 2012 3:21 PM

The University as a Gym for the Mind

In recent years, it is becoming even more common for people to think of students as "paying customers" at the university. People inside of universities, especially the teachers, have long tried to discourage this way of thinking, but it is becoming much harder to make the case. Students and parents are being required to shoulder an ever larger share of the bill for higher education, and with that comes a sense of ownership. Still, educators can't help but worry. The customer isn't always right.

Rob Miles relates a story that might help us make the case:

the gym: a university for the body

You can join a gym to get fit, but just joining doesn't make you fit. It simply gives you access to machinery and expertise that you can use to get fit. If you fail to listen to the trainer or make use of the equipment then you don't get a better body, you just get poorer.

You can buy all the running shoes you like, but if you never lace them up and hit the road, you won't become a runner.

I like this analogy. It also puts into perspective a relatively recent phenomenon, the assertion that software developers may not need a university education. Think about such an assertion in the context of physical fitness:

A lot of people manage to get in shape physically without joining a gym. To do so, all you need is the gumption (1) to learn what you need to do and (2) to develop and stick to a plan. For example, there is a lot of community support among runners, who are willing to help beginners get started. As runners become part of the community, they find opportunities to train in groups, share experiences, and run races together. The result is an informal education as good as most people could get by paying a trainer at a gym.

The internet and the web have provided the technology to support the same sort of informal education in software development. Blogs, user groups, codeathons, and GitHub all offer the novice opportunities to get started, "train" in groups, share experiences, and work together. With some gumption and hard work, a person can become a pretty good developer on his or her own.

But it takes a lot of initiative. Not all people who want to get in shape are ready or able to take control of their own training. A gym serves the useful purpose of getting them started. But each person has to do his or her own hard work.

Likewise, not all learners are ready to manage their own educations and professional development -- especially at age 18, when they come out of a K-12 system that can't always prepare them to be completely independent learners. Like a gym, a university serves the useful purpose of helping such people get started. And just as important, as at the gym, students have to do their own hard work to learn, and to prepare to learn on their own for the rest of their careers.

Of course, other benefits may get lost when students bypass the university. I am still idealistic enough to think that a liberal education, even a liberal arts education, has great value for all people. [ 1 | 2 | 3 ]. We are more than workers in an economic engine. We are human beings with a purpose larger than our earning potentials.

But the economic realities of education these days and the concurrent unbundling of education made possible by technology mean that we will have to deal with issues such as these more and more in the coming years. In any case, perhaps a new analogy might help us help people outside the university understand better the kind of "customer" our students need to be.

(Thanks to Alfred Thompson for the link to Miles's post.)


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 05, 2012 11:53 AM

On the Virtues of a Small Source Language in the Compiler Course

I have not finished grading my students' compilers yet. I haven't even looked at their public comments about the project. (Anonymous feedback comes later in the summer when course assessment data arrives.) Still, one lesson has risen to the surface:

Keep the source language small. No, really.

I long ago learned the importance of assigning a source language small enough to be scanned, parsed, and translated completely in a single semester. Over the years, I had pared the languages I assigned down to the bare essentials. That leaves a small language, one that creates some fun programming challenges. But it's a language that students can master in fifteen weeks.

My students this term were all pretty good programmers, and I am a weak man. So I gave in to the temptation to add just a few more features to the language, to make it a bit more interesting for my students: variables, an assignment statement, a sequence construct, and a single loop form. It was as if I had learned nothing from all my years teaching this course.

The effect of processing a larger language manifested itself in an expected way: the more students have to do, the more likely that they won't get it all done. This affected a couple of the teams more than the others, but it wasn't so bad. It meant that some teams didn't get as far along with function calls and recursion as we had hoped. Getting a decent subset of such a language running is still an accomplishment for students.

But the effect of processing a larger language manifested itself in a way I did not expect, too, one more detrimental to student progress: a "hit or miss" quality to the correctness of their implementations. One team had function calls mostly working, but not recursion. Another team had tail recursion mostly working(!), but ordinary function calls didn't work well. One team had local vars working fine but not global variables, while most teams knocked out globals early and, if they struggled at all, it was with locals.

The extra syntactic complexity in the language created a different sort of problems for the teams.

A single new language feature doesn't seem like much in isolation, but it interacts with all the existing features and all the other new features to create a much more complex language for the students to understand and for the parser to recognize and get right. Sure, our language had regular tokens and a context-free grammar, which localizes the information the scanner and parser need to see in order to do their jobs. Like all of us, though, students make errors when writing their code. In the more complex space, it is harder to track down the root cause of an error, especially when multiple errors are present and complicate the search. (Or should I say complect?)

This is an important lesson in language design more generally, especially for a language aimed at beginners. But it also stands out when a compiler for the language is being written by beginning compiler writers.

I am chastened and will return to the True Path of Small Language the next time I teach this course.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

April 24, 2012 4:55 PM

Recursive Discussions about Recursion

The SIGCSE listserv has erupted today with its seemingly annual discussion of teaching recursion. I wrote about one of the previous discussions a couple of years ago. This year's conversation has actually included a couple of nice ideas, so it was worth following along.

Along the way, one prof commented on an approach he has seen used to introduce students to recursion, often in a data structures course. First you cover factorial, gcd, and the Fibonacci sequence. Then you cover the Towers of Hanoi and binary search. Unfortunately, such an approach is all too common. The poster's wistful analysis:

Of the five problems, only one (binary search) is a problem students might actually want to solve. Only two (Fibonacci and Hanoi) are substantially clearer in recursive than iterative form, and both of them take exponential time. In other words, recursion is a confusing way to solve problems you don't care about, extremely slowly.

Which, frankly, I think is the lesson [some CS profs] want to convey.

And this on a day when I talked with my compiler students about how a compiler can transform many recursive programs into iterative ones, and even eliminate the cost of a non-recursive function call when it is in a tail position.
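To make that transformation concrete, here is a small sketch of my own (an invented example, not code from the course): a tail-recursive sum, and the loop a compiler could produce from it by reusing the stack frame instead of making the call.

```java
public class TailCallDemo {
    // Tail-recursive form: the recursive call is the last action taken.
    static long sumTo(long n, long acc) {
        if (n == 0) return acc;
        return sumTo(n - 1, acc + n);   // call in tail position
    }

    // The iterative form a compiler could produce: update the
    // parameters in place and jump back to the top of the function.
    static long sumToIter(long n, long acc) {
        while (n != 0) {
            acc = acc + n;
            n = n - 1;
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(sumTo(100, 0));      // 5050
        System.out.println(sumToIter(100, 0));  // 5050
    }
}
```

The two functions compute the same value; the second simply never grows the stack.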

The quoted passage contains my favorite line of the week thus far: In other words, recursion is a confusing way to solve problems you don't care about, extremely slowly. If that's not the message you want to convey to your students, then please don't introduce them to recursion in this way. If that is the message you want to send to your students, then I am sad for you, but mostly for your students.

I sometimes wonder about the experiences that some computer scientists bring to the classroom. It only takes a little language processing to grok the value of recursion. And a data structures course is a perfectly good place for students to see and do a little language processing. Syntax, abstract or not, is a great computer science example of trees. And students get to learn a little more CS to boot!
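For instance, a few lines suffice to define a tiny expression language as a tree and evaluate it with structural recursion. This is a hypothetical example of my own, not drawn from any particular data structures text:

```java
// Arithmetic expressions as a tree, evaluated recursively.
interface Expr { int eval(); }

record Num(int value) implements Expr {
    public int eval() { return value; }
}

record Add(Expr left, Expr right) implements Expr {
    public int eval() { return left.eval() + right.eval(); }
}

record Mul(Expr left, Expr right) implements Expr {
    public int eval() { return left.eval() * right.eval(); }
}

public class LittleLanguage {
    public static void main(String[] args) {
        // (2 + 3) * 4 -- the recursion mirrors the shape of the tree.
        Expr e = new Mul(new Add(new Num(2), new Num(3)), new Num(4));
        System.out.println(e.eval());   // 20
    }
}
```

Students who see recursion this way meet it where it is natural: the data is recursive, so the code is, too.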


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 07, 2012 7:08 PM

Teaching the Perfect Class, Again

Some days, I walk out of the classroom thinking, "Well, that didn't go the way I wanted." I'm aware of everything I did wrong, and my mind begins swirling with ideas for next time -- both the next class session and the next time I teach the course. I won't make the same mistakes again, or so I think.

Other days are just the opposite. The stars aligned, and the session seemed to have gone perfectly. My questions provoked discussion. My examples caused every head to nod in appreciation. My jokes brought laughs, not eye rolls.

Ironically, those days are harder to follow. There is a temptation to try to repeat perfection, but that rarely works. Whenever I try, my timing seems off. When my questions, examples, and jokes don't elicit the same responses as the first time, I am confused. I end up trying too hard.

Teachers aren't the only people who face this problem. In this article about the science of music meeting the mind, Yo-Yo Ma describes why there are no immutable laws for expressive performance:

"Every day I'm a slightly different person," Mr. Ma said. "The instrument, which is sensitive to weather and humidity changes, will act differently. There's nothing worse than playing a really a great concert and the next day saying, 'I'm going to do exactly the same thing.' It always falls flat."

Most of the time, it is easier to fix broken things than it is to improve on good ones. Brokenness gives us cues about what to do next. Wholeness doesn't. Trying to repeat perfection in a world that always changes usually leaves us dissatisfied.

So: Treat each performance, each class session, as a chance to create, not maintain. Use ideas that have worked in the past, of course, but use them to create something new, not to try to re-create something that no longer exists.

Fortunately for me, I have far more imperfect days than perfect ones.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

March 25, 2012 11:18 AM

The Relationship at the Center of the Classroom

Sometimes, people think that the most important relationship in a classroom is the one between the student and the teacher. But it's not. The most important relationship in a classroom is the one between the student and the ideas that make up the course.

The teacher's job isn't to tell the student what to think. It is more the role of a matchmaker: to introduce the student to the ideas and to stir up interest. To stoke the fire and keep it burning when interest wanes. And to throw a log on the fire occasionally so that the young love can grow bigger and stronger.

The center of it all is the relationship between the student and the ideas, the discipline. The teacher's most important lasting effect is in the strength of that relationship.

Students who don't understand this never seem to realize that it is their work and their interest that make learning possible. Ultimately, students make a course successful, or not.

Teachers who don't understand this, or who forget over the course of a career, are easily disillusioned. Sometimes they think their most important job is to relay more knowledge to their students. It's usually more important to provoke the students to engage a few powerful ideas and let them seek out what they need when they need it.

Other times, teachers come almost to depend on their relationship with the students. They feel empty when the connection seems lacking. In most such cases, the best way to solve that problem isn't to work on the teacher-student connection, but to work on the connection between students and ideas. Ironically, the best way to improve that connection is often for teachers to reinvigorate their interest and passion in the ideas they think about and teach.

Occasionally something more happens. Teachers and students become fellow travelers on a journey, and that fellowship can outlast a single course or a four-year program of study. Sometimes, lifelong friendships develop. But that relationship is distinct from what happens between student, teacher, and ideas.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

March 18, 2012 5:20 PM

Thinking Out Loud about the Compiler in a Pure OO World

John Cook pokes fun at OO practice in his blog post today. The "Obviously a vast improvement." comment deftly ignores the potential source of OOP's benefits, but then that's the key to the joke.

A commenter points to a blog entry by Smalltalk veteran Travis Griggs. I agree with Griggs's recommendation to avoid using verbs-turned-into-nouns as objects, especially lame placeholder words such as "manager" and "loader". As he says, they usually denote objects that fiddle with the private parts of other objects. Those other objects should be providing the services we need.

Griggs allows reasonable linguistic exceptions to the advice. But he also acknowledges the pull of pop culture which, given my teaching assignment this semester, jumped out at me:

There are many 'er' words that despite their focus on what they do, have become so commonplace, that we're best to just stick with them, at least in part. Parser. Compiler. Browser.

I've thought about this break in my own OO discipline before, and now I'm thinking about it again. What would it be like to write compilers without creating parsers and code generators -- and compilers themselves -- as objects?

We could ask a program to compile itself:

     program.compileTo( targetMachine )

But is the program a program, or does it start life as a text file? If the program starts as a text file, perhaps we say

     program.parse()

to create an abstract syntax tree, which we then could ask

     ast.compileTo( targetMachine )

(Instead of sending a parse() message, we might send an asAbstractSyntax() message. There may be no functional difference, but I think the two names indicate subtle differences in mindset.)

When my students write their compilers in Java or another OO language, we discuss in class whether abstract syntax trees ought to be able to generate target code for themselves. The danger lies in binding the AST class to the details of a specific target machine. We can separate the details of the target machine from the AST by passing an argument with the compileTo() message, but what should that argument be?

Given all the other things we have to learn in the course, my students usually end up following Griggs's advice and doing the traditional thing: pass the AST as an argument to a CodeGenerator object. If we had more time, or a more intensive OO design course prior to the compilers course, we could look at techniques that enable a more OO approach without making things worse in the process.
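One way to sketch the more OO alternative, with invented names and no claim that this is what we build in class: the AST keeps its compileTo() message, so the tree drives its own traversal, while the machine-specific knowledge lives in a generator object passed as the argument.

```java
import java.util.ArrayList;
import java.util.List;

// The generator knows the target machine; the AST does not.
interface CodeGenerator {
    void emitPush(int value);
    void emitAdd();
}

class StackMachineGenerator implements CodeGenerator {
    final List<String> code = new ArrayList<>();
    public void emitPush(int value) { code.add("PUSH " + value); }
    public void emitAdd()           { code.add("ADD"); }
}

// Each AST node compiles itself by sending messages to the generator.
interface Ast { void compileTo(CodeGenerator target); }

record Literal(int value) implements Ast {
    public void compileTo(CodeGenerator target) { target.emitPush(value); }
}

record Sum(Ast left, Ast right) implements Ast {
    public void compileTo(CodeGenerator target) {
        left.compileTo(target);
        right.compileTo(target);
        target.emitAdd();
    }
}

public class CompileToDemo {
    public static void main(String[] args) {
        StackMachineGenerator gen = new StackMachineGenerator();
        new Sum(new Literal(1), new Literal(2)).compileTo(gen);
        System.out.println(gen.code);   // [PUSH 1, PUSH 2, ADD]
    }
}
```

Retargeting the compiler then means writing a new CodeGenerator, not touching the AST classes.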

Looking back farther to the parse behavior, would it ever make sense to send an argument with the parse() message? Perhaps a parse table for an LL(1) or LR(1) algorithm? Or the parsing algorithm itself, as a strategy object? We quickly run the risk of taking steps in the direction that Cook joshes about in his post.

Or perhaps parsing is a natural result of creating a Program object from a piece of text. In that approach, when we say

     Program source = new Program( textfile );

the internal state of source is an abstract syntax tree. This may sound strange at first, but a program isn't really a piece of text. It's just that we are conditioned to think that way by the languages and tools most of us learn first and use most heavily. Smalltalk taught us long ago that this viewpoint is not necessary. (Lisp, too, though in a different way.)

These are just some disconnected thoughts on a Sunday afternoon. There is plenty more to say, and plenty of prior art. I think I'll fire up a Squeak image sometime soon and spend some time reacquainting myself with its take on parsing and code generation, in particular the way it compiles its core out to C.

I like doing this kind of "what if?" thinking. It's fun to explore along the boundary between the land where OOP works naturally and the murky territory where it doesn't seem to fit so well. That's a great place to learn new stuff.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

March 13, 2012 8:00 PM

The Writer's Mindset for Programmers

Several people have pointed out that these tips on writing from John Steinbeck are useful for programmers; Chris Freeman even mapped them to writing code. I like to make such connections, most recently to the work of Roger Rosenblatt (in several entries, including The Summer Smalltalk Taught Me OOP) and John McPhee, a master of non-fiction (in an entry on writing, teaching, and programming). Lately I have been reading about and from David Foster Wallace, as I wrote a few weeks ago. Several quotes from interviews he gave in the 1990s and 2000s reminded me of programming, both doing it and learning to do it.

The first ties directly into the theme from the entry on my summer of Smalltalk. As Wallace became more adept at making the extensive cuts to his wide-ranging stories suggested by his editor, he adopted a familiar habit:

Eventually, he learned to erase passages that he liked from his hard drive, in order to keep himself from putting them back in.

It's one thing to kill your darlings. It's another altogether to keep them from sneaking back in. In writing as in programming, sometimes rm -r *.* is your friend.

A major theme in Wallace's work -- and life -- was the struggle not to fall into the comfortable patterns of thought engendered by the culture around us. The danger is, those comfortable ruts separate us from what is real:

Most 'familiarity' is mediated and delusive.

Programmers need to keep this in mind when they set out to learn a new programming language or a new style of programming. We tend to prefer the familiar, whether it is syntax or programming model. Yet familiarity is conditioned by so many things, most prominently recent experience. It deludes us into thinking some things are easier or better than others, often for no other reason than the accident of history that brought us to a particular language or style first. When we look past the experience that gets in the way, we enable ourselves to appreciate the new thing as it is, not as the lens of our experience distorts it.

Of course, that's easier said than done. This struggle consumed Wallace the writer his entire life.

Even so, we don't want to make the mistake of floating along the surface of language and style. Sometimes, we think that makes us free to explore all ideas unencumbered by commitment to any particular syntax, programming model, or development methodology. But it is in committing our energies and thinking to specific tools, to a specific cadre of languages, and to particular styles that we enable ourselves to create, to do something useful:

If I wanted to matter -- even just to myself -- I would have to be less free, by deciding to choose in some kind of definite way.

This line is a climactic revelation of the protagonist in Wallace's posthumously published unfinished novel, The Pale King. It reminds us that freedom is not always so free.

It is much more important for a programmer to be a serial monogamist than a confirmed bachelor. Digging deep into language and style is what makes us stronger, whatever language or style we happen to work in at any point in time. Letting comfortable familiarity mediate our future experiences is simply a way of enslaving ourselves to the past.

In the end, reading Wallace's work and the interviews he gave shows us again that writers and programmers have a lot in common. Even after we throw away all the analogies between our practices, processes, and goals, we are left with an essential identity that we programmers share with our fellow writers:

Writing fiction takes me out of time. That's probably as close to immortal as we'll ever get.

Wallace said this in the first interview he gave after the publication of his first novel. It is a feeling I know well, and one I never want to live without.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

March 09, 2012 3:33 PM

This and That from Douglas Hofstadter's Visit

Update: In the original, I conflated two quotes in
"Food and Hygiene". I have un-conflated them.

In addition to his lecture on Gödel's incompleteness theorem, Douglas Hofstadter spent a second day on campus, leading a seminar and giving another public talk. I'll blog on those soon. In the meantime, here are a few random stories I heard and impressions I formed over the two days.

The Value of Good Names.   Hofstadter told a story about his "favorite chapter on Galois theory" (don't we all have one?), from a classic book that all the mathematicians in the room recognized. The only thing Hofstadter didn't like about this chapter was that it referred to theorems by number, and he could never remember which theorem was which. That made an otherwise good text harder to follow than it needed to be.

In contrast, he said, was a book by Galyan that gave each theorem a name, a short phrase evocative of what the theorem meant. So much better for the reader! So he gave his students one semester an exercise to make his favorite chapter better: they were to give each of the numbered theorems in the chapter an evocative name.

This story made me think of my favorite AI textbook, Patrick Henry Winston's Artificial Intelligence. Winston's book stands out from the other AI books as quirky. He uses his own vocabulary and teaches topics very much in the MIT AI fashion. But he also gives evocative names to many of the big ideas he wants us to learn, among them the representation principle, the principle of least commitment, the diversity principle, and the eponymous "Winston's principle of parallel evolution". My favorite of all is the convergent intelligence principle:

The world manifests constraints and regularities. If a computer is to exhibit intelligence, it must exploit those constraints and regularities, no matter of what the computer happens to be made.

To me, that is AI.

Food and Hygiene.   The propensity of mathematicians to make their work harder for other people to understand, even other mathematicians, reminded Doug Shaw of two passages, from famed mathematicians Gian-Carlo Rota and André Weil. Rota said that we must guard ... against confusing the presentation of mathematics with the content of mathematics. More colorfully, Weil cautioned [If] logic is the hygiene of the mathematician, it is not his source of food. Theorems, proofs, and Greek symbols are mathematical hygiene. Pictures, problems, and understanding are food.

A Good Gig, If You Can Get It.   Hofstadter holds a university-level appointment at Indiana, and his research on human thought and the fluidity of concepts is wide enough to include everything under the sun. Last semester, he taught a course on The Catcher in the Rye. He and his students read the book aloud and discussed what makes it great. Very cool.

If You Need a Research Project...   At some time in the past, Hofstadter read, in a book or article about translating natural language into formal logic, that 'but' is simply a trivial alternative to 'and' and so can be represented as such. "Nonsense", he said! 'but' embodies all the complexity of human thought. "If we could write a program that could use 'but' correctly, we would have accomplished something impressive."

Dissatisfied.   Hofstadter uses that word a lot in conversation, or words like it, such as 'unsatisfying'. He does not express the sentiment in a whiny way. He says it in a curious way. His tone always indicates a desire to understand something better, to go deeper to the core of the question. That's a sign of a good researcher and a deep thinker.

~~~~

Let's just say that this was a great treat. Thanks to Dr. Hofstadter for sharing so much time with us here.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

February 19, 2012 12:17 PM

The Polymorphism Challenge

Back at SIGCSE 2005, Joe Bergin and I ran a workshop called The Polymorphism Challenge that I mentioned at the time but never elaborated on. It's been on my mind again for the last week. First I saw a link to an OOP challenge aimed at helping programmers move toward OO's ideal of small classes and short methods. Then Kent Beck tweeted about the Anti-IF Campaign, which, as its name suggests, wants to help programmers "avoid dangerous ifs and use objects to build a code that is flexible, changeable, and easily testable".

That was the goal of The Polymorphism Challenge. I decided it was time to write up the challenge and make our workshop materials available to everyone.

Background

Beginning in the mid-1990s, Joe and I have been part of a cabal of CS educators trying to teach object-oriented programming style better. We saw dynamic polymorphism as one of the key advantages to be found in OOP. Getting procedural programmers to embrace it, including many university instructors, was a big challenge.

At ChiliPLoP 2003, our group was riffing on the idea of extreme refactoring, during which Joe and I created a couple of contrived examples eliminating if statements from a specific part of Karel the Robot that seemed to require them.

This led Joe to propose a programming exercise he called an étude, similar to what these days are called katas, which I summarized in Practice for Practice's Sake:

Write a particular program with a budget of n if-statements or fewer, for some small value of n. Forcing oneself to not use an if statement wherever it feels comfortable forces the programmer to confront how choices can be made at run-time, and how polymorphism in the program can do the job. The goal isn't necessarily to create an application to keep and use. Indeed, if n is small enough and the task challenging enough, the resulting program may well be stilted beyond all maintainability. But in writing it the programmer may learn something about polymorphism and when it should be used.

Motivated by the Three Bears pattern, Joe and I went a step further. Perhaps the best way to know that you don't need if-statements everywhere is not to use them anywhere. Turn the dials to 11 and make 'em all go away! Thus was born the challenge, as a workshop for CS educators at SIGCSE 2005. We think it is useful for all programmers. Below are the materials we used to run the workshop, with only light editing.

Task

Working in pairs, you will write (or re-write) simple but complete programs that would normally use if statements, removing all selection structures in favor of polymorphism.

Objective

The purpose of this exercise is not to demonstrate that if statements are bad, but that they aren't necessary. Once you can program effectively this way, you have a better perspective from which to choose the right tool. It is directed at skilled procedural programmers, not at novices.

Rules

You should attempt to build the solutions to one of the challenge problems without using if statements or the equivalent.

You may use the libraries arbitrarily, even when you are pretty sure that they are implemented with if statements.

You may use exceptions only when really needed and not as a substitute for if statements. Similarly, while loops are not to be used to simulate if statements. Your problems should be solved with polymorphism: dynamic and parameterized.

Note that if you use (for example) a hash map and the program cannot control what is used as a key in a get (user input, for example), then it might return null. You are allowed to use an exception to handle this situation, or even an if. If you can't get all if statements out of your code, then see how few you really need to live with.

Challenges

Participants worked in pairs. They had a choice of programming scenarios, some of which were adapted from work by others.

This pdf file contains the full descriptions given to participants, including some we did not try with workshop participants. If you come up with a good scenario for this challenge, or variations on ours, please let me know.

Hints

When participants hit a block and asked for pointers, we offered hints of various kinds, such as:

•  When you have two behaviors, put them into different objects. The objects can be created from the same class or from related classes. If they are from the same class, you may want to use parameterized polymorphism. When the classes are different, you can use dynamic polymorphism. This is the easy step. Java interfaces are your friend here.

•  When you have the behaviors in different objects, find a way to bring the right object into play at the right time. That way, you don't need to use ad hoc methods to distinguish among them. This is the hard part. Sometimes you can develop a state change diagram that makes it easier. Then you can replace one object with another at the well-defined state change points.

•  Note that you can eliminate a lot of if statements by capturing early decisions -- perhaps made using if statements -- in different objects. These objects can then act as "flags with behavior" when they are passed around the program. The flag object then encapsulates the earlier decision. Now try to capture those decisions without if statements.

(Note that this technique alone is a big win in improving the maintainability of code. You replace many if statements spread throughout a program with one, giving you a single point of change in the future.)
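Here is a minimal sketch of such a "flag with behavior", with hypothetical names of my own. The decision about output mode is made once, captured in an object, and from then on the rest of the program just sends messages:

```java
// One early decision becomes an object; later uses are polymorphic.
interface OutputMode {
    String format(String message);
}

class Verbose implements OutputMode {
    public String format(String message) { return "[detail] " + message; }
}

class Quiet implements OutputMode {
    public String format(String message) { return message; }
}

public class FlagWithBehavior {
    public static void main(String[] args) {
        // The single early decision (here, from a command-line flag)...
        OutputMode mode = (args.length > 0 && args[0].equals("-v"))
                          ? new Verbose() : new Quiet();
        // ...then no further selection is needed anywhere downstream.
        System.out.println(mode.format("compilation finished"));
    }
}
```

The one remaining conditional is the single point of change the hint promises.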

•  Delegation from one object to another is a real help in this exercise. This leads you to the Strategy design pattern. An object M can carry with it another, S, that encapsulates the strategy M has for solving a problem. To perform the associated behavior, M delegates to S. By changing the strategy object, you can change the behavior of the object that carries it. M seems to behave polymorphically, but it is really S that does the work.
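A bare-bones Strategy sketch, with invented names: the carrier object M delegates to its strategy S, and swapping S changes M's behavior without any if.

```java
// S: the strategy interface and two interchangeable implementations.
interface TieBreaker { int choose(int a, int b); }

class PreferLarger  implements TieBreaker {
    public int choose(int a, int b) { return Math.max(a, b); }
}
class PreferSmaller implements TieBreaker {
    public int choose(int a, int b) { return Math.min(a, b); }
}

// M: carries a strategy and delegates the real work to it.
class Chooser {
    private TieBreaker strategy;
    Chooser(TieBreaker strategy) { this.strategy = strategy; }
    void setStrategy(TieBreaker s) { this.strategy = s; }
    int pick(int a, int b) { return strategy.choose(a, b); }
}

public class StrategyDemo {
    public static void main(String[] args) {
        Chooser c = new Chooser(new PreferLarger());
        System.out.println(c.pick(3, 5));   // 5
        c.setStrategy(new PreferSmaller());
        System.out.println(c.pick(3, 5));   // 3
    }
}
```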

•  You can modify or enhance strategies using the Decorator design pattern. A decorator D implements the same interface as the thing it decorates, M. When sent a message, the decorator performs some action and also sends the same message to the object it decorates. Thus the behavior of M is executed, but also more. Note that D can provide additional functionality both before and after sending the message to M. A functional method can return quite a different result when sent through a decorated object.
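A Decorator sketch in the same spirit (again, hypothetical names): the decorator implements the same interface as the thing it wraps, does some work of its own, and forwards the message.

```java
interface Greeter { String greet(String name); }

class PlainGreeter implements Greeter {
    public String greet(String name) { return "hello, " + name; }
}

// D wraps an M with the same interface and adds behavior around it.
class ShoutingGreeter implements Greeter {
    private final Greeter inner;
    ShoutingGreeter(Greeter inner) { this.inner = inner; }
    public String greet(String name) {
        // work after forwarding: transform the decorated object's result
        return inner.greet(name).toUpperCase() + "!";
    }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        Greeter g = new ShoutingGreeter(new PlainGreeter());
        System.out.println(g.greet("world"));   // HELLO, WORLD!
    }
}
```

As the hint says, a functional method can return quite a different result when sent through the decorated object, yet no caller needed to change.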

•  You can often choose strategies or other objects that encapsulate decisions using the Factory design pattern. A hash map can be a simple factory.
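For instance, a hash map can stand in as the whole factory (a hypothetical sketch; the names are mine): the key selects a pre-built behavior object, so there is no if or switch at the point of use.

```java
import java.util.Map;
import java.util.function.IntBinaryOperator;

public class MapFactory {
    // The "factory": each key maps to a ready-made behavior object.
    static final Map<String, IntBinaryOperator> OPS = Map.of(
        "+", (a, b) -> a + b,
        "*", (a, b) -> a * b
    );

    static int apply(String op, int a, int b) {
        // getOrDefault covers the unknown-key case without an if
        return OPS.getOrDefault(op, (x, y) -> 0).applyAsInt(a, b);
    }

    public static void main(String[] args) {
        System.out.println(apply("+", 2, 3));   // 5
        System.out.println(apply("*", 2, 3));   // 6
    }
}
```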

•  You can sometimes use max and min to map a range of values onto a smaller set and then use an index into a collection to choose an object. max and min are library functions so we don't care here how they might be implemented.
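A small illustration of that last hint, with invented thresholds: min and max clamp a score onto a small index range, and an array lookup then chooses the object.

```java
public class ClampDispatch {
    static final String[] LABELS = { "low", "medium", "high" };

    static String classify(int score) {
        // map score/34 onto 0..2: 0-33 -> low, 34-67 -> medium, 68+ -> high
        int index = Math.min(score / 34, LABELS.length - 1);
        return LABELS[Math.max(index, 0)];
    }

    public static void main(String[] args) {
        System.out.println(classify(50));    // medium
        System.out.println(classify(200));   // high
    }
}
```

The arithmetic does the selecting; no conditional appears in the code.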

At the end of the workshop, we gave one piece of general advice: Doing this exercise once is not enough. Like an étude, it can be practiced often, in order to develop and internalize the necessary skills and thought patterns.

Conclusion

I'll not give any answers or examples here, so that readers can take the challenge themselves and try their hand at writing code. In future posts, I'll write up examples of the techniques described in the hints, and perhaps a few others.

Joe wrote up his étude in Variations on a Polymorphic Theme. In it, he gives some advice and a short example.

Serge Demeyer, Stéphane Ducasse, and Oscar Nierstrasz wrote a wonderful paper that places if statements in the larger context of an OO system, Transform Conditionals: a Reengineering Pattern Language.

If you like the Polymorphism Challenge, you might want to try some other challenges that ask you to do without features of your preferred language or programming style that you consider essential. Check out these Programming Challenges.

Remember, constraints help us grow. Constraints are where creative solutions happen.

I'll close with the same advice we gave at the end of the workshop: Doing this exercise once is not enough, especially for OO novices. Practice it often, like an étude or kata. Repetition can help you develop the thought patterns of an OO programmer, internalize them, and build the skills you need to write good OO programs.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

February 13, 2012 4:13 PM

The Patterns, They Are A-Changin'

Seth Godin's recent blog entry at the Domino Project talks about how changes in the book are changing the publishing industry. He doesn't use the word 'pattern' in his discussion, in the sense of an Alexandrian pattern, but that's how I see his discussion. The forces at play in our world are changing, which leads to changes in the forms that find equilibrium. In particular, Godin mentions:

  • Patterns of length. There are tradeoffs involving the cost of binding and the minimum viable selling price for publishers versus the technology of binding and a maximum viable purchase price for consumers. These have favored certain sizes in print.
  • Patterns of self-sufficiency. "Electronic forms link." Print forms must stand on their own.
  • Patterns of consumption. These are driven even more by economic forces than the other two types, not by technical forces. Consuming e-books is, he says, "more like browsing than buying."

Godin looks mostly at the forward implications of changes in the patterns of self-sufficiency, but I've been thinking about the backward implications of print publications having to stand on their own. As noted in a recent entry, I have begun to adapt a couple of my blog entries into articles for print media, such as newspaper and magazine opinion pieces. My blog entries link generously and regularly to my earlier writings, because much of what I write is part of an ongoing process of thinking out loud. I also link wherever I can to other peoples' works, whether blogs, journal articles, code, or other forms. That works reasonably well in a blog, because readers can see and follow the context in which the current piece is written. It also means that I don't have to re-explain every idea that a given entry deals with; if it's been handled well in a previous entry, I link to it.

As I try to adapt individual blog entries, I find that they are missing so much context when we strip the links out. In some places, I can replace the link with a few sentences of summary. But how much should I explain? It's easy to find myself turning a one- or two-page blog entry into four pages, or ten. The result is that the process of "converting an entry into an article" may become more like writing a new piece than like modifying an existing piece. That's okay, of course, but it's a different task and requires a different mindset.

For someone steeped in Alexander's patterns and the software patterns community, this sentence by Godin signals a shift in the writing and publishing patterns we are all used to:

As soon as paper goes away, so do the chokepoints that created scarcity.

Now, the force of abundance begins to dominate scarcity, and the forces of bits and links begin to dominate paper and bindings and the bookshelves of the neighborhood store.

It turns out that the world has for the last hundreds of years been operating within a small portion of the pattern language of writing and publishing books. As technology and people change, the equilibrium points in the publishing space have changed... and so we need to adopt a new set of patterns elsewhere in the pattern language. At first blush, this seems like unexplored territory, but in fact it is more than that. This part of the pattern language is, for the most part, unwritten. We have to discover these patterns for ourselves.

Thus, we have the Domino Project and others like it.

The same shifting of patterns is happening in my line of work, too. A lot of folks are beginning to explore the undiscovered country of university education in the age of the internet. This is an even greater challenge, I think, because people and how they learn are even more dominant factors in the education pattern language than in the publishing game.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

February 03, 2012 4:46 PM

Tactical Decisions That Affect Students

My recent post about teaching with authenticity and authority is about attitude. Some teaching behaviors happen in the moment. They are tactical decisions in response to specific events, but they, too, can have long-term effects on how our students see us and on how well a course proceeds.

A conversation with a colleague this week reminded me of a situation I've encountered several times over the years with faculty from across my university. Many have a lot of experience teaching yet seem to make the same bad tactical decision.

Our university has a policy -- approved by the faculty -- that students are not to be penalized for missing class to participate in university-sanctioned events. These often involve representing the school in athletic and other extracurricular activities such as debate, but sometimes they relate to important on-campus functions involving scholarships and the like.

Many faculty have attendance policies in their courses. For instance, they may take roll each day, and if a student misses more than four class meetings, then the student's grade in the course is reduced. These faculty genuinely believe that class participation is essential and want to encourage students to take attendance seriously. Because they have a large allowance for absences, many do not distinguish between "unexcused" absences and "excused" ones. That simplifies bookkeeping, and in the end it almost never matters to the student's final grade.

Still some students worry that they'll end up with unexpected reasons to miss the allotment of "free" days. When they have a university-sanctioned activity, they want the professor not to hold it against them. Don't worry, the prof tells them, it's unlikely to affect your grade, and if it does, I'll reconsider. Still the students worry, and point to the policy that they should not be penalized for the absence.

When I have talked to faculty in these situations, often in my role as a faculty rep on my university's athletics advisory council but sometimes as department head, I am surprised by the faculty's responses when they learn of the faculty-approved policy.

"My attendance sheet is a record of fact. I cannot mark students present if they are absent."

Couldn't you treat the sheet as a record of unexcused absences?

"No. It also indicates that students have participated in class that day, and at what level. If they are absent, then they have not participated."

I am careful in these conversations to respect their autonomy and their control over the classroom, and the conversations go quite well. They understand the student's concern. They readily acknowledge that the absence is unlikely to have an effect on the student's grade and assure me that they have assured the student as much. They just don't want to mess with their systems. And they are baffled by the student's continued unhappiness with the result.

This baffles me. If attendance or absence on that one day is unlikely to have an effect on the student's grade, what is the advantage of falling on one's sword over such a small issue? And, metaphorically, falling on one's sword is what happens. The student is disenchanted at best, and unhappy enough to grieve the action at worst.

I have come to see such situations in this way: "unlikely to have an effect on the student's grade" creates an element of uncertainty. As teachers, we can bear the uncertainty, or we can leave our students to bear it. There may well be good reasons to leave uncertainty in our students' minds. We should ask ourselves, "Is this one of them?" If not, then we should bear it. Tell the student clearly, "This will not affect your grade", and make whatever change in our system of recording or grading needed to make it so.

The result of this approach is, most likely, a student with a positive outlook about us and about our course. The lack of fear or uncertainty frees them to learn better. The student may even think that the prof cares more about the student's learning than about his or her system.

Of course, all bets are off for repeated offenders, or for students who exhibit patterns of disengagement from the course. Students have to be invested in their own learning, too.

This scenario is but one of many in which instructors are required to make tactical decisions. The effects of bad decisions can accumulate, in a single student and in the class more generally. They then have negative effects on student attitude, on the atmosphere in the classroom, and on teaching and learning. The good professors I know seem to reliably make good tactical decisions in the moment, and when they err they are willing and able to make amends.

The decisions one makes in such situations are a direct result of one's general attitude, but I think that we can all learn to make better decisions. It is a skill that teachers can learn in the process of becoming better teachers.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 01, 2012 5:00 PM

"You Cannot Trust Your Creativity Yet"

You've got to learn your instrument.
Then, you practice, practice, practice.
And then, when you finally get up there on the bandstand,
forget all that and just wail. -- Charlie Parker

I signed up for an opportunity to read early releases of a book in progress, Bootstrapping Design. Chapter 4 contains a short passage that applies to beginning programmers, too:

Getting good at design means cultivating your taste. Right now, you don't have it. Eventually you will, but until then you cannot trust your creativity. Instead, focus on simplicity, clarity, and the cold, hard science of what works.

M.C. Escher, 'Hands'

This is hard advice for people to follow. We like to use our brains, to create, to try out what we know. I see this desire in many beginning programming students. The danger grows as our skills grow. One of my greatest frustrations comes in my Programming Languages course. Many students in the course are reluctant to use straightforward design patterns such as mutual recursion.
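Mutual recursion deserves a concrete illustration. The course teaches it in Scheme, but the shape of the pattern survives translation. Here is a minimal sketch in Python; the function names and the "s-list" example are my own, not taken from any course materials. Two functions recurse into each other, one per syntactic category:

```python
# Mutual recursion over an "s-list": a list whose items are either
# symbols (represented here as strings) or s-lists themselves.

def count_symbols_in_slist(slist):
    """Count the symbols in an s-list by handing each item to its partner."""
    total = 0
    for item in slist:
        total += count_symbols_in_item(item)
    return total

def count_symbols_in_item(item):
    """An item is either a nested s-list (recurse back) or a symbol (count 1)."""
    if isinstance(item, list):
        return count_symbols_in_slist(item)
    return 1

print(count_symbols_in_slist(["a", ["b", "c"], [["d"], "e"]]))  # → 5
```

The design choice worth noticing: the grammar has two mutually referential categories, so the program has two mutually referential functions. The code's structure is the grammar's structure.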

At one level, I understand their mindset. They have started to become accomplished programmers in other languages, and they want to think, design, and program for themselves. Oftentimes, their ill-formed approaches work okay in the small, even if the code makes the prof cringe. As our programs grow, though, the disorder snowballs. Pretty soon, the code is out of the student's control. The prof's, too.

A good example of this phenomenon, in both its positive and negative forms, happened toward the end of last semester's course. A series of homework assignments had the students growing an interpreter for a small, Scheme-like language. It eventually became the largest functional program they had ever written. In the end, there was a big difference between code written by students who relied on "the cold, hard science" we covered in class and code written by students who had wandered off into the wilderness of their own creativity. Filled with common patterns, the former was relatively easy to read, search, and grade. The latter... not so much. Even some very strong students began to struggle with their own code. They had relied too much on their own approaches for decomposing the problem and organizing their programs, but those ideas weren't scaling well.
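To make "the cold, hard science" concrete: the winning structure in an interpreter is one case per expression form, one recursive call per subexpression. This is my own toy sketch in Python, not the course's Scheme interpreter, and the tuple representation of expressions is an assumption made purely for illustration:

```python
# A tiny interpreter whose shape follows the grammar: one case per
# expression form, one recursive call per subexpression. An expression
# is a number (literal), a string (variable reference),
# ("lambda", param, body), or a 2-tuple (operator_exp, operand_exp).

def evaluate(exp, env):
    if isinstance(exp, (int, float)):       # literal: evaluates to itself
        return exp
    if isinstance(exp, str):                # variable: look it up
        return env[exp]
    if exp[0] == "lambda":                  # abstraction: close over env
        _, param, body = exp
        return lambda arg: evaluate(body, {**env, param: arg})
    operator, operand = exp                 # application: evaluate both parts
    return evaluate(operator, env)(evaluate(operand, env))

# ((lambda (x) x) 42)
print(evaluate((("lambda", "x", "x"), 42), {}))  # → 42
```

A grader can find each language feature in exactly one place; a student adding a feature adds one case. That is the property the ad hoc decompositions lost as the program grew.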

I think what happens is that, over time, small errors, missteps, and complexities accumulate. It's almost like the effect of rounding error when working with floating point numbers. I vividly remember experiencing that in my undergrad Numerical Analysis courses. Sadly, few of our CS students these days take Numerical Analysis, so their understanding of the danger is mostly theoretical.

Perhaps the most interesting embodiment of trusting one's own creativity too much occurred on the final assignment of the term. After several weeks and several assignments, we had a decent sized program. Before assigning the last set of requirements, I gave everyone in the class a working solution that I had written, for reference. One student was having so much trouble getting his own program to work correctly, even with reference to my code, that he decided to use my code as the basis for his assignment.

Imagine my surprise when I saw his submission. He used my code, but he did not follow the example. The code he added to handle the new requirements didn't look anything like mine, or like what we had studied in class. It repeated many of the choices that had gotten him into hot water over the course of the earlier assignments. I could not help but chuckle. At least he is persistent.

It can be hard to trust new ideas, especially when we don't understand them fully yet. I know that. I do the same thing sometimes. We feel constrained by someone else's programming patterns and want to find our own way. But those patterns aren't just constraints; they are also a source of freedom. I try to let my students grow in freedom as they progress through the curriculum, but sometimes we encounter something new like functional programming and have to step back into the role of uncultivated programmer and grow again.

There is great value in learning the rules first and letting our tastes and design skill evolve slowly. Seniors taking project courses are ready, so we turn them loose to apply their own taste and creativity on Big Problems. Freshmen usually are not yet able to trust their own creativity. They need to take it slow.

To "think outside the box", you have to start with a box. That is true of taste and creativity as much as it is of knowledge and skill.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

January 31, 2012 4:01 PM

Teaching with Authenticity and Authority

What is this?
A new teaching with authority.
-- Mark 1:27

Over the last few weeks, we've been processing student assessments from fall semester. Reading student comments about my course and other profs' courses has me thinking about the different ways in which students "see" their instructors. Two profs can be equally knowledgeable in an area yet give off very different vibes to their class. The vibe has a lot to do with how students interpret the instructor's behavior. It also affects student motivation and, ultimately, student learning.

Daniel Lemire recently offered two rules for teaching in the 21st century, one of which was to be an authentic role model. If students know that "someone ordinary" like a professor was able to master the course material, then they will have reason to believe that they can do the same. Authenticity is invaluable if we hope to model the mindset of a learner for our students.

It is also a huge factor in the classroom in another way as well. Students are also sensitive to whether we are authentic users of knowledge. If I am teaching agile approaches to software development but students perceive that I am not an agile developer when writing my own code outside the course, then they are less likely to take the agile approaches seriously. If I am teaching the use of some theoretical technique for solving a problem, say, nondeterministic finite state machines, but my students perceive that I do something else when I'm not teaching the course, then their motivation to master the technique wanes.

I think that how students "see" their instructors is affected by something as important as being authentic: being authoritative. I don't mean authority in the sense of power, in particular power granted from on high. In my experience, that sort of authority is surprisingly ineffective as a tool for getting students to follow me into a difficult forest of knowledge. They respect the authority, perhaps, but they tend to balk at orders to march into the dark.

I also don't mean authority born out of perfection. If that were the case, I could never step into a classroom. Just today, I made an error in my compilers course while demonstrating a technique for converting nondeterministic finite automata into deterministic automata. Such errors occur frequently enough to disabuse both me and my students of the thought that mastery means always getting the right answers. (But not so often, I hope, as to undermine students' confidence in my mastery of the material.)

Instead, I am thinking of the persuasive or influential sense of authority that comes out of mastery itself. This sense of the word is more faithful to the origin of the word in the Latin auctor, also the source of "author". An author is someone who originates or gives existence to something. Among the earliest uses of "auctor" or "author" was to refer to renowned scholars in medieval universities. Such scholars created new knowledge, or at least discovered it and incorporated it into the knowledge of the community. They did not simply mimic back what was already understood.

I suspect that this is what the crowds thought in the Bible passage from Mark given above. The scribes of the time could read scripture and recite it back to people. They could even teach by elaborating on it or relating it to the lives of the people. But the essence of their teaching was already present in the scripture. Along came Jesus, who taught the scripture, but more. His elaborations and parables brought new depth to the old laws and stories. They taught something new. He was speaking with authority.

The sense of authorship or origination is key. We see this sense of authority when we speak of authors. In modern times, the word "auctor" is sometimes used to denote the person who donates the genetic material that serves as the basis for a clone. This usage draws even more heavily on the idea of creation residing in the root word.

That's all fine as philosophy and etymology, but I think this matters in the classroom, too. The authoritative teacher does not simply "read the book" and then lecture, which at a certain level is simply reading the book to the students. Good teachers understand the material at a deeper level. They use it. They apply the techniques and make connections to other topics, even other courses and disciplines. They create new ideas with the course material. They don't simply relay information to students; they also create knowledge.

Such teachers are able to teach with authority.

One of the big challenges for instructors, especially new ones, is to learn how to teach with this kind of authority. Some assume that their status as professors grants them the sort of authority founded in power, and that is what they project to their students. They know their stuff, yet they don't speak with the sort of authority that resonates with their students. When the students respond with less than capitulation, the professor reacts with power or defensiveness. These only make the problem worse.

It takes experience in the classroom for most of us to figure this out. I know it took me a while. It also takes an openness to change and a willingness to learn. Fortunately, these are traits possessed by many people who want to teach.

When things are going wrong in my classes these days, I try to step back from the situation. I ask myself, Am I being authentic? Am I speaking with an authority born out of scholarship, rather than an authority born out of status? Am I letting my students see these things?


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

January 23, 2012 4:06 PM

Learning To Do Things, and the Liberal Arts

Last summer, I wrote an entry, Failure and the Liberal Arts, which suggested (among other things) that the value of a liberal education lies in the intersections between disciplines and ideas. This contrasts with the view seemingly held by many people that a liberal education means to study the "liberal arts", mostly the humanities and arts.

The traditional liberal arts education faces a lot of challenges in the current cultural and economic climate. As colleges and universities cope with rising costs, increasing competition for students, and an unbundling of the university's products engendered by technology, undergraduate degrees filled only with history, literature, and upper-class culture -- and no obvious preparation for economic life after graduation -- create another barrier to making a sale to prospective parents and students.

The goals of a liberal education are laudable. How can we craft undergraduate experiences that help students to develop broad skills in reading, writing, and thinking? Isn't that what the traditional liberal arts do best?

Timothy Burke nails the answer to these questions in this comment on his own blog:

... a person becomes most skilled at creating, making, innovating, thinking, in ways that have value in existing professions *and* in life through indirect means. Studying communication doesn't make you a better communicator, studying entrepreneurship doesn't make you a better entrepreneur, and so on.

But this has huge, huge implications as a perspective for the content and practice of "liberal arts" as an educational object. ... Maybe learning to do things (creating, innovating, expressing, etc.) isn't advanced by anything resembling intensive study of a fixed body of knowledge, but by doing.

I couldn't agree more. This idea lies at the heart of my blog, in both theme and name: knowing and doing. Learning to do things is best accomplished by doing things, not (just) by studying a fixed body of knowledge, however intensive the studying or valuable the body of knowledge. Sure, at some point, you gotta know stuff, and intensive study of a domain is a useful and valuable enterprise. But we learn best by doing.

My compilers students will -- if all goes as we plan and hope -- learn much more this semester than how to write a compiler. They will learn things that I can't teach them directly with a lecture. And even when I could give them the lecture, they would not learn it in the same way as when they learn it on the ground, in the trenches, one shovel of dirt at a time.

Liberal arts colleges -- and universities with liberal arts cores and general education programs -- would do well to take this lesson to heart sooner rather than later.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

January 19, 2012 4:23 PM

An Adventure in Knowing Too Much ... Or Thinking Too Little

To be honest, it is a little of both.

I gave the students in my compilers course a small homework assignment. It's a relatively simple translation problem whose primary goals are to help students refresh their programming skills in their language of choice and to think about the issues we will be studying in depth in coming weeks: scanning, parsing, validating, and generating.

I sat down the other day to write a solution ... and promptly made a mess.

In retrospect, my problem was that I was somewhere in between "do it 'simple'" and "do it 'right'". Unlike most of the students in the class, I already know a lot about building compilers. I could use all that knowledge and build a multi-stage processor that converts a string in the source language (a simple template language) into a string in the target language (ASCII text). But writing a scanner, a parser, a static analyzer, and a text generator seems like overkill for such a simple problem. Besides, my students aren't likely to write such a solution, which would make my experience less valuable in helping them to solve the problem, and my program less valuable as an example of a reasonable solution.

So I decided to keep things simple. Unfortunately, though, I didn't follow my own agile advice and do the simplest thing that could possibly work. As with the full-compiler option, I don't really want the simplest program that could possibly work. This problem is simple enough to solve with a single-pass algorithm, processing the input stream at the level of individual characters. That approach would work but would obscure the issues we are exploring in the course in a lot of low-level code managing states and conditions. Our goal for the assignment is understanding, not efficiency or language hackery.

I was frustrated with myself, so I walked away.

Later in the day, I was diddling around the house and occasionally mulling over my situation. Suddenly a solution came to mind. It embodied a simple understanding of the problem, in the middle ground between too simple and too complex that was just right.
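The entry never shows the code, so what follows is only a guess at what such a middle-ground shape can look like: in Python, for a made-up template language with {{name}} placeholders (nothing here comes from the actual assignment). The idea is to tokenize first and generate second, so no character-level state machine is needed, but without building a full parser either:

```python
import re

# Middle-ground translator sketch: two small passes instead of either
# a character-level state machine or a full scanner/parser/generator.
# Hypothetical syntax: {{name}} placeholders replaced from a dictionary.

TOKEN = re.compile(r"\{\{(\w+)\}\}")

def tokenize(template):
    """Split a template into ('text', literal) and ('var', name) tokens."""
    tokens, pos = [], 0
    for match in TOKEN.finditer(template):
        if match.start() > pos:
            tokens.append(("text", template[pos:match.start()]))
        tokens.append(("var", match.group(1)))
        pos = match.end()
    if pos < len(template):
        tokens.append(("text", template[pos:]))
    return tokens

def generate(tokens, bindings):
    """Emit output token by token; an undefined name raises a KeyError."""
    return "".join(text if kind == "text" else bindings[text]
                   for kind, text in tokens)

print(generate(tokenize("Hello, {{name}}!"), {"name": "world"}))
# → Hello, world!
```

Each pass is short enough to read at a glance, and the token list makes the compiler-course issues (scanning, then generating) visible without drowning them in bookkeeping.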

I had written my original code in a test-first way, but that didn't help me avoid my mess. I know that pair programming would have. My partner would surely have seen through the complexity I was spewing to the fact that I was off track and said, "Huh? Cut that out." Pair programming is an unsung hero in cases like this.

I wonder if this pitfall is a particular risk for CS academics. We teach courses that are full of details, with the goal of helping students understand the full depth of a domain. We often write quick and dirty code for our own purposes. These are at opposite ends of the software development spectrum. In the end, we have to help students learn to think somewhere in the middle. So, we try to show students well-designed solutions that are simple enough, but no simpler. That's a much more challenging task than writing a program at either extreme. Not being full-time developers, perhaps our instincts for finding the happy medium aren't as sharp as they might be.

As always, though, I had fun writing code.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

January 10, 2012 4:05 PM

Looking Forward: Preparing for Compilers

Spring semester is underway. My compilers course met for the first time today. After all these years, I still get excited at the prospect of writing a compiler. On top of that, we get to talk about programming languages and programming all semester.

I've been preparing for the course since last semester, during the programming languages course I debriefed recently. I've written blog entries as I planned previous offerings of the compiler course, on topics such as short iterations and teaching by example, fifteen compilers in fifteen weeks and teaching the course backwards. I haven't written anything yet this time for one of the same reasons I haven't been writing about my knee rehab: I haven't had much to say. Actually, I have two small things.

First, on textbooks. I found that the textbook I've used for the last few offerings of the course now costs students over $140, even at Amazon. That's no $274.70, but sheesh. I looked at several other popular undergrad compiler texts and found them all to be well over $100. The books my students might want to keep for their professional careers are not suitable for an undergrad course, and the ones that are suitable are expensive. I understand the reasons why yet can't stomach hitting my students with such a large bill. The Dragon book is the standard, of course, but I'm not convinced it's a good book for my audience -- too few examples, and so much material. (At least it's relatively inexpensive, at closer to $105.)

I found a few compiler textbooks available free on-line, including that $275 book I like. Ultimately I settled on Torben Mogensen's Basics of Compiler Design. It covers the basic material without too much fluff, though it lacks a running example with full implementation. I'll augment it with my own material and web readings. The price is certainly attractive. I'll let you know how it works out.

Second, as I was filing a pile of papers over break, I ran across the student assessments from the last offering of the course. Perfect timing! I re-read them and am able to take into account student feedback. The last group was pretty pleased with the course and offered two broad suggestions for improvement: more low-level details and more code examples. I concur in both. It's easy when covering so many new ideas to stay at an abstract level, and the compiler course is no exception. Code examples help students connect the ideas we discuss with the reality of their own projects.

These are time-consuming improvements to make, and time will be at a premium with a new textbook for the course. This new text makes them even more important, though, because it has few code examples. My goal is to add one new code example to each week of the course. I'll be happy if I manage one really good example every other week.

And we are off.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 04, 2012 3:26 PM

Looking Backward: Closing the Book on Fall Semester

At Week 2 of my fall programming languages course, I planned to write an entry called More Concrete Ideas for the course, following up on an entry about extravagant ideas. I even had a geeky Scheme joke to open, asking whether that was ((More Concrete) Ideas) or (More (Concrete Ideas)).

Suddenly, or so it seemed, we were two-thirds of the way through the course, and I still hadn't written. Now we are three weeks past the end of the course. Time to debrief. (At least I'm not waiting to debrief the course until the next time we offer it, in Spring 2013!)

Due to a recent change in my department's prerequisite structure, I had a more diverse student population this semester than I have ever had in Programming Languages. Their diversity covered multiple dimensions, including the languages they knew well and the level of experience they had writing larger programs. Many did not know OOP or Java yet, which have long been staples of my reference set.

This led to an interesting phenomenon. One particular student was at the low end of background, experience, and skill set. In most ways, he was more of a rank beginner than the usual student in the course. Over the course of the semester, he asked me questions unlike any I had been asked in this course for a long while, if ever. His questions caused me to step back and focus on basic elements of the craft of programming that I have never discussed in the context of this course and functional programming. At times it was frustrating, but most of the time it was a good challenge.

More importantly, his questions exposed for me the sort of difficulties that I often miss when students are doing better. Even students who are doing well in the course develop subtle misconceptions about the material, or start with misconceptions that my teaching does not address. However, their decent scores on quizzes and their general understanding of the material allow me to gloss over misconceptions that come back to trouble us later.

This is the frame of mind I was in when I wrote an entry on open dialogue a few weeks back. Even with my beginner-like student, I did not ask enough questions soon enough. For all I know, students in previous semesters have had similar problems that I never addressed in class. After this round, I am thinking more concretely about where I have gone wrong and how I might do better in the future.

As mentioned earlier, I offered students a chance to document some of their epic failures during the semester. Of course, students always have that opportunity; this time, I offered extra credit as an inducement to take on the challenge.

In the end, roughly a third of the students submitted essays, though all came in very late in the semester, enough so that anything I might fold back into the course will have to wait until the next time around.

I enjoyed several of the essays. One student learned one valuable lesson this semester: "There are only so many hours in a day." Another student wrote,

My usual strategy with all but the most basic of the problems is to stare at them for a while and then walk away.

(A former student wondered, "But does he ever come back?")

Like many programmers, this student lets his brain work on a problem over time and eventually arrives at a solution without the stress of staring at the problem until he gets it right the first time. However, he found this semester that even this approach cannot alleviate all stress, because sometimes no answer seems to work. What then?

I'll try the Epic Fail idea again in coming semesters. I'll probably give students a bit more guidance about the kind of failures I mean (not run-of-the-mill stuff, really) and the kind of write-up that I'm looking for. As always, students appreciate examples!

In that vein, I had thought this semester that I would provide students with occasional example programs (1) to illustrate functional programming and Scheme outside the realm of the study of programming languages and (2) to illustrate programming language topics with non-FP code. The idea was to broaden the reach of the course, to help students struggling with FP and to open doors for those students who were really interested and ready to see more.

Unfortunately, I did not do nearly enough of this. Next time, I will find a way, somehow. Among the topics that can benefit most from this idea are continuation-based web servers and MapReduce.

As I graded the final exam, I was most disappointed with student understanding of curried functions and of the equivalence between local variables and function application. Currying is such a beautiful idea at the heart of functional style, which we saw repeatedly throughout the semester. The local variable/function app idea is one we discussed in some detail at least three times during the semester, in different contexts, and which I highlighted as "something that will be on the final".
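Both ideas fit in a few lines of code. The course used Scheme; this sketch uses Python for illustration, with names of my own choosing:

```python
# Currying: a two-argument function rewritten as a function that takes
# the first argument and returns a function awaiting the second.
def add(x, y):
    return x + y

def curried_add(x):
    return lambda y: x + y

add5 = curried_add(5)          # partial application falls out for free
print(add5(3))                 # → 8

# Local variables as function application: a `let` is just a lambda
# applied immediately to the value being named.
#   (let ((x 5)) (* x x))  ≡  ((lambda (x) (* x x)) 5)
print((lambda x: x * x)(5))    # → 25
```

The second equivalence is why a language doesn't strictly need local variables at all once it has functions: `let` can be defined away as syntactic sugar.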

I'll think long and hard before I teach the course again about ways to help students understand these ideas better. If you have any suggestions, please let me know!

One student tweeted after the course:

I think the larger examples are what helped ideas stick in my mind. This class had the most information I've learned in a class before. It was a challenge which was really refreshing. Gliding through classes put me into a lull for awhile.

It's important for me and other teachers to remember just how heavy some courses can be. I'm fully aware that this course is different than any students have taken before and that it will challenge them in new ways. I tell them so on Day 1. Still, it's good for me to remember just what this means for students on a day-to-day basis, and to find ways to help them face the challenge while they are in the trenches.

All in all, this was a good group of students. They were game for a challenging, deep course and gave me a lot of their time and energy. I can't ask for much more than that. The good news is that they made decent progress and should be better prepared for some of the challenges they'll face in their careers. And a good number of them are going on to the compilers course. Great fun will be had by all.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 19, 2011 4:49 PM

"I Love The Stuff You Never See"

I occasionally read and hear people give advice about how to find a career, vocation, or avocation that someone will enjoy and succeed in. There is a lot of talk about passion, which is understandable. Surely, we will enjoy things we are passionate about, and perhaps then we want to put in the hours required to succeed. Still, "finding your passion" seems a little abstract, especially for someone who is struggling to find one.

This weekend, I read A Man, A Ball, A Hoop, A Bench (and an Alleged Thread)... Teller!. It's a story about the magician Teller, one half of the wonderful team Penn & Teller, and his years-long pursuit of a particular illusion. While discussing his work habits, Teller said something deceptively simple:

I love the stuff you never see.

I knew immediately just what he meant.

I can say this about teaching. I love the hours spent creating examples, writing sample code, improving it, writing and rewriting lecture notes, and creating and solving homework assignments. When a course doesn't go as I had planned, I like figuring out why and trying to fix it. Students see the finished product, not the hours spent creating it. I enjoy both.

I don't necessarily enjoy all of the behind-the-scenes work. I don't really enjoy grading. But my enjoyment of the preparation and my enjoyment of the class itself -- the teaching equivalent of "the performance" -- carries me through.

I can also say the same thing about programming. I love to fiddle with source code, organizing and rewriting it until it's all just so. I love to factor out repetition and discover abstractions. I enjoy tweaking interfaces, both the interfaces inside my code and the interfaces my code's users see. I love that sudden moment of pleasure when a program runs for the first time. Users see the finished product, not the hours spent creating it. I enjoy both.

Again, I don't necessarily enjoy everything that I have to do behind the scenes. I don't enjoy twiddling with configuration files, especially at the interface to the OS. Unlike many of my friends, I don't always enjoy installing and uninstalling all the libraries I need to make everything work in the current version of the OS and interpreter. But that time seems small compared to the time I spend living inside the code, and that carries me through.

In many ways, I think that Teller's simple declaration is a much better predictor of what you will enjoy in a career or avocation than other, fancier advice you'll receive. If you love the stuff other folks never see, you are probably doing the right thing for you.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

December 15, 2011 4:08 PM

Learning More Than What Is Helpful Right Now

Stanley Fish wrote this week about the end of a course he taught this semester, on "law, liberalism and religion". In this course, his students read a number of essays and articles outside the usual legal literature, including works by Locke, Rawls, Hobbes, Kant, and Rorty. Fish uses this essay to respond to recent criticisms that law schools teach too many courses like this, which are not helpful to most students, who will, by and large, graduate to practice the law.

Most anyone who teaches in a university hears criticisms of this sort now and then. When you teach computer science, you hear them frequently. Most of our students graduate and enter the practice of software development. How useful are the theory of computation and the principles of programming languages? Teach 'em Java Enterprise Edition and Eclipse and XSLT and Rails.

My recent entry Impractical Programming, With Benefits starts from the same basic premise that Fish starts from: There is more to know about the tools and methodologies we use in practice than meets the eye. Understanding why something is as it is, and knowing that something could be better, are valuable parts of a professional's preparation for the world.

Fish talks about these values in terms of the "purposive" nature of the enterprise in which we practice. You want to be able to think about the bigger picture, because that determines where you are going and why you are going there. I like his connection to Searle's speech acts and how they help us to see how the story we tell gives rise to the meaning of the details in the story. He uses football as his example, but he could have used computer science.

He sums up his argument in this way:

That understanding is what law schools offer (among other things). Law schools ask and answer the question, "What's the game here?"; the ins and outs of the game you learn later, as in any profession. The complaint ... is that law firms must teach their new hires tricks of the trade they never learned in their contracts, torts and (God forbid) jurisprudence classes. But learning the tricks would not amount to much and might well be impossible for someone who did not know -- in a deep sense of know -- what the trade is and why it is important to practice it.

Such a deep understanding is even more important in a discipline like computing, because our practices evolve at a much faster rate than legal practices. Our tools change even more frequently. When I taught functional programming ten or fifteen years ago, many of my students simply humored me. This wasn't going to help them with Windows programming, but, hey, they'd learn it for my sake. Now they live in a world where Scala, Clojure, and F# are in the vanguard. I hope what they learned in our Programming Languages course has helped them cope with the change. Some of them are even leading the charge.

The practical test of whether my Programming Languages students learned anything useful this semester will come not next year, but ten or fifteen years down the road. And, as I said in the Impractical Programming piece, a little whimsy can be fun in its own right, even as it stretches your brain.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

December 12, 2011 4:43 PM

Getting Serious About Open, Honest Dialogue

A couple of weeks ago, I read this article about the syllabi that late author David Foster Wallace wrote for his intro lit courses. This weekend, I finally got to read the syllabi themselves. I've generally found Wallace's long fiction ponderous, but I enjoyed reading him at the scale of a syllabus.

He sounds like a good teacher. This passage, from pages 3-4 of the syllabus, is an awesome encouragement and defense of asking questions in class:

Anybody gets to ask any question about any fiction-related issues she wants. No question about literature is stupid. You are forbidden to keep yourself from asking a question or making a comment because you fear it will sound obvious or unsophisticated or lame or stupid. Because critical reading and prose fiction are such hard, weird things to try to study, a stupid-seeming comment or question can end up being valuable or even profound. I am deadly serious about creating a classroom environment where everyone feels free to ask or speak about anything she wishes. So any student who groans, smirks, mimes machine-gunning or onanism, eye-rolls, or in any way ridicules some other student's in-class question/comment will be warned once in private and on the second offense will be kicked out of class and flunked, no matter what week it is. If the offender is male, I am also apt to find him off-campus and beat him up.

Wow.

Perhaps this stands out in greater relief to me as this semester's Programming Languages course winds down. We did not have a problem with students shutting down other students' desire to comment and inquire, at least not that I noticed. My problem was getting students to ask questions at all. Some groups of students take care of themselves in this regard; others need encouragement.

I didn't react quickly enough this semester to recognize this as a too-quiet group and to do more to get them to open up. The real problem only became apparent to me at about the 75% mark of the course. It has been driven home further over the last couple of weeks, as a few students have begun to ask questions in preparation for the final. Some of their misconceptions run deep, and we would have all been better off to uncover and address them long ago. I'll be more careful next time.

The above paragraph sets a high standard, one I'm not sure I have the energy or acumen to deliver. Encouragement and policies like this place a huge burden on the instructor, who must walk a very tough walk. Promises made and unkept are usually worse than promises never made at all. This is especially true when the trust they seek to develop involves the fears and personal integrity of students. With promises like these, the professor's personal integrity is on the line, too.

Still, I can aspire to do more. Even if I don't reach Wallace's level, perhaps I can make my course enough better that students will achieve more.

(And I love reading a syllabus that makes me look up the definition of a word. Those literature professors...)


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 02, 2011 4:28 PM

Impractical Programming, with Benefits

I linked to Jacob Harris's recent In Praise of Impractical Programming in my previous entry, in the context of programming's integral role in the modern newsroom. But as the title of his article indicates, it's not really about the gritty details of publishing a modern web site. Harris rhapsodizes about wizards and the magic of programming, and about a language that is for many of my colleagues the poster child for impractical programming languages, Scheme.

If you clicked through to the article but stopped reading when you ran into MIT 6.001, you might want to go back and finish reading. It is a story of how one programmer looks back on college courses that seemed impractical at the time but that, in hindsight, made him a better programmer.

There is a tension in any undergraduate CS program between the skills and languages of today and big ideas that will last. Students naturally tend to prefer the former, as they are relevant now. Many professors -- though not all -- prefer academic concepts and historical languages. I encounter this tension every time I teach Programming Languages and think, should I continue to use Scheme as the course's primary language?

As recently as the 1990s, this question didn't amount to much. There weren't any functional programming languages at the forefront of industry, and languages such as C++, Java, and Ada didn't offer the course much.

But now there are Scala and Clojure and F#, all languages in play in industry, not to mention several "pretty good Lisps". Wouldn't my students benefit from the extensive libraries of these languages? Their web-readiness? The communities connecting the languages to Hadoop and databases and data analytics?

I seriously consider these questions each time I prep the course, but I keep returning to Scheme. Ironically, one reason is precisely that it doesn't have all those things. As Harris learned,

Because Scheme's core syntax is remarkably impoverished, the student is constantly pulling herself up by her bootstraps, building more advanced structures off simpler constructs.

In a course on the principles of programming languages, small is a virtue. We have to build most of what we want to talk about. And there is nothing quite so stark as looking at half a page of code and realizing, "OMG, that's what object-oriented programming is!", or "You mean that's all a variable reference is?" Strange as it may sound, the best way to learn deeply the big concepts of language may be to look at the smallest atoms you can find -- or build them yourself.
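To make the first of those realizations concrete, here is a minimal sketch of the idea, in Python rather than the course's Scheme, with names of my own invention: an "object" is nothing more than a closure over some state, plus a procedure that dispatches on messages.

```python
def make_counter(start=0):
    # The closed-over variable is the object's private state.
    count = start

    def dispatch(message):
        # Dispatching on a message plays the role of method lookup.
        nonlocal count
        if message == "increment":
            count += 1
            return count
        if message == "value":
            return count
        raise ValueError(f"unknown message: {message}")

    return dispatch

counter = make_counter()
counter("increment")
counter("increment")
print(counter("value"))  # 2
```

Half a page of code like this, built in a language with almost nothing given to you, is exactly the sort of thing that triggers the "OMG, that's what object-oriented programming is!" moment.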

Harris won't argue that journalism schools should squander "... dearly-won computer-science credits on whimsical introductions to programming" such as this. I won't even argue that we in CS spend too many of our limited credits on whimsy. But we shouldn't renounce our magic altogether, either, for perfectly practical reasons of learning.

And let's not downplay too much the joy of whimsy itself. Students have their entire careers to muck around in a swamp of XML and REST and Linux device drivers, if that's what they desire. There's something pretty cool about watching Dr. Racket spew, in a matter of a second or two, twenty-five lines of digits as the value of a trivial computation.
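That bit of whimsy is easy to reproduce. In Racket the trivial computation might be (expt 2 1000); Python's integers offer the same exact, arbitrary-precision arithmetic:

```python
# Exact integer arithmetic: the full value, with no overflow and no rounding.
n = 2 ** 1000
print(n)            # all 302 digits, wrapping across many lines
print(len(str(n)))  # 302
```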

As Harris says,

... if you want to advance as a programmer, you need to take some impractical detours.

He closes with a few suggestions, none of which lapse into the sort of navel-gazing and academic irrelevance that articles like this one sometimes do. They all come down to having fun and growing along the way. I second his advice.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

November 30, 2011 12:07 PM

Being Less Helpful

I've noticed myself being less helpful in the last couple of weeks. Students have come to me with questions -- about homework problems, about quizzes earlier in the semester, and about topics we've covered in lecture. Advisees have come to me with questions about degree requirements and spring registration. In most cases, they expect to receive answers: direct, specific, concrete solutions to their problems.

But lately, that's not what I've been giving most of them.

I've been asking a lot of questions myself. I've pointed them to specific resources: lecture notes, department web pages, and the university catalog. All of these resources are available on-line. In the case of my course, detailed lecture notes, all assignments, and all source code are posted on the course web site. Answers to many questions are in there, sometimes lurking, other times screaming in the headlines.

But mostly, I've been asking a lot of questions.

I am not trying to be unhelpful. I'm trying to be more helpful in the longer run. Often, students have forgotten that we had covered a topic in detail in class. They will be better off re-reading the material, engaging it again, than being given a rote answer now. If they've read the material and still have questions, the questions are likely to be more focused. We will be able to discuss more specific misunderstandings. I will usually be able to diagnose a misconception more accurately and address it specifically.

In general, being less helpful is essential to helping students learn to be self-sufficient, to learn better study skills, and to develop better study habits. It's just as important a part of their education as any lecture on higher-order procedures.

(Dan Meyer has been agitating around this idea for a long time in how we teach K-12 mathematics. I often find neat ways to think about being less helpful on his blog, especially in the design of the problems I give my students.)

However, I have to be careful not to be cavalier about being less helpful. First of all, it's easy for being less helpful to morph into being unhelpful. Good habits can become bad habits when left untended.

Second, and more important, trends in the questions that students ask can indicate larger issues. Sometimes, they can do something better to improve their learning, and sometimes I can do something better. For example, from my recent run of being less helpful, I've learned that...

  • Students are not receiving the same kind of information they used to receive about scheduling and programs of study. To address this, I'll be communicating more information to my advisees earlier in the process.
  • Students in Programming Languages are struggling with a certain class of homework problems. To fix this, I plan to revamp several class sessions in the middle of the semester to help them learn some of these ideas better and sooner. I think I'll also provide more concrete pointers on a couple of homework assignments.

Being less helpful now is a good strategy only if both the students and I have done the right kind of work earlier.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

November 18, 2011 2:10 PM

Teachers Working Themselves Out Of a Job

Dave Rooney wrote a recent entry on what he does when coaching in the realm of agile software development. He summarizes his five main tasks as:

  • Listen
  • Ask boatloads of questions
  • Challenge assumptions
  • Teach/coach Agile practices
  • Work myself out of a job

All but the fifth are a part of my job every day. Listening, asking questions, and challenging assumptions are essential parts of helping people to learn, whatever one's discipline or level of instruction. As a CS prof, I teach a lot of courses that instruct or require programming, and I look for opportunities to inject pragmatic programming skills and agile development practices.

What of working myself out of a job? For consultants like Rooney, this is indeed the goal: help an organization get on a track where they don't need his advice, where they can provide coaching from inside, and where they become sufficient in their own practices.

In a literal sense, this is not part of my job. If I do my job well, I will remain employed by the same organization, or at least have that option available to me.

But in another sense, my goals with respect to working myself out of a job are the same as a consultant's, only at the level of individual students. I want to help students reach a point where they can learn effectively on their own. As much as possible, I hope for them to become more self-sufficient, able to learn as an equal member of the larger community of programmers and computer scientists.

A teacher's goal is, in part, to prepare students to move on to a new kind of learning, where they don't need us to structure the learning environment or organize the stream of ideas and information they learn from. Many students come to us pretty well able to do this already; they need only to realize that they don't need me!

With most universities structured more around courses than one-on-one tutorials, I don't get to see the process through with every student I teach. One of the great joys is to have the chance to work with the same student many times over the years, through multiple courses and one-on-one through projects and research.

In any case, I think it's healthy for teachers to approach their jobs from the perspective of working themselves out of a job. Not to worry; there is always another class of students coming along.

Of course, universities as we know them may be disappearing. But the teachers among us will always find people who want to learn.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

November 16, 2011 2:38 PM

Memories of a Programming Assignment

Last Friday afternoon, the CS faculty met with the department's advisory board. The most recent addition to the board is an alumnus who graduated a decade ago or so. At one point in the meeting, he said that a particular programming project from his undergraduate days stands out in his mind after all these years. It was a series of assignments from my Object-Oriented Programming course called Cat and Mouse.

I can't take credit for the basic assignment. I got the idea from Mike Clancy in the mid 1990s. (He later presented his version of the assignment on the original Nifty Assignments panel at SIGCSE 1999.) The problem includes so many cool features, from coordinate systems to stopping conditions. Whatever one's programming style or language, it is a fun challenge. When done OO in Java, with great support for graphics, it is even more fun.

But those properties aren't what he remembers best about the project. He recalls that the project took place over several weeks and that each week, I changed the requirements of the assignment. Sometimes, I added a new feature. Other times, I generalized an existing feature.

What stands out in his mind after all these years is getting the new assignment each week, going home, reading it, and thinking,

@#$%#^. I have to rewrite my entire program.

You see, he had hard-coded assumptions throughout his code. Concerns were commingled, not separated. Objects were buried inside larger pieces of code. Model was tangled up with view.

So, he started from scratch. Over the course of several weeks, he built an object-oriented system. He came to understand dynamic polymorphism and patterns such as MVC and decorator, and found ways to use them effectively in his code.
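The decorator pattern he discovered fits in a few lines. Here is a hypothetical sketch, in Python rather than the course's Java, with names invented for illustration: a wrapper that adds behavior while presenting the same interface as the thing it wraps.

```python
class Sprite:
    """A plain drawable participant in the simulation."""
    def draw(self):
        return "mouse"

class WithTrail:
    """Decorator: wraps any drawable, adds behavior, keeps the interface."""
    def __init__(self, inner):
        self.inner = inner

    def draw(self):
        return self.inner.draw() + " leaving a trail"

# Client code treats plain and decorated objects identically.
print(Sprite().draw())             # mouse
print(WithTrail(Sprite()).draw())  # mouse leaving a trail
```

Because the decorator preserves the interface, a new requirement can often be met by wrapping existing objects rather than rewriting them -- the lesson the weekly requirement changes were designed to teach.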

He remembers the dread, but also that this experience helped him learn how to write software.

I never know exactly what part of what I'm doing in class will stick most with students. From semester to semester and student to student, it probably varies quite a bit. But the experience of growing a piece of software over time in the face of growing and changing requirements is almost always a powerful teacher.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

November 12, 2011 10:40 AM

Tools, Software Development, and Teaching

Last week, Bret Victor published a provocative essay on the future of interaction design that reminds us we should be more ambitious in our vision of human-computer interaction. I think it also reminds us that we can and should be more ambitious in our vision of most of our pursuits.

I couldn't help but think of how Victor's particular argument applies to software development. First he defines "tool":

Before we think about how we should interact with our Tools Of The Future, let's consider what a tool is in the first place.

I like this definition: A tool addresses human needs by amplifying human capabilities.


That is, a tool converts what we can do into what we want to do. A great tool is designed to fit both sides.

The key point of the essay is that our hands have much more consequential capabilities than our current interfaces use. They feel. They participate with our brains in numerous tactile assessments of the objects we hold and manipulate: "texture, pliability, temperature; their distribution of weight; their edges, curves, and ridges; how they respond in your hand as you use them". Indeed, this tactile sense is more powerful than the touch-and-slide interfaces we have now and, in many ways, is more powerful than even sight. These tactile senses are real, not metaphorical.

As I read the essay, I thought of the software tools we use, from language to text editors to development processes. When I am working on a program, especially a big one, I feel much more than I see. At various times, I experience discomfort, dread, relief, and joy.

Some of my colleagues tell me that these "feelings" are metaphorical, but I don't think so. A big part of my affinity for so-called agile approaches is how these sensations come into play. When I am afraid to change the code, it often means that I need to write more or better unit tests. When I am reluctant to add a new feature, it often means that I need to refactor the code to be more hospitable. When I come across a "code smell", I need to clean up, even if I only have time for a small fix. YAGNI and doing the simplest thing that can possibly work are ways that I feel my way along the path to a more complete program, staying in tune with the code as I go. Pair programming is a social practice that engages more of my mind than programming alone.
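The link between fear and missing tests can be made concrete. As a hypothetical illustration (the function and its names are mine, not from the essay), a single characterization test pins down current behavior so that changing the code stops being scary:

```python
def slugify(title):
    # Today's behavior, which we want to preserve while refactoring.
    return title.strip().lower().replace(" ", "-")

def test_slugify_preserves_behavior():
    # A characterization test: any regression in a later rewrite
    # shows up immediately as a failed assertion.
    assert slugify("  Knowing and Doing ") == "knowing-and-doing"

test_slugify_preserves_behavior()
```

With a net like this in place, the reluctance to touch the code fades, which is exactly the sensation the practice is meant to address.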

Victor closes with some inspiration for inspiration:

In 1968 -- three years before the invention of the microprocessor -- Alan Kay stumbled across Don Bitzer's early flat-panel display. Its resolution was 16 pixels by 16 pixels -- an impressive improvement over their earlier 4 pixel by 4 pixel display.

Alan saw those 256 glowing orange squares, and he went home, and he picked up a pen, and he drew a picture of a goddamn iPad.

We can think bigger about so much of what we do. The challenge I take from Victor's essay is to think about the tools I use to teach: what needs do they fulfill, and how well do they amplify my own capabilities? Just as important are the tools we give our students as they learn: what needs do they fulfill, and how well do they amplify our students' capabilities?


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

November 05, 2011 10:09 AM

Is Computing Too Hard, Too Foreign, or Too Disconnected?

A lot of people are discussing a piece published in the New York Times yesterday, Why Science Majors Change Their Minds (It's Just So Darn Hard). It considers many factors that may be contributing to the phenomenon, such as low grades and insufficient work habits.

Grades are typically much lower in STEM departments, and students aren't used to getting that kind of mark. Ben Deaton argues that this sort of rigor is essential, quoting his undergrad steel design prof: "If you earn an A in this course, I am giving you a license to kill." Still, many students think that a low grade -- even a B! -- is a sign that they are not suited for the major, or for the specialty area. (I've had students drop their specialty in AI after getting a B in the foundations course.)

Most of the students who drop under such circumstances are more than capable of succeeding. Unfortunately, they have not usually developed the disciplined work habits they need to succeed in such challenging majors. It's a lot easier to switch to a different major where their current skills suffice.

I think there are two more important factors at play. On the first, the Times article paraphrases Peter Kilpatrick, Notre Dame's Dean of Engineering:

... it's inevitable that students will be lost. Some new students do not have a good feel for how deeply technical engineering is.

In computer science, our challenge is even bigger: most HS students don't have any clue at all what computer science is. My university is nearing the end of its fall visit days for prospective students, who are in the process of choosing a college and a major. The most common question I am asked is, "What is computer science?", or its cousin, "What do computer scientists do?". This question comes from even the brightest students, ones already considering math or physics. Even more students walk by the CS table with their parents with blank looks on their faces. I'm sure some are thinking, "Why consider a major I have no clue about?"

This issue also plagues students who decide to major in CS and then change their minds, which is the topic of the Times article. Students begin the major not really knowing what CS is, they find out that they don't like it as much as they thought they might, and they change. Given what they know coming into the university, it really is inevitable that a lot of students will start and leave CS before finishing.

On the second factor I think most important, here is the money paragraph from the Times piece:

But as Mr. Moniz (a student exceedingly well prepared to study engineering) sat in his mechanics class in 2009, he realized he had already had enough. "I was trying to memorize equations, and engineering's all about the application, which they really didn't teach too well," he says. "It was just like, 'Do these practice problems, then you're on your own.'" And as he looked ahead at the curriculum, he did not see much relief on the horizon.

I have written many times here about the importance of building instruction around problems, beginning with Problems Are The Thing. Students like to work on problems, especially problems that matter to someone in the world. Taken to the next level, as many engineering schools are trying to do, courses should -- whenever possible -- be built around projects. Projects ground theory and motivate students, who will put in a surprising effort on a project they care about or think matters in the world. Projects are also often the best way to help students understand why they are working so hard to memorize and practice tough material.

In closing, I can take heart that schools like mine are doing a better job retaining majors:

But if you take two students who have the same high school grade-point average and SAT scores, and you put one in a highly selective school like Berkeley and the other in a school with lower average scores like Cal State, that Berkeley student is at least 13 percent less likely than the one at Cal State to finish a STEM degree.

Schools like mine tend to teach less abstractly than our research-focused sister schools. We tend to provide more opportunities early in the curriculum to work on projects and to do real research with professors. I think the other public universities in my state do a good job, but if a student is considering an undergrad STEM major, they will be much better served at my university.

There is one more reason for the better retention rate at the "less selective" schools: pressure. The students at the more selective schools are likely to be more competitive about grades and success than the students at the less selective schools. This creates an environment less conducive to learning for most students. In my department, we try not to "treat the freshman year as a 'sink or swim' experience and accept attrition as inevitable" for reasons of Darwinian competition. As the Times article says, this is both unfair to students and wasteful of resources.

By changing our curricula and focusing more on student learning than on how we want to teach, universities can address the problem of motivation and relevance. But that will leave us with the problem of students not really knowing what CS or engineering are, or just how technical and rigorous they need to be. This is an education problem of another sort, one situated in the broader population and in our HS students. We need to find ways to both share the thrill and help more people see just what the STEM disciplines are and what they entail.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 27, 2011 6:44 PM

A Perfect Place to Cultivate an Obsession

And I urge you to please notice when you are happy,
and exclaim or murmur or think at some point,
"If this isn't nice, I don't know what is."
-- Kurt Vonnegut, Jr.

I spent the entire day teaching and preparing to teach, including writing some very satisfying code. It was a nice way to spend a birthday.

With so much attention devoted to watching my students learn, I found myself thinking consciously about my teaching and also about some learning I have been doing lately, including remembering how to write idiomatic Ruby. Many of my students really want to be able to write solid, idiomatic Scheme programs to process little languages. I see them struggle with the gap between their desire and their ability. It brought to mind something poet Mary Jo Bang said in recent interview about her long effort to become a good writer:

For a long time the desire to write and knowing how to write well remained two separate things. I recognized good writing when I saw it but I didn't know how to create it.

I do all I can to give students examples of good programs from which they can learn, and also to help them with the process of good programming. In the end, the only way to close the gap is to write a lot of code. Writing deliberately and reflectively can shorten the path.

Bang sees the same in her students:

Industriousness can compensate for a lot. And industry plus imagination is a very promising combination.

Hard work is the one variable we all control while learning something new. Some of us are blessed with more natural capacity to imagine, but I think we can stretch our imaginations with practice. Some CS students think that they are learning to "engineer" software, a cold, calculating process. But imagination plays a huge role in understanding difficult problems, abstract problems.

Together, industry and time eventually close the gap between desire and ability:

And I saw how, if you steadily worked at something, what you don't know gradually erodes and what you do know slowly grows and at some point you've gained a degree of mastery. What you know becomes what you are. You know photography and you are a photographer. You know writing and you are a writer.

... You know programming, and you are a programmer.

Erosion and growth can be slow processes. As time passes, we sometimes find our learning accelerates, a sort of negative splits for mental exercise.

We work hardest when we are passionate about what we do. It's hard for homework assigned in school to arouse passion, but many of us professors do what we can. The best way to have passion is to pick the thing you want to do. Many of my best students have had a passion for something and then found ways to focus their energies on assigned work in the interest of learning the skills and gaining the experience they need to fulfill their passion.

One last passage from Bang captures perfectly for me what educators should strive to make "school":

It was the perfect place to cultivate an obsession that has lasted until the present.

As a teacher, I see a large gap between my desire to create the perfect place to cultivate an obsession and my ability to deliver. For now, the desire and the ability remain two separate things. I recognize good learning experiences when I see them, and occasionally I stumble into creating one, but I don't yet know how to create them reliably.

Hard work and imagination... I'll keep at it.

If this isn't nice, I don't know what is.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

October 24, 2011 7:38 PM

Simple/Complex Versus Easy/Hard

A few years ago, I heard a deacon give a rather compelling talk to a group of college students on campus. When confronted with a recommended way to live or act, students will often say that living or acting that way is hard. These same students are frustrated with the people who recommend that way of living or acting, because the recommenders -- often their parents or teachers -- act as if it is easy to live or act that way. The deacon told the students that their parents and teachers don't think it is easy, but they might well think it is simple.

How can this be? The students were confounding "simple" and "easy". A lot of times, life is simple, because we know what we should do. But that does not make life easy, because doing a simple thing may be quite difficult.

This made an impression on me, because I recognized that conflict in my own life. Often, I know just what to do. That part is simple. Yet I don't really want to do it. To do it requires sacrifice or pain, at least in the short term. To do it means not doing something else, and I am not ready or willing to forego that something. That part is difficult.

Switch the verb from "do" to "be", and the conflict becomes even harder to reconcile. I may know what I want to be. However, the gap between who I am and who I want to be may be quite large. Do I really want to do what it takes to get there? There may be a lot of steps to take which individually are difficult. The knowing is simple, but the doing is hard.

This gap surely faces college students, too, whether it means wanting to get better grades, wanting to live a healthier life, or wanting to reach a specific ambitious goal.

When I heard the deacon's story, I immediately thought of some of my friends, who like very much the idea of being a "writer" or a "programmer", but they don't really want to do the hard work that is writing or programming. Too much work, too much disappointment. I thought of myself, too. We all face this conflict in all aspects of life, not just as it relates to personal choices and values. I see it in my teaching and learning. I see it in building software.

I thought of this old story today when I watched Rich Hickey's talk from StrangeLoop 2011, Simple Made Easy. I had put off watching this for a few days, after tiring of a big fuss that blew up a few weeks ago over Hickey's purported views about agile software development techniques. I knew, though, that the dust-up was about more than just Hickey's talk, and several of my friends recommended it strongly. So today I watched. I'm glad I did; it is a good talk. I recommend it to you!

Based only on what I heard in this talk, I would guess that Hickey misunderstands the key ideas behind XP's practices of test-driven development and refactoring. But this could well be a product of how some agilistas talk about them. Proponents of agile and XP need to be careful not to imply that tests and refactoring make change or any other part of software development easy. They don't. The programmer still has to understand the domain and be able to think deeply about the code.

Fortunately, I don't base what I think about XP practices on what other people think, even if they are people I admire for other reasons. And if you can skip or ignore any references Hickey makes to "tests as guard rails" or to statements that imply refactoring is debugging, I think you will find this really is a very good talk.

Hickey's important point is that simple/complex and easy/hard are different dimensions. Simplicity should be our goal when writing code, not complexity. Doing something that is hard should be our goal when it makes us better, especially when it makes us better able to create simplicity.

Simplicity and complexity are about the interconnectedness of a system. In this dimension, we can imagine objective measures. Ease and difficulty are about what is most readily at hand, what is most familiar. Defined as they are in terms of a person's experience or environment, this dimension is almost entirely subjective.

And that is good because, as Hickey says a couple of times in the talk, "You can solve the familiarity problem for yourself." We are not limited to our previous experience or our current environment; we can take on a difficult challenge and grow.

a Marin mountain bike

Alan Kay often talks about how it is worth learning to play a musical instrument, even though playing is difficult, at least at the start. Without that skill, we are limited in our ability to "make music" to turning on the radio or firing up YouTube. With it, we are able to make music. Likewise riding a bicycle versus walking, or learning to fly an airplane versus learning to drive a car. None of these skills is necessarily difficult once we learn them, and they enable new kinds of behaviors that can be simple or complex in their own right.

One of the things I try to help my students see is the value in learning a new, seemingly more difficult language: it empowers us to think new and different thoughts. Likewise making the move from imperative procedural style to OOP or to functional programming. Doing so stretches us. We think and program differently afterward. A bonus is that something that seemed difficult before is now less daunting. We are able to work more effectively in a bigger world.

In retrospect, what Hickey says about simplicity and complexity is actually quite compatible with the key principles of XP and other agile methods. Writing tests is a part of how we create systems that are as simple as we can in the local neighborhood of a new feature. Tests can also help us to recognize complexity as it seeps into our program, though they are not enough by themselves to help us see complexity. Refactoring is an essential part of how we eliminate complexity by improving design globally. Refactoring in the presence of unit tests does not make programming easy. It doesn't replace thinking about design; indeed, it is thinking about design. Unit tests and refactoring do help us to grapple with complexity in our code.

Also in retrospect, I gotta make sure I get down to St. Louis for StrangeLoop 2012. I missed the energy this year.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

October 13, 2011 3:10 PM

Learning and New Kinds of Problems

I recently passed this classic by Reg Braithwaite to a grad student who is reading in the areas of functional programming and Ruby. I love how Braithwaite prefaces the technical content of the entry with an exhortation to learners:

... to obtain the deepest benefit from learning a new language, you must learn to think in the new language, not just learn to translate your favourite programming language syntax and idioms into it.

The more different the thing you are learning is from what you already know, the more important this advice becomes. You are already good at solving the problems your current languages solve well!

And worse, when a new tool is applied to a problem you think you know well, you will probably dismiss the things the new tool does well. Look at how many people dismiss brevity of code. Note that all of the people who ignore the statistics about the constant ratio between bugs and lines of code use verbose languages. Look at how many people dismiss continuation-based servers as a design approach. Note that all of them use programming languages bereft of control flow abstractions.

Real programmers know Y.

This is great advice for people trying to learn functional programming, which is all the rage these days. Many people come to a language like Scheme, find it lacking for the problems they have been solving in Python, C, and Java, and assume something is wrong with Scheme, or with functional programming more generally. It's easy to forget that the languages you know and the problems you solve are usually connected in a variety of ways, not the least of which for university students is that we teach them to solve problems most easily solved by the languages we teach them!

If you keep working on the problems your current language solves well, then you miss out on the strengths of something different. You need to stretch not only your skill set but also your imagination.

If you buy this argument, schedule some time to work through Braithwaite's derivation of the Y combinator in Ruby. It will, as my daughter likes to say, make your brain hurt. That's a good thing. Just like with physical exercise, sometimes we need to stretch our minds, and make them hurt a bit, on the way to making them stronger.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 10, 2011 2:56 PM

Making Your Own Mistakes

Earlier today, @johndmitchell retweeted a link from Tara "Miss Rogue" Hunt:

RT @missrogue: My presentation from this morning at #ennovation: The 10 Mistakes I've made...so you don't have to http://t.co/QE0DzF9tw

Danger ahead!

I liked the title and so followed the link to the slide deck. The talk includes a few good quotes and communicates some solid experience on how to fail as a start-up, and also how to succeed. I was glad to have read it.

The title notwithstanding, though, be prepared. Other people making mistakes will not -- cannot -- save you from making the same mistakes. You'll have to make them yourself.

There are certain kinds of mistakes that don't need to be made again, but that happens only when we eliminate an entire class of problems. As a programmer, I mostly don't have to re-make the mistakes my forebears made when writing code in assembly. They learned from their mistakes and made tools that shield me from the problems they faced. Now, I write code in a higher-level language and let the tools implement the right solution for me.

Of course, that means I face a new class of problems, or an old class of problems in a new way. So I make new kinds of mistakes. In the case of assembly and compilers, I am more comfortable working at that level and am thus glad to have been shielded from those old error traps, by the pioneers who preceded me.

Starting a start-up isn't the sort of problem we are able to bypass so easily. Collectively, we aren't good at all at reliably creating successful start-ups. Because the challenges involve other people and economic forces, they will likely remain with us well into the future.

Warning, proceed at your own risk!

Even though Hunt and other people who have tried and failed at start-ups can't save us from making these mistakes, they still do us a service when they reflect on their experiences and share with us. They put up guideposts that say "Danger ahead!" and "Don't go there!"

Why isn't that enough to save us? We may miss the signs in the noise of our world and walk into the thicket on our own. We may see the warning sign, think "My situation is different...", and proceed anyway. We may heed their advice, do everything we can to avoid the pitfall, and fail anyway. Perhaps we misunderstood the signs. Perhaps we aren't smart enough yet to solve the problem. Perhaps no one is, yet. Sometimes, we won't be until we have made the mistake once ourselves -- or thrice.

Despite this, it is valuable to read about our forebears' experiences. Perhaps we will recognize the problem part of the way in and realize that we need to turn around before going any farther. Knowing other people's experiences can leave us better prepared not to go too far down into the abyss. A mistake partially made is often better than a mistake made all the way.

If nothing else, we fail and are better able to recognize our mistake after we have made it. Other people's experience can help us put our own mistake into context. We may be able to understand the problem and solution better by bringing those other experiences to bear on our own experience.

While I know that we have to make mistakes to learn, I don't romanticize failure. We should take reasonable measures to avoid problems and to recognize them as soon as possible. That's the ultimate value in learning what Hunt and other people can teach us.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

October 06, 2011 3:21 PM

Programming != Teaching

A few weeks ago I wrote a few entries that made connections to Roger Rosenblatt's Unless It Moves the Human Heart: The Craft and Art of Writing. As I am prone to doing, I found a lot of connections between writing, as described by Rosenblatt, and programming. I also saw connections between teaching of writers and teaching of programmers. The most recent entry in that series highlighted how teachers want their students to learn how to think the same way, not how to write the same way.

Rosenblatt also occasionally explores similarities between writing and teaching. Toward the end of the book, he points out a very important difference between the two:

Wouldn't it be nice if you knew that your teaching had shape and unity, and that when a semester came to an end, you could see that every individual thing you said had coalesced into one overarching statement? But who knows? I liken teaching to writing, but the two enterprises diverge here, because any perception of a grand scheme depends on what the students pick up. You may intend a lovely consistency in what you're tossing them, but they still have to catch it. In fact, I do see unity to my teaching. What they see, I have no clue. It probably doesn't matter if they accept the parts without the whole. A few things are learned, and my wish for more may be plain vanity.

Novelists, poets, and essayists can achieve closure and create a particular whole. Their raw material is words and ideas, which the writer can make dance. The writer can have an overarching statement in mind, and making it real is just a matter of hard work and time.

Programmers have that sort of control over their raw material, too. As a programmer, I relish taking on the challenge of a hard problem and creating a solution that meets the needs of a person. If I have a goal for a program, I can almost always make it happen. I like that.

Teachers may have a grand scheme in mind, too, but they have no reliable way of making sure that their scheme comes true. Their raw material consists not only of words and ideas. Indeed, their most important raw material, their most unpredictable raw material, is students. Try as they might, teachers don't control what students do, learn, or think.

I am acutely aware of this thought as we wrap up the first half of our programming languages course. I have introduced students to functional programming and recursive programming techniques. I have a pretty good idea what I hope they know and can do now, but that scheme remains in my head.

Rosenblatt is right. It is vanity for us teachers to expect students to learn exactly what we want for them. It's okay if they don't. Our job is to do what we can to help them grow. After that, we have to step aside and let them run.

Students will create their own wholes. They will assemble their wholes from the parts they catch from us, but also from parts they catch everywhere else. This is a good thing, because the world has a lot more to teach than I can teach them on my own. Recognizing this makes it a lot easier for me as a teacher to do the best I can to help them grow and then get out of their way.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

October 04, 2011 4:43 PM

Programming in Context: Digital History

Last April I mentioned The Programming Historian, a textbook aimed at a specific set of non-programmers who want or need to learn how to program in order to do their job in the digital age. I was browsing through the textbook today and came across a paragraph that applies to more than just historians or so-called applied programmers:

Many books about programming fall into one of two categories: (1) books about particular programming languages, and (2) books about computer science that demonstrate abstract ideas using a particular programming language. When you're first getting started, it's easy to lose patience with both of these kinds of books. On the one hand, a systematic tour of the features of a given language and the style(s) of programming that it supports can seem rather remote from the tasks that you'd like to accomplish. On the other hand, you may find it hard to see how the abstractions of computer science are related to your specific application.

I don't think this feeling is limited to people with a specific job to do, like historians or economists. Students who come to the university intending to major in Computer Science lose patience with many of our CS1 textbooks and CS1 courses for the very same reasons. Focusing too much on all the features of a language is overkill when you are just trying to make something work. The abstractions we throw at them don't have a home in their understanding of programming or CS yet and so seem, well, too abstract.

Writing for the aspiring applied programmer has an advantage over writing for CS1: your readers have something specific they want to do, and they know just what it is. Turkel and MacEachern can teach a subset of several tools, including Python and Javascript, focused on what historians want to be able to do. Greg Wilson and his colleagues can teach what scientists want and need to know, even if the book is pitched more broadly.

In CS1, your students don't have a specific task in mind and do eventually need to take a systematic tour of a language's features and to learn a programming style or three. They do, eventually, need to learn a set of abstractions and make sense of them in the context of several languages. But when they start, they are much like any other person learning to program: they would like to do something that matters. The problems we ask them to solve matter.

Guzdial, Ericson, and their colleagues have used media computation as a context in which to learn how to program, with the idea that many students, CS majors and non-majors alike, can be enticed to manipulate images, sounds, and video, the raw materials out of which students' digital lives are now constructed. It's not quite the same -- students still need to be enticed, rather than starting with their own motivation -- but it's a shorter leap to caring than the run-of-the-mill CS textbook has to make.

Some faculty argue that we need a CS0 course that all students take, in which they can learn basic programming skills in a selected context before moving on to the major's first course. The context can be general enough, say, media manipulation or simple text processing on the web, that the tools students learn will be useful after the course whether they continue on or not. Students who elect to major in CS move on to take a systematic tour of a language's features, to learn about OO or FP style, and to begin learning the abstractions of the discipline.

My university used to follow this approach, back in the early and mid 1990s. Students had to take a one-year HS programming course or a one-semester programming course at the university before taking CS1. We dropped this requirement when faculty began asking, why shouldn't we put the same care into teaching low-level programming skills in CS1 as we do into teaching CS0? The new approach hasn't always been as successful as we hoped, due to the difficulty of finding contexts that motivate students as well as we want, but I think the approach is fundamentally sound. It means that CS1 may not teach all the things that it did when the course had a prerequisite.

That said, students who take one of our non-majors programming courses, in C and Visual Basic, and then decide to major in CS perform better on average in CS1 than students who come in fresh. We have work to do.

Finally, one sentence from The Programming Historian made me smile. It embodies the "programming for all" theme that permeates this blog:

Programming is for digital historians what sketching is for artists or architects: a mode of creative expression and a means of exploration.

I once said that being able to program is like having superhuman strength. But it is both more mundane and more magical than that. For digital historians, being able to program means being able to do the mundane, everyday tasks of manipulating text. It also gives digital historians a way to express themselves creatively and to explore ideas in ways hard to imagine otherwise.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 03, 2011 8:11 AM

What to Build and How to Build

Update: This entry originally appeared on September 29. I bungled my blog directory and lost two posts, and the simplest way to get the content back on-line is to repost.

I remember back in the late 1990s and early 2000s when patterns were still a hot topic in the software world, and many pattern writers trying to make the conceptual move to pattern languages. It was a fun time to talk about software design. At some point, there was a long and illuminating discussion on the patterns mailing list about whether patterns should describe what to build or how to build. Richard Gabriel and Ron Goldman -- creators of the marvelous essay-as-performance-art Mob Software -- patiently taught the community that the ultimate goal is what. Of course, if we move to a higher level of abstraction, a what-pattern becomes a how-pattern. But the most valuable pattern languages teach us what to build and when, with some freedom in the how.

This is the real challenge that novice programmers face, in courses like CS1 or in self-education: figuring out what to build. It is easy enough for many students to "get" the syntax of the programming language they are learning. Knowing when to use a loop, or a procedure, or a class -- that's the bigger challenge.

Our CS students are usually in the same situation even later in their studies. They are still learning what to build, even as we teach them new libraries, new languages, and new styles.

I see this a lot when students are learning to program in a functional style. Mentally, many think they are focused on the how (e.g., How do I write this in Scheme?). But when we probe deeper, we usually find that they are really struggling with what to say. We spend some time talking about the problem, and they begin to see more clearly what they are trying to accomplish. Suddenly, writing the code becomes a lot easier, if not downright easy.

This is one of the things I really respect in the How to Design Programs curriculum. Its design recipes give beginning students a detailed, complete, repeatable process for thinking about problems and about what they need in order to solve a new problem. Data, contracts, and examples are essential elements in understanding what to build. Template solutions help bridge the what and the how, but even they are, at the student's current level of abstraction, more about what than how.
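For readers who have not seen the recipe in action, here is a rough sketch of its flavor in Scheme. The function sum-of is a made-up example of mine, not one from the HtDP text, but the shape -- data definition, contract, examples, then a template filled in -- follows the recipe:

```scheme
;; Data definition: a list-of-numbers is either
;;   '(), or (cons n lon), where n is a number and lon a list-of-numbers.
;;
;; Contract:  sum-of : list-of-numbers -> number
;; Examples:  (sum-of '())      should be 0
;;            (sum-of '(1 2 3)) should be 6
;;
;; The template mirrors the data definition: one branch per case,
;; with a recursive call where the data definition is recursive.
(define sum-of
  (lambda (lon)
    (if (null? lon)
        0                          ; base case: the empty list
        (+ (car lon)               ; combine the first number...
           (sum-of (cdr lon))))))  ; ...with the sum of the rest
```

Notice how little of the final code is invention: the data definition dictates the if, the two branches, and the recursive call, leaving the student to decide only the base answer and the combining operation.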

The structural recursion patterns I use in my course are an attempt to help students think about what to build. The how usually follows directly from that. As students become fluent in their functional programming language, the how is almost incidental.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

October 03, 2011 7:20 AM

Softmax, Recursion, and Higher-Order Procedures

Update: This entry originally appeared on September 28. I bungled my blog directory and lost two posts, and the simplest way to get the content back on-line is to repost.

John Cook recently reported that he has bundled up some of his earlier writings about the soft maximum as a tech report. The soft maximum is "a smooth approximation to the maximum of two real variables":

    softmax(x, y) = log(exp(x) + exp(y))

When John posted his first blog entry about the softmax, I grabbed the idea and made it a homework problem for my students, who were writing their first Scheme procedures. I gave them a link to John's page, so they had access to this basic formula as well as a Python implementation of it. That was fine with me, because I was simply trying to help students become more comfortable using Scheme's unusual syntax:

    (define softmax
      (lambda (x y)
        (log (+ (exp x)
                (exp y)))))

On the next assignment, I asked students to generalize the definition of softmax to more than two variables. This gave them an opportunity to write a variable arity procedure in Scheme. At that point, they had seen only a couple simple examples of variable arity, such as this implementation of addition using a binary + operator:

    (define plus              ;; notice: no parentheses around
      (lambda args            ;; the args parameter in lambda
        (if (null? args)
            0
            (+ (car args) (apply plus (cdr args))) )))

Many students followed this pattern directly for softmax:

    (define softmax-var
      (lambda args
        (if (null? (cdr args))
            (car args)
            (softmax (car args)
                     (apply softmax-var (cdr args))))))

Some of their friends tried a different approach. They saw that they could use higher-order procedures to solve the problem -- without explicitly using recursion:

    (define softmax-var
      (lambda args
        (log (apply + (map exp args)))))

When students saw each other's solutions, they wondered -- as students often do -- which one is correct?

John's original blog post on the softmax tells us that the function generalizes as we might expect:

    softmax(x1, x2, ..., xn) = log(exp(x1) + exp(x2) + ... + exp(xn))

Not many students had looked back for that formula, I think, but we can see that it matches the higher-order softmax almost perfectly. (map exp args) constructs a list of the exp(xi) values. (apply + ...) adds them up. (log ...) produces the final answer.

What about the recursive solution? If we look at how its recursive calls unfold, we see that this procedure computes:

    softmax(x1, softmax(x2, ..., softmax(xn-1, xn)...))

This is an interesting take on the idea of a soft maximum, but it is not what John's generalized definition says, nor is it particularly faithful to the original 2-argument function.

How might we roll our own recursive solution that computes the generalized function faithfully? The key is to realize that the function needs to iterate not over the maximizing behavior but the summing behavior. So we might write:

    (define softmax-var
      (lambda args
        (log (accumulate-exps args))))

    (define accumulate-exps
      (lambda (args)
        (if (null? args)
            0
            (+ (exp (car args))
               (accumulate-exps (cdr args))))))

This solution turns softmax-var into an interface procedure and then uses structural recursion over a flat list of arguments. One advantage of using an interface procedure is that the recursive procedure accumulate-exps no longer has to deal with variable arity, as it receives a list of arguments.

It was remarkable to me and some of my students just how close the answers produced by the two student implementations of softmax were, given how different the underlying behaviors are. Often, the answers were identical. When different, they differed only in the 12th or 15th decimal digit. As several blog readers pointed out, softmax is associative, so the two solutions are identical mathematically. The differences in the values of the functions result from the vagaries of floating-point precision.
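That associativity is easy to confirm at the REPL. A quick check using the two-argument softmax from above; the 1e-12 tolerance is an arbitrary choice of mine, comfortably looser than the rounding error we actually observe:

```scheme
(define softmax
  (lambda (x y)
    (log (+ (exp x) (exp y)))))

;; Group to the left, then group to the right. Both compute
;; log(e^1 + e^2 + e^3), approximately 3.4076; they differ only
;; by floating-point rounding.
(define left  (softmax (softmax 1.0 2.0) 3.0))
(define right (softmax 1.0 (softmax 2.0 3.0)))

(< (abs (- left right)) 1e-12)    ; => #t
```

The same check on the two student procedures explains the behavior we saw: any grouping of the calls sums the same exponentials, so the answers can differ only in the last few bits.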

The programmer in me left the exercise impressed by the smoothness of the soft maximum. The idea is resilient across multiple implementations, which makes it seem all the more useful to me.

More important, though, this programming exercise led to several interesting discussions with students about programming techniques, higher-order procedures, and the importance of implementing solutions that are faithful to the problem domain. The teacher in me left the exercise pleased.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

September 27, 2011 8:08 PM

Following Through: A Contest Rewarding Epic Failure

As promised back in May, I am offering a prize for the best failed idea of the semester in my programming languages course. We are now about halfway through the second unit of the course, on recursive programming techniques. Students finally have tools powerful enough to solve interesting problems, but also tools powerful enough to go spectacularly astray. I want to encourage students to trust their ideas enough to fail, because that's how they will learn to trust their ideas enough to succeed. Rather than fear those inevitable failures, we will have fun with them.

Feel free to check out the page describing the contest and how to participate. As always, I welcome feedback from my readers.

In particular, I am still looking for a catchy, evocative, culturally relevant, funny name for the prize. If this were the late 1980s or early 1990s, I might name it for the film Ishtar, and everyone would know what I meant. For now, I have opted for a more recent high-profile film flop -- a spectacular loser on the balance sheet -- but it doesn't have the same pop for me. I'm not as hip as I used to be and can use some help.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 24, 2011 8:04 PM

Much Code To Study, Learning with Students

I mentioned last time that I've been spending some time with Norvig's work on Strachey's checkers program in CPL. This is fun stuff that can be used in my programming languages course. But it isn't the only new stuff I've been learning. When you work with students on research projects and independent studies, opportunities to learn await at every turn.

A grad student is taking the undergrad programming languages course and so has to do some extra projects to earn his grad credit. He is a lover of Ruby and has been looking at a couple of Scheme interpreters implemented in Ruby, Heist and Bus-Scheme. I'm not sure where this will lead yet, but that is part of the exercise. The undergrad who faced the "refactor or rewrite?" decision a few weeks ago teaches me something new every week, not only through his experiences writing a language processor but also about his program's source and target languages, Photoshop and HTML/CSS.

Another grad student is working on a web application and teaching me other things about Javascript. Now we are expanding into one tool I've long wanted to study in greater detail, Processing.js and perhaps into another I only just learned of from Dave Humphrey, a beautiful little data presentation library called D3.

And as if that weren't enough, someone tweets that Avdi Grimm is sharing his code and notes as he implements Smalltalk Best Practice Patterns in Ruby. Awesome. This Avdi guy is rapidly becoming one of my heroes.

All of these projects are good news. One of the great advantages of working at a university is working with students and learning along with them. Right now, I have a lot on my plate. It's daunting but fun.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

September 23, 2011 3:52 PM

Grading and Learning in the Age of Social Media

Yesterday morning, I was grading the first quiz from my programming languages course and was so surprised by the responses to the first short-answer question that I tweeted in faux despair:

Wow. We discussed a particular something every day in class for three weeks. Student quiz answers give no evidence of this.

Colleagues around the globe consoled me and commiserated. But I forgot that I am also followed by several of my students, and their reaction was more like... panic. Even though I soon followed up with a tweet saying that their Scheme code made me happy, they were alarmed by that first tweet.

It's a new world. I never used to grade with my students watching or to think out loud while I was grading. Twitter changes that, unless I change me and stop using Twitter. On balance, I think I'm still better off. When I got to class, students all had smiles on their faces, some more nervous than others. We chatted. I did my best to calm them. We made a good start on the day with them all focused on the course.

We have reached the end of Week 5, one-third of the way through the course. Despite the alarm I set off in students' minds, they have performed on par with students in recent offerings over the first three homeworks and the first quiz. At this point, I am more concerned with my performance than theirs. After class yesterday, I was keenly aware that the pace of the session was out of sync with the learning curve of the material. The places where I've been slowing down aren't always the best places to slow down, and the places where I've been speeding up (whether intentionally or not) aren't always the best places to speed up. A chat with one student that afternoon cemented my impression.

Even with years of experience, teaching is hard to get right. One shortcoming of teaching a course only every third semester is that the turnaround time on improvements is so long. What I need to do is use my realization to improve the rest of this offering, first of all this unit on recursive programming.

I spent some time early this week digging into Peter Norvig's Prescient but Not Perfect, a reconsideration of Christopher Strachey's 1966 Sci Am article and in particular Strachey's CPL program to play checkers. Norvig did his usual wonderful job with the code. It's hard to find a CPL compiler these days, and has been since about 1980, so he wrote a CPL-to-Python translator, encoded and debugged Strachey's original program, and published the checkers program and a literate program essay that explains his work.

This is, of course, a great topic for a programming languages course. Norvig exemplifies the attitude I encourage in my students on Day 1: if you need a language processor, write one. It's just another program. I am not sure yet when I will bring this topic into my course; perhaps when we first talk in detail about interpreters, or perhaps when we talk about parsing and parser generators. (Norvig uses YAPPS, a Python parser generator, to convert a representation of CPL's grammar into a CPL parser written in Python.)

There are some days when I wish I had designed all of my course sessions to be 60 minutes instead of 75 minutes, so that we had more luft for opportunistic topics like this one, or that I could design a free day into the course every 2-3 weeks for the same purpose. Alas, the CS curriculum depends on this course to expose students to a number of important ideas and practices, and the learning curve for some of the material is non-trivial. I'll do my best to provide at least cursory coverage of Norvig's article and program. I hope that a few students will be drawn to his approach to the world -- the computer scientist's mindset.

If nothing else, working through his paper and code excite me, and that will leak over into the rest of my work.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

September 16, 2011 4:13 PM

The Real Technological Revolution in the Classroom Hasn't Happened Yet

Earlier this month, the New York Times ran a long article exploring the topic of technology in K-12 classrooms, in particular the lack of evidence that the mad rush to create the "classroom of future" is having any effect on student learning. Standardized test scores are stagnant in most places, and in schools showing improvements, research has not been able to separate the effect of using technology from the effect of extra teacher training.

We should not be surprised. It is unlikely that simply using a new technology will have any effect on student learning. If we teach the same way and have students do the same things, then we should expect student learning to be about the same whether they are writing on paper, using a typewriter, or typing on a computer keyboard. There are certainly some very cool things one can do with, say, Keynote, and I think having those features available can augment a student's experience. But I doubt that those features can have a substantial effect on learning. Technology is like a small booster rocket for students who are already learning a lot and like training wheels for those who are not.

As I read that article, one fact screamed out at me. Computers are being used in classrooms everywhere for almost everything. Everything, that is, except the one thing that makes them useful at all: computer programming.

After reading the Times piece, Clive Thompson pulled the critical question out of it and asked, What can computers teach that textbooks and paper can't? Mark Guzdial has written thoughtfully on this topic in the past as well. Thompson offers two answers: teaching complexity and seeing patterns. (His third answer is a meta-answer: more effective communication between teacher and student.) We can improve both teaching complexity and seeing patterns by using the right software, but -- thankfully! -- Thompson points out that we can do even better if we teach kids even a little computer programming.

Writing a little code is a great vehicle for exploring a complex problem and trying to create and communicate understanding. Using or writing a program to visualize data and relationships among them is, too.

Of course, I don't have hard evidence for these claims, either. But it is human nature to want to tinker, to hack, to create. If I am going to go out on a limb without data, I'd rather do it with students creating tools that help them understand their world than with students mostly consuming media using canned tools. And I'm convinced that we can expand our students' minds more effectively by showing them how to program than by teaching most of what we teach in K-12. Programming can be tedious, like many learning tasks, and students need to learn how to work through tedium to deeper understanding. But programming offers rewards in a way and on a scale that, say, the odd problems at the end of a chapter in an algebra textbook can never do by themselves.

Mark Surman wrote a very nice blog entry this week, Mozilla as teacher, expressing a corporate vision for educating the web-using public that puts technology in context:

... people who make stuff on the internet are better creators and better online citizens if they know at least a little bit about the web's basic building blocks.

As I've written before, we do future teachers, journalists, artists, filmmakers, scientists, citizens, and curious kids a disservice if we do not teach them a little bit of code. Without this knowledge, they face unnecessary limits on their ability to write, create, and think. They deserve the ability to tinker, to hack, to trick out their digital worlds. The rest of us often benefit when they do, because some of the things they create make all of our lives better.

(And increasingly, the digital world intersects with the physical world in surprising ways!)

I will go a step further than Surman's claim. I think that people who create and who are engaged with the world they inhabit have an opportunity to be better citizens, period. They will be more able, more willing participants in civic life when they understand more clearly their connections to and dependence on the wider society around them. By giving them the tools they need to think more deeply and to create more broadly, education can enable them to participate in the economy of the world and improve all our lots.

I don't know if K-12 standardized test scores would get better if we taught more students programming, but I do think there would be benefits.

As I often am, I am drawn back to the vision Alan Kay has for education. We can use technology -- computers -- to teach different content in different ways, but ultimately it all comes back to new and better ways to think in a new medium. Until we make the choice to cross over into that new land, we can spend all the money we want on technology in the classroom and not change much at all.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

September 13, 2011 7:17 PM

Learning to Think the Same, Not Write the Same

We have begun to write code in my programming languages course. The last couple of sessions we have been writing higher-order procedures, and next time we begin a three-week unit learning to write recursive programs following the structural recursion patterns in my small pattern language Roundabout.

One of the challenges for students is to learn how to use the patterns without feeling a need to mimic my style perfectly. Some students, especially the better ones, will chafe if I try to make them write code exactly as I do. They are already good programmers in other styles, and they can become good functional programmers without aping me. Many will see the patterns as a restriction on how they think, though in fact the patterns are a source of great freedom. They do not force you to write code in a particular way; they give you tools for thinking about problems as you program.
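To make the idea of structural recursion concrete, here is a minimal sketch -- in Python rather than the Scheme we use in class, and with a function and example of my own rather than anything taken from Roundabout. The point of the pattern is that the shape of the code follows the shape of the data: one case for the empty list, one case for a first element plus the rest.

```python
def list_sum(xs):
    """Sum a list by recursing on its structure."""
    if not xs:                        # base case: the empty list
        return 0
    return xs[0] + list_sum(xs[1:])   # first element, plus recursion on the rest
```

Once a student sees this shape, the same skeleton carries over to counting, mapping, filtering, and most other list-processing tasks -- which is the freedom the patterns offer.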

Again, there is something for us to learn from our writing brethren. Consider a writing course like the one Roger Rosenblatt describes in his book, Unless It Moves the Human Heart: The Craft and Art of Writing, which I have referred to several times, most recently in The Summer Smalltalk Taught Me OOP. No student in Rosenblatt's course wants him to expect them to write just like he does by the end. They are in the course to learn elements of craft, to share and critique work, and to get advice from someone with extensive experience as a writer. Rosenblatt is aware of this, too:

Wordsworth quoted Coleridge as saying that every poet must create the taste by which he is relished. The same is true of teachers. I really don't want my students to write as I do, but I want them to think about writing as I do. In them I am consciously creating a certain taste for what I believe constitutes skillful and effective writing.

The course is as much about learning how to think about writing as it is about learning how to write. That's what a good pattern language can do for us: help us learn to think about a class of problems or a class of solutions.

I think this happens whether a teacher intends it consciously or not. Students learn how to think and do by observing their teachers thinking and doing. A programming course is usually better if the teacher designs the course deliberately, with careful consideration of the patterns to demonstrate and the order in which students experience them in problems and solutions.

In the end, I want my students to think about writing recursive programs as I do, because experience as both a programmer and as a teacher tells me that this way of thinking will help them become good functional programmers as soon as possible. But I do not want them to write exactly as I do; they need to find their own style, their own taste.

This is yet another example of the tremendous power teachers wield every time they set foot in a classroom. As a student, I was always most fond of the teachers who wielded it carefully and deliberately. So many of them live on in me, in how I think. As a teacher, I have come to respect this power in my own hands and try to wield it respectfully with my students.

P.S. For what it's worth, Coleridge is one of my favorite poets!


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

September 12, 2011 12:14 PM

You Keep Using That Word....

Inigo Montoya, from The Princess Bride

Elisabeth Hendrickson recently posted a good short piece, Testing is a Whole Team Activity. She gives examples of some of the comments she hears frequently which indicate a misunderstanding about the relationship between coding and testing. My favorite example was the first:

"We're implementing stories up to the last minute, so we can never finish testing within the sprint."

Maybe I like this one so much because I hear it from students so often, especially on code that they find challenging and are having a hard time finishing.

"If we took the time to test our code, we would not get done on time."

"What evidence do you have that your code works for the example cases?"

"Well, none, really, but..."

"Then how do you know you are done?"

"Done"? You keep using that word. I do not think it means what you think it means.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

September 06, 2011 9:00 PM

Two Kinds of 50% Student

I returned graded Homework 1 today at the end of my Programming Languages class. I'm still getting to know faces and names in the group, so I paid attention to each person as he took his assignment from me. Occasionally, I glanced at the grade written atop the paper as I was handing it back. One grade caught my eye, and I made eye contact with the student. The grade was 10/20. Or it could have been 12/20, or 8/20; no matter. The point is the same: not an auspicious start to the semester.

Earlier, during class, I had noticed that this student was not taking notes. Sometimes, that's the sign of a really good student. He looked confident. Halfway through class, I gave a small exercise for students to work on: box-and-pointer diagrams for Scheme pairs and lists. Still nothing from the student. Perhaps his head was roiling in furious thought, but there was no visible evidence of thought. A calm sea.

When I saw the grade on his assignment, I leapt reflexively to a conclusion. This guy will be an early casualty in the course. He may drop early. He may limp along, scoring 50%, plus or minus a few points, all semester, and end up with a D or an F or, if he is truly fortunate, the C- he needs to count it toward his program of study.

That may seem like a harsh sort of profiling, but I've seen this situation many times, especially in this course. If you don't take Scheme seriously, if you don't take functional programming seriously, ideas and patterns and challenges can snowball quickly. I have a friend at another university who speaks of a colleague as telling his students on Day One of every course, There are only two ways to take my course: seriously, and again. That's how I feel about most of my courses, but especially this one. We can have a lot of fun, when students are present and engaged, seeking mastery. If not, well, it can be a long semester.

Don't think that my reaction means I will shirk my duty and let this student fail. When students struggle early, I try to engage them one-on-one, to see if we can figure out what the impediment is and how we might get them over it. Such efforts succeed too rarely for my tastes. Sadly, the behavior I see in class and on early homework is usually a window into their approach to the course.

So, I was sad for the student. I'll do what I can, but I felt guilty, like a swami looking into his crystal ball and seeing clearly a future that already is.

After class, though, I was sitting in my office and realized that all is not lost. I remembered that there is a second type of student who can start the course with this performance profile. That sort of student is rarer, but common enough to give me hope.

I did not actually think of a second set of students. I thought of a particular student from a recent graduating class. He started this course slowly. The profile wasn't quite the same, because this student took notes in class. Still, he seemed generally disengaged, and his homework scores were mediocre at best. Then, he did poorly on Quiz 1.

But at that point I noticed a change in his demeanor. He looked alive in class. He asked questions after class. He gave evidence of having read the assigned readings and of starting sooner on programs. His next homework was much better. His Quiz 2 was a complete reversal of the first.

This is the second kind of student: poor performance on the early homeworks and quizzes is a wake-up call. The student realizes, "I can't do what I've always done". He sees the initial grades in the course not as a predictor of an immutable future but as a trigger to work differently, to work harder. The student changes his behavior, and as a result changes his results.

The particular student I remembered went on to excel in Programming Languages. He grasped many of the course's deeper concepts. Next he took the compilers course and was part of a strong team.

Sometimes, students are caught off guard by the foreign feel of the ideas in Scheme and functional programming, by the steep learning curve. The ones in the second group, the ones who usually succeed in the course after all, pay attention to early data and feed them back into their behavior.

So, there is hope for the student whose grade I glimpsed this afternoon. That strengthens my resolve to hang in there, offer help, and hope that I can reach him soon enough. I hope he is a Group 2 guy, or can be coaxed into crossing the border from Group 1 to Group 2.

The future is malleable, not frozen.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 04, 2011 7:28 PM

In Praise of Usefulness

In two recent entries [ 1 | 2 ], I mentioned that I had been recently reading Roger Rosenblatt's Unless It Moves the Human Heart: The Craft and Art of Writing. Many of you indulge me my fascination with writers talking about writing, and I often see parallels between what writers of code and writers of prose and poetry do. That Rosenblatt also connects writing to teaching, another significant theme of my blog, only makes this book more stimulating to me.

"Unless it moves the human heart" is the sort of thing writers say about their calling, but not something many programmers say. (The book title quotes Aussie poet A. D. Hope.) It is clearly at the heart of Rosenblatt's views of writing and teaching. But in his closing chapter, Rosenblatt includes a letter written to his students as a postscript on his course that speaks to a desire most programmers have for their life's work: usefulness. To be great, he says, your writing must be useful to the world. The fiction writer's sense of utility may differ from the programmer's, but at one level the two share an honorable motive.

This paragraph grabbed me as advice as important for us programmers as it is for creative writers. (Which software people do you think of as you read it?)

How can you know what is useful to the world? The world will not tell you. The world will merely let you know what it wants, which changes from moment to moment, and is nearly always cockeyed. You cannot allow yourself to be directed by its tastes. When a writer wonders, "Will it sell?" he is lost, not because he is looking to make an extra buck or two, but rather because, by dint of asking the question in the first place, he has oriented himself to the expectations of others. The world is not a focus group. The world is an appetite waiting to be defined. The greatest love you can show it is to create what it needs, which means you must know that yourself.

What a brilliant sentence: The world is an appetite waiting to be defined. I don't think Ward Cunningham went around asking people if they needed wiki. He built it and gave it to them, and when they saw it, their appetite took form. It is indeed a great form of love to create what the world needs, whether the people know it yet or not.

(I imagine that at least a few of you were thinking of Steve Jobs and the vision that gave us the Mac, iTunes, and the iPad. I was too, though Ward has always been my hero when it comes to making useful things I had not anticipated.)

Rosenblatt tells his students that, to write great stories and poems and essays, they need to know the world well and deeply. This is also sound advice to programmers, especially those who want to start the next great company or revolutionize their current employers from the inside out. This is another good reason to read, study, and think broadly. To know the world outside of one's Ruby interpreter, outside the JavaScript spec and the HTML5 standard, one must live in it and think about it.

It seems fitting on this Labor Day weekend for us to think about all the people who make the world we live in and keep it running. Increasingly, those people are using -- and writing -- software to give us useful things.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 27, 2011 9:36 AM

Extravagant Ideas

One of the students in my just-started Programming Languages course recently mentioned that he has started a company, Glass Cannon Games, to write games for the Box and Android platforms. He is working out of my university's student incubator.

Last summer, I wrote a bit about entrepreneurship and a recent student of mine, Nick Cash, who has started Book Hatchery to "help authors publish their works digitally".

Going a bit further back, I mentioned an alumnus, Wade Arnold, winning a statewide award for his company, T8 Webware. Readers of this blog most recently encountered Wade in my entry on the power of intersections.

Over the last decade, Wade has taken a few big ideas and worked hard to make them real. That's what Nick and, presumably, Ian are doing, too.

Most entrepreneurs start with big thoughts. I try to encourage students to think big thoughts, to consider an entrepreneurial career. The more ideas they have, the more options they have in careers and in life. Going to work for a big company is the right path for some, but some want more and can do their own thing -- if only they have the courage to start.

This idea matters for more than just starting start-ups. We can "think big and write small" even for the more ordinary programs we write. Sometimes we need a big idea to get us started writing code. Sometimes, we even need hubris. Every problem a novice faces can appear bigger than it is. Students who are able to think big often have more confidence. That is the confidence they need to start, and to persevere.

It is fun as a teacher to be able to encourage students to think big. As writer Roger Rosenblatt says,

One of the pleasures of teaching writing courses is that you can encourage extravagant thoughts like this in your students. These are the thoughts that will be concealed in plain and modest sentences when they write. But before that artistic reduction occurs, you want your students to think big and write small.

Many students come into our programming courses unsure, even a little afraid. Helping them free themselves to have extravagant ideas is one of the best things a teacher can do for them. Then they will be motivated to do the work they need to master syntax and idioms, patterns and styles.

A select few of them will go a step further and believe something even more audacious, that

... there's no purpose to writing programs unless you believe in significant ideas.

Those will be the students who start the Glass Cannons, the Book Hatcheries, and the T8s. We are all better off when they do.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 26, 2011 2:29 PM

Thoughts at the Start of the Semester

We have survived Week 1. This semester, I again get to teach Programming Languages, a course I love and about which I blog for a while every eighteen months or so.

I had thought I might blog as I prepped for the course, but between my knee and department duties, time was tight. I've also been slow to settle on new ideas for the course. In my blog ideas folder, I found notes for an entry debriefing the last offering of the course, from Spring 2010, and thought that might crystallize some ideas for me. Alas, the notes held nothing useful. They were just a reminder to write, which went unheeded during a May term teaching agile software development.

Yesterday I started reading a new book -- not a book related to my course, but Roger Rosenblatt's Unless It Moves the Human Heart: The Craft and Art of Writing. I love to read writers talking about writing, and this book has an even better premise: it is a writer talking about writing as he teaches writing to novices! So there is plenty of inspiration in it for me, even though it contains not a single line of Scheme or Ruby.

Rosenblatt recounts teaching a course called "Writing Everything". Most of the students in the course want to learn how to write fiction, especially short stories. Rosenblatt has them also read and write poems, in which they can concentrate on language and sounds, and essays, in which they can learn to develop ideas.

This is not the sort of course you find in CS departments. The first analogy that came to mind was a course in which students wrote, say, a process scheduler for an OS, a CRUD database app for a business, and an AI program. The breadth and diversity of apps might get the students to think about commonalities and differences in their programming practice. But a more parallel course would ask students to write a few procedural programs, object-oriented programs, and functional programs. Each programming style would let the student focus on different programming concepts and distinct elements of their craft.

I'd have a great time teaching such a "writing" course. Writing programs is fun and hard to learn, and we don't have many opportunities in a CS program to talk about the process of writing and revising code. Software engineering courses have a lot of their own content, and even courses on software design and development often deal more with new content than with programming practice. In most people's minds, there is not room for a new course like this one in the curriculum. In CS programs, we have theory and applications courses to teach. In Software Engineering programs, they seem far too serious about imitating other engineering disciplines to have room for something this soft. If only more schools would implement Richard Gabriel's idea of an MFA in software...

Despite all these impediments, I think a course in which students simply practiced programming in the large(r) and focused on their craft could be of great value to most CS grads.

I will let Rosenblatt's book inspire me and leak into my Programming Languages course where helpful. But I will keep our focus on the content and skills that our curriculum specifies for the course. By learning the functional style of programming and a lot about how programming languages work, students will get a chance to develop a few practical skills, which we hope will pay off in helping them to be better programmers all around, whether in Java, Python, Ruby, Scala, Clojure, or Ada.

One meta-message I hope to communicate both explicitly and implicitly is that programmers never stop learning, including their professor. Rosenblatt has the same goal in his writing course:

I never fail to say "we" to my students, because I do not want them to get the idea that you ever learn how to write, no matter how long you've done it.

Beyond that, perhaps the best I can do is let my students see that I am still mesmerized by the cool things we are learning. As Rosenblatt says,

Observing a teacher who is lost in the mystery of the material can be oddly seductive.

Once students are seduced, they will take care of their own motivation and their own learning. They won't be able to help themselves.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 21, 2011 9:32 AM

Overcoming a Disconnect Between Knowing and Doing

Before reading interviews with Hemingway and Jobs, I read a report of Ansel Adams's last interview. Adams was one of America's greatest photographers of the 20th century, of course, and several of his experiences seem to me to apply to important issues in software development.

It turns out that both photography and software development share a disconnect between teaching and doing:

One of the problems is the teaching of photography. In England, I was told, there's an institute in which nobody can teach photography until they've had five years' experience in the field, until they've had to make a go of it professionally.

Would you recommend that?

I think that teachers should certainly have far more experience than most of the ones I know of have had. I think very few of them have had practical experience in the world. Maybe it's an impossibility. But most of the teachers work pretty much the same way. The students differ more from each other than the teachers do.

Academics often teach without having experience making a living from the material they teach. In computer science, that may make sense for topics like discrete structures. There is a bigger burden in most of the topics we teach, which are done in industry and which evolve at a more rapid rate. New CS profs usually come out of grad school on the cutting edge of their specialties, though not necessarily on top of all the trends in industry. Those who take research-oriented positions stay on the cutting edge of their areas, but the academic pressure is often to become narrower in focus and thus farther from contemporary practice. Those who take positions at teaching schools have to work really hard to stay on top of changes out in the world. Teaching a broad variety of courses makes it difficult to stay on top of everything.

Adams's comment does not address the long-term issue, but it takes a position on the beginning of careers. If every new faculty member had five years of professional programming experience, I dare say most undergrad CS courses would be different. Some of the changes might be tied too closely to those experiences (someone who spent five years at Rockwell Collins writing SRSs and coding in Ada would learn different things from someone who spent five years writing e-commerce sites in Rails), but I think there would usually be some common experiences that would improve their courses.

When I first read Adams's comment, I was thinking about how the practitioner would learn and hone elements of craft that the inexperienced teacher didn't know. But the most important thing that most practitioners would learn is humility. It's easy to lecture rhapsodically about some abstract approach to software development when you haven't felt the pain it causes, or faced the challenges left even when it succeeds. Humility can be a useful personal characteristic in a teacher. It helps us see the student's experience more accurately and to respond by changing how and what we teach.

Short of having five years of professional experience, teachers of programming and software development need to read and study all the time -- and not just theoretical tomes, but also the work of professional developers. Our industry is blessed with great books by accomplished developers and writers, such as Design Patterns and Refactoring. The web and practitioners' conferences such as StrangeLoop are an incredible resource, too. As Fogus tweeted recently, "We've reached an exciting time in our industry: college professors influenced by Steve Yegge are holding lectures."

Other passages in the Adams interview stood out to me. When he shared his intention to become a professional photographer, instead of a concert pianist:

Some friends said, "Oh, don't give up music. ... A camera cannot express the human soul." The only argument I had for that was that maybe the camera couldn't, but I might try through the camera.

What a wonderful response. Many programmers feel this way about their code. CS attracts a lot of music students, either during their undergrad studies or after they have spent a few years in the music world. I think this is one reason: they see another way to create beauty. Good news for them: their music experience often gives them an advantage over those who don't have it. Adams believed that studying music was valuable to him as a photographer:

How has music affected your life?

Well, in music you have this absolutely necessary discipline from the very beginning. And you are constructing various shapes and controlling values. Your notes have to be accurate or else there's no use playing. There's no casual approximation.

Discipline. Creation and control. Accuracy and precision. Being close isn't good enough. That sounds a lot like programming to me!


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 18, 2011 3:52 PM

Some Thoughts on "Perlis Languages"

Alan Perlis

Fogus recently wrote a blog entry, Perlis Languages, that has traveled quickly through parts of software world. He bases his title on one of Alan Perlis's epigrams: "A language that doesn't affect the way you think about programming is not worth knowing." Long-time Knowing and Doing readers may remember this quote from my entry, Keeping Up Versus Settling Down. If you are a programmer, you should read Fogus's article, which lists a few languages he thinks might change how you think about programming.

There can be no single list of Perlis languages that works for everyone. Perlis says that a language is worth knowing if it affects how you think about programming. That depends on you: your background, your current stage of development as a programmer, and the kind of problems you work on every day. As an example, in the Java world, the rise of Scala and Clojure offered great opportunities for programmers to expand their thinking about programming. To Haskell and Scheme programmers, the opportunity was much smaller, perhaps non-existent.

The key to this epigram is that each programmer should be thinking about her knowledge and on the look out for languages that can expand her mind. For most of us, there is plenty of room for growth. We tend to work in one or two styles on a daily basis. Languages that go deep in a different style or make a new idea their basic building block can change us.

That said, some languages will show up on lots of people's Perlis lists, if only because they are so different from the languages most people know and use on a daily basis. Lisp is one of the languages that used to be a near universal in this regard. It has a strangely small and consistent syntax, with symbols as first-class objects, multiple ways to work with functions, and macros for extending the language in a seamless way. With the appearance of Clojure, more and more people are being exposed to the wonders of Lisp, so perhaps it won't be on everyone's Perlis list in 10 years. Fogus mentions Clojure only in passing; he has written one of the better early Clojure books, and he doesn't want to make a self-serving suggestion.

I won't offer my own Perlis list here. This blog often talks about languages that interest me, so readers have plenty of chances to hear my thoughts. I will add my thoughts about two of the languages Fogus mentions in his article.

Joy. *Love* it! It's one of my favorite little languages, and one that remains very different from what most programmers know. Scripting languages have put a lot of OOP and functional programming concepts before mainstream programmers across the board, but the idea of concatenative programming is still "out there" for most.

Fogus suggests the Forth programming language in this space. I cannot argue too strongly against this and have explained my own fascination with it in a previous entry. Forth is very cool. Still, I prefer Joy as a first step into the world of concatenative programming. It is clean, simple, and easy to learn. It is also easy to write a Joy interpreter in your favorite language, which I think is one of the best ways to grok a language in a deep way. As I mentioned in the Forth entry linked above, I spent a few months playing with Joy and writing an interpreter for it while on sabbatical a decade ago.
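That playful exercise can start small. As a hedged sketch of my own -- this is not Joy's actual semantics, and the handful of words here is made up for illustration -- a toy concatenative evaluator in Python shows the flavor of the style: a program is a sequence of words, and every word transforms a shared stack.

```python
# A tiny sketch of a Joy-style concatenative evaluator (illustrative only).
# Each word is a function from stack to stack; literals push themselves.

def evaluate(program, stack=None):
    """Run a whitespace-separated concatenative program against a stack."""
    stack = [] if stack is None else stack
    words = {
        "dup":  lambda s: s.append(s[-1]),                # duplicate top
        "pop":  lambda s: s.pop(),                        # discard top
        "swap": lambda s: s.extend([s.pop(), s.pop()]),   # exchange top two
        "+":    lambda s: s.append(s.pop() + s.pop()),
        "*":    lambda s: s.append(s.pop() * s.pop()),
    }
    for token in program.split():
        if token in words:
            words[token](stack)
        else:
            stack.append(int(token))
    return stack
```

With this in hand, `evaluate("3 dup *")` squares a number by duplicating it and multiplying, with no variables anywhere in sight -- which is exactly the mental shift that makes concatenative languages worth a visit.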

If you play with Joy and like it, you may find yourself wanting more than Joy offers. Then pick up Forth. It will not disappoint you.

APL. Fogus says, "I will be honest. I have never used APL and as a result find it impenetrable." Many things are incomprehensible before we try them. (A student or two will be telling me that Scheme is incomprehensible in the next few weeks...) I was fortunate to write a few programs in APL back in my undergrad programming languages course. I'm sure if I wrote a lot of APL it would become more understandable, but every time I return to the language, it is incomprehensible again to me for a while.

David Ungar told one of my favorite APL stories at OOPSLA 2003, which I mentioned in my report on his keynote address. The punchline of that story fits very well with the theme of so-called Perlis languages: "They could have done the same thing [I] did in APL -- but they didn't think of it!"

There are modern descendants of APL, but I still think there is something special about the language's unique character set. I miss the one-liners consisting of five or twenty Greek symbols, punctuation, and numbers, which accomplished unfathomable tasks such as implementing a set of accounting books.
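For readers who have never seen the style, the whole-array flavor of APL -- though none of its glorious character set -- can be loosely approximated in plain Python. The names below are my own illustration, not APL: the point is that operations apply to entire vectors at once, with no explicit loop indices.

```python
from functools import reduce
import operator

# APL's "+/V" (plus-reduce) folds an operator across a whole vector.
def fold(op, vector):
    return reduce(op, vector)

# Whole-array thinking: no index variables, just operations on vectors.
balances = [120, -45, 300, -75]
net      = fold(operator.add, balances)           # like +/ balances
debits   = [abs(b) for b in balances if b < 0]    # like selecting V[V<0]
```

Even this pale imitation hints at why an APL one-liner can do so much: each symbol names an operation over an entire array, and composition does the rest.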

I do second Fogus's reason for recommending APL despite never having programmed in it: creator Kenneth Iverson's classic text, A Programming Language. It is an unusually lucid account of the design of a programming language -- a new language, not an adaptation of a language we already know. Read it. I had the wonderful opportunity to meet Iverson when he spoke at Michigan State in the 1980s, as described in my entry on Iverson's passing.

... So, I encourage you to follow the spirit of Fogus's article, if not its letter. Find the languages that can change how you think, and learn them. I begin helping a new set of students on this path next week, when we begin our study of Scheme, functional programming, and the key ideas of programming languages and styles.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

August 13, 2011 2:47 PM

Trust People, Not Technology

Another of the interviews I've read recently was The Rolling Stone's 1994 interview with Steve Jobs, when he was still at NeXT. This interview starts slowly but gets better as it goes on. The best parts are about people, not about technology. Consider this, on the source of Jobs's optimism:

Do you still have as much faith in technology today as you did when you started out 20 years ago?

Oh, sure. It's not a faith in technology. It's faith in people.

Explain that.

Technology is nothing. What's important is that you have a faith in people, that they're basically good and smart, and if you give them tools, they'll do wonderful things with them. It's not the tools that you have faith in -- tools are just tools. They work, or they don't work. It's people you have faith in or not.

I think this is a basic attitude held by many CS teachers, about both technology and about the most important set of people we work with: students. Give them tools, and they will do wonderful things with them. Expose them to ideas -- intellectual tools -- and they will do wonderful things. This mentality drives me forward in much the same way that Jobs's optimism about the people he wants to use Apple's tools drove him.

I also think that this is an essential attitude when you work as part of a software development team. You can have all the cool build, test, development, and debugging tools money can buy, but in the end you are trusting people, not technology.

Then, on people from a different angle:

Are you uncomfortable with your status as a celebrity in Silicon Valley?

I think of it as my well-known twin brother. It's not me. Because otherwise, you go crazy. You read some negative article some idiot writes about you -- you just can't take it too personally. But then that teaches you not to take the really great ones too personally either. People like symbols, and they write about symbols.

I don't have to deal with celebrity status in Silicon Valley or anywhere else. I do get to read reviews of my work, though. Every three years, the faculty of my department evaluate my performance as part of the dean's review of my work and his decision to consider me for another term. I went through my second such review last winter. And, of course, frequent readers here have seen my comments on student assessments, which we do at the end of each semester. I wrote about assessments of my spring Intelligent Systems course back in May. Despite my twice annual therapy sessions in the form of blog entries, I have a pretty good handle on these reviews, both intellectually and emotionally. Yet there is something visceral about reading even one negative comment that never quite goes away. Guys like Jobs probably do their best not to read newspaper articles and unsolicited third-party evals.

I'll have to try the twin brother gambit next semester. My favorite lesson from Jobs's answer, though, is the second part: While you learn to steel yourself against bad reviews, you learn not to take the really great ones too personally, either. Outliers is outliers. As Kipling said, all people should count with you, but none too much. The key in these evaluations is to gather information and use it to improve your performance. And that almost always comes out of the middle of the curve. Treating raves and rants alike with equanimity keeps you humble and sane.

Ultimately, I think one's stance toward what others say comes back to the critical element in the first passage from Jobs: trust. If you trust people, then you can train yourself to accept reviews as a source of valuable information. If you don't, then the best you can do is ignore the feedback you receive; the worst is that you'll damage your psyche every time you read them. I'm fortunate to work in a department where I can trust. And, like Jobs, I have a surprising faith in my students' fairness and honesty. It took a few years to develop that trust and, once I did, teaching came to feel much safer.


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Personal, Software Development, Teaching and Learning

August 11, 2011 7:59 PM

Methods, Names, and Assumptions in Adding New Code to a Program

Since the mid-1990s, there has been a healthy conversation around refactoring, the restructuring of code to improve its internal structure without changing its external behavior. Thanks to Martin Fowler, we have a catalog of techniques for refactoring that help us restructure code safely and reliably. It is a wonderful tool for learners and practitioners alike.

When it comes to writing new code, we are not so lucky. Most of us learn to program by learning to write new code, yet we rarely learn techniques for adding code to a program in a way that is as safe, reliable, and effective as the refactorings we know and love.

You might think that adding code would be relatively simple, at least compared to restructuring a large, interconnected web of components. But how can we move with the same confidence when adding code as we do when we follow a meticulous refactoring recipe under the protection of good unit tests? Test-driven design is a help, but I have never felt like I had the same sort of support writing new code as when I refactor.

So I was quite happy a couple of months ago to run across J.B. Rainsberger's Adding Behavior with Confidence. Very, very nice! I only wish I had read it right away, when I first saw the link. Don't make the same mistake; read it now.

Rainsberger gives a four-step process that works well for him:

  1. Identify an assumption that the new behavior needs to break.
  2. Find the code that implements that assumption.
  3. Extract that code into a method whose name represents the generalisation you're about to make.
  4. Enhance the extracted method to include the generalisation.
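As a hedged illustration of those four steps -- the domain and every name here are mine, not Rainsberger's -- suppose a report assumes every amount is in dollars, and a new feature needs to break that assumption to support other currencies:

```python
# Step 1: the assumption to break -- every amount is formatted as dollars.
# The original code looked something like:
#
#     def line_item(description, amount):
#         return f"{description}: ${amount:.2f}"
#
# Step 2: the "$" literal is where that assumption lives.
# Step 3: extract that code into a method named for the generalisation.

def format_amount(amount, currency="USD"):
    """Render an amount in its currency -- Step 4, the generalisation."""
    symbols = {"USD": "$", "EUR": "€", "GBP": "£"}
    return f"{symbols[currency]}{amount:.2f}"

def line_item(description, amount, currency="USD"):
    return f"{description}: {format_amount(amount, currency)}"
```

Note that after Step 3 alone, before any new behavior exists, the program still passes its existing tests; only Step 4 introduces the new currencies, and it does so in one small, well-named place.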

I was first drawn to the idea that a key step in adding new behavior is to make a new method, procedure, or function. This is one of the basic skills of computer programming. It is one of the earliest topics covered in many CS1 courses, and it should be taught sooner in many others.

Even still, most beginners seem to fear creating new methods. Even more advanced students will regress a bit when learning a new language, especially one that works differently than the languages they know well. A function call introduces a new point of failure: parameter passing. When worried about succeeding, students generally try to minimize the number of potential points of failure.

Notice, though, that Rainsberger starts not with a brand new method, empty except for the new code to be written. This technique asks us first to factor out existing code into a new method. This breaks the job of writing the new code into two smaller steps: First refactor, relying on a well-known technique and the existing tests to provide safety. Second, add the new code. (These are Steps 3 and 4 in Rainsberger's technique.)

That isn't what really grabbed my attention first, however. The real beauty for me is that extracting a method forces us to give it a name. I think that naming gives us great power, and not just in programming. A lot of times, CS textbooks make a big deal of procedures as a form of abstraction, and they are. But that often feels so abstract... For programmers, especially beginners, we might better focus on the fact that procedures help us to name things in our programs. Names, we get.

By naming a procedure that contains a few lines of code, we get to say what the code does. Even the best factored code that uses good variable names tends to say how something is done, not what it is doing. Creating and calling a method separates the two: the call says what is being done, and the method body implements how it is done. This separation gives us new power: to refactor the code in other ways, certainly. Rainsberger reminds us that it also gives us power to add code more reliably!

"How can I add code to a program? Write a new function." This is an unsurprising, unhelpful answer most of the time, especially for novices who just see this as begging the question. "Okay, but what do I do then?" Rainsberger makes it a helpful answer, if a bit surprising. But he also puts it in a context with more support, what to do before we start writing the new code.

Creating and naming procedures was the strongest take-home point for me when I first read this article. As the ideas steeped in my mind for a few days, I began to have a greater appreciation for Rainsberger's focus on assumptions. Novice thinkers have trouble with assumptions. This is true whether they are learning to program, learning to read and analyze literature, or learning to understand and argue public policy issues. They have a hard time seeing assumptions, both the ones they make and the ones made by other writers. When the assumptions are pointed out, they are often unsure what to do with them, and are tempted to skip right over them. Assumptions are easy to ignore sometimes, because they are implicit and thus easy to lose track of when deep in an argument.

Learning to understand and reason about assumptions is another important step on the way to mature thinking. In CS courses, we often introduce the idea of preconditions and postconditions in Data Structures. (Students also see them in a discrete structures course, but there they tend to be presented as mathematical tools. Many students dismiss their value out of hand). Writing pre- and postconditions for a method is a way to make assumptions in your program explicit. Unfortunately, most beginners don't yet see the value in writing them. They feel like an extra, unnecessary step in a process dominated by the uncertainty they feel about their code. Assuring them that these invariants help is usually like pushing a rock up a hill. Tomorrow, you get to do it again.
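One lightweight way to make such assumptions explicit is to state pre- and postconditions as assertions right in the code. This sketch is my own, not from any particular textbook:

```python
import heapq

def pop_min(heap):
    """Remove and return the smallest element of a non-empty min-heap (a list)."""
    # Precondition: the caller must hand us a non-empty structure.
    assert len(heap) > 0, "precondition violated: heap is empty"
    smallest = heapq.heappop(heap)
    # Postcondition: every remaining element is at least as large as the result.
    assert all(smallest <= x for x in heap), "postcondition violated"
    return smallest
```

A student who writes these two assertions has named the assumptions her code makes -- and gets an immediate, concrete payoff when a violated assumption fails loudly at the boundary rather than silently three calls later.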

One thing I like about Rainsberger's article is that it puts assumptions into the context of a larger process aimed at helping us write code more safely. Mathematical reasoning about code does that, too, but again, students often see it as something apart from the task of programming. Rainsberger's approach is undeniably about code. This technique may encourage programmers to begin thinking about assumptions sooner, more often, and more seriously.

As I said, I haven't seen many articles or books talk about adding code to a program in quite this way. Back in January, "Uncle Bob" Martin wrote an article in the same spirit as this, called The Transformation Priority Premise. It offers a grander vision, a speculative framework for all additions to code. If you know Uncle Bob's teachings about TDD, this article will seem familiar; it fits quite nicely with the mentality he encourages when using tests to drive the growth of a program. While his article is more speculative, it seems worthy of more thought. It encourages the tiniest of steps as each new test provokes new code in our program. Unfortunately, it takes such small steps that I fear I'd have trouble getting my students, especially the novices, to give it a fair try. I have a hard enough time getting most students to grok the value of TDD, even my seniors!

I have similar concerns about Rainsberger's technique, but his pragmatism and unabashed focus on code gives me hope that it may be useful in teaching students how to add functionality to their programs.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

August 09, 2011 8:35 PM

(No) Degree Required?

I ran across an ad today for a part-time position to help promote a local private school. The ad ended, "Marketing degree required."

As surprising as it may seem for a university professor, my first thought was, "Why require a university degree? What you need is a person with interesting ideas and a plan."

I quickly realized that I was being a little rash. There is certainly value in holding a degree, for knowledge and practices learned while completing it. Maybe I was being unfair to marketing in particular? Having studied accounting as well as CS as an undergrad, I took a little marketing, too, and wasn't all that impressed. But that is surely the assessment of someone who is mostly ignorant about the deep content of that discipline. (I do, however, know enough to think that the biggest part of a marketing degree ought to be based in psychology, both of individuals and groups.)

Would I have reacted the same way if the ad had said, "Computer Science degree required"? Actually, I think so. Over the last few years, I have grown increasingly cynical about the almost unthinking use of university degrees as credentials.

As department head and as teacher, I frequently run into non-traditional students who are coming back from the tech industry to earn the CS degrees they don't have but need to advance in their careers, or even to switch jobs or companies in lateral moves. They are far more experienced than our new graduates, and often more ready for any job than our traditional students. A degree can, in fact, be useful for most of these students. They are missing the CS theory that lies at the foundation of the discipline, and often they lack the overarching perspective that ties it all together. An undergrad degree in CS can give them that and help them appreciate the knowledge they already have even more.

But should they really need an undergraduate degree to get a programming job after 10, 15, even 25 years in the field? In the end, it's about what you can do, not what you know. Some of these folks can do plenty -- and know plenty, to boot. They just don't have a degree.

I figure that, all other things being equal, our grads ought to have a big advantage when applying for a job in our field, whether the job requires a degree or not. Unless a job demands a skill we can't or won't give them, say, x years experience in a particular industry language, then our grads should be ready to tackle almost any job in the industry with a solid background and an ability to learn new languages, skills, and practices quickly and well.

If not, then we in the university are doing something wrong.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

July 29, 2011 8:56 PM

A Teacher Learns from Coaches -- Run to the Roar


For what is genius, I ask you,
but the capacity to be obsessed?

Gustav Detter, a 2009 Trinity College squash player

One thing about recovering from knee surgery: it gives you lots of time to read. In between bouts of pain, therapy, and sleep, I have been reading newspapers, magazines, and a few books lying around the house, including the rest of a Dave Barry book and the excellent but flawed law school memoir One L. Right now I am enjoying immensely re-reading Isaac Asimov's Foundation trilogy. (Psychohistory -- what a great computational challenge!)

The most unusual book I've read thus far has been Run to the Roar. This is not great literature, but it is an enjoyable read. It draws its power to attract readers from perhaps the most remarkable sports streak in history at any level: the men's squash team at Trinity College has won thirteen consecutive national titles and has not lost a single match during the streak. The thread that ties the book together as a story is the tale of the eleventh national championship match, in 2009 against Princeton University. This match is considered by many to be the greatest collegiate squash match of all time, won 5-4 by Trinity in see-saw fashion. Six of the nine matches went the full five games, with three of those requiring comebacks from 0-2 down, and only one match ended in straights. Two great teams, eighteen great players, battling to the last point. In the end, Trinity survived as much as it won.

I didn't know much about squash before the match, though I used to play a little racquetball. Fortunately, the book told me enough about the game and its history that I could begin to appreciate the drama of the match and the impressive nature of the streak. Unbelievable story, really.

But the book is also about its co-author, Trinity coach Paul Assaiante, both his life and his coaching methods. The book's subtitle is "Coaching to Overcome Fear", which captures in one phrase Assaiante's approach as well as any could. He works to help his players play in the moment, with no thoughts of external concerns weighing on their minds; enjoying the game they love and the privilege they have to test their preparation and efforts on the court in battle.

Assaiante views himself as a teacher, which makes what he says and the way he says it interesting to the teacher in me. There were many passages that struck a chord with me, whether as an "I have that pattern" moment or as an idea that I might try in my classroom. In the end, I saved two passages for more thought.

The first is the passage that leads off this entry. Assaiante attributed it to Steven Millhauser. I had never heard the quote, so I googled it. I learned that Millhauser is an award-winning author. Most hits took me to pages with the quote as the start of a longer passage:

For what is genius, I ask you, but the capacity to be obsessed? Every normal child has that capacity; we have all been geniuses, you and I; but sooner or later it is beaten out of us, the glory fades, and by the age of seven most of us are nothing but wretched little adults.

What a marvelous pair of sentences. It's easy to see why the sentiment means so much to Assaiante. His players are obsessive in their training and their playing. Their coach is obsessive in his preparation and his coaching. (The subtitle of one of the better on-line articles about Trinity's streak begins "Led by an obsessive coach...".)

My favorite story of his coaching obsessiveness was how he strives to make each practice different -- different lengths, different drills, different times of day, different level of intensity, and so on. He talks of spending hours to get each practice ready for the team, ready to achieve a specific goal in the course of a season aimed at the national championship.

Indeed, Assaiante is seemingly obsessive in all parts of his life; the book relates how he conquered several personal and coaching challenges through prolonged, intense efforts to learn and master new domains. One of the sad side stories of Run to the Roar explores whether Assaiante's obsessiveness with coaching squash contributed to the considerable personal problems plaguing his oldest child.

Most really good programmers are obsessive, too -- the positive, compulsive, almost uncontrollable drive that sticks with a thorny problem until it is solved, that tracks a pernicious bug until it is squashed. Programming rewards that sort of single-mindedness, elevating it to desirable status.

I see that drive in students. Some have survived the adults and schools that seem almost aimed at killing children's curiosity and obsessiveness. My very best students have maintained their curiosity and obsessiveness and channeled them positively into creative careers and vocations.

The best teachers are obsessive, too. The colleagues I admire most for their ability to lead young minds are some of the most obsessive people I know. They, too, seem to have channeled their obsessiveness well, enabling them to lead well-adjusted lives with happy, well-adjusted spouses and children -- even as they spend hours poring over new APIs, designing and solving new assignments for their students, and studying student code to find the key thing missing from their lectures, and then making those lectures better.

(As an aside, the Millhauser quote comes from his novel, "Edwin Mullhouse: The Life and Death of an American Writer 1943-1954 by Jeffrey Cartwright", a book purportedly written by a seven-year-old. I read a couple of reviews such as this one, and I'm not sure whether I'll give it a read or not. I am certainly intrigued.)

The second passage I saved from Assaiante's book comes from Jack Barnaby, Harvard's legendary squash and tennis coach:

The greatest limitation found in teachers is a tendency for them to teach the game the way they play it. This should be avoided. A new player may be quite differently gifted, and the teacher's personal game may be in many ways inappropriate to the pupil's talents. A good teacher assesses the mental and physical gifts of his pupil and tries to adapt to them. There is no one best way to play the game.

(I think this comes from Barnaby's out-of-print Winning Squash Racquets, but I haven't confirmed it.)

One of the hardest lessons for me to learn as a young teacher was not to expect every student to think, design, or code like me. For years I struggled and probably slowed a lot of my students' learning, as they either failed to adapt to my approach or fought me. Ironically, the ones most likely to succeed in spite of me were the obsessive ones, who managed to figure things out on their own by sheer effort!

Eventually I realized that being more flexible wasn't dumbing down my course but recognizing what Barnaby knew: students may have their own abilities and skills that are quite different from mine. My job is to help them maximize their abilities as best I can, not make them imitate me. Sometimes that means helping them to change, perhaps helping them recognize the need to change, but never simply to force them into the cookie cutter of what works well for me.

Sometimes I envy coaches, who usually work with a small cadre of student-athletes for an entire year, with most or all of them studying under the coach for four years. This gives the coach time to "assess the mental and physical gifts of his pupils and try to adapt to them". I teach courses that range from 10 to 40 students in size, two a year, and my colleagues teach six sections a year. We are lucky to see some students show up multiple times over the course of their time at the university, but it is with only a select few that I have the time and energy to work with individually at that level. I do try to assess the collective gifts, interests, and abilities of each class and adapt how and what I teach them as best as I am able.

In the end, I enjoyed all the threads running through Run to the Roar. I'm still intrigued by the central lesson of learning to "run to the roar", to confront our fears and see how feeble what we fear often turns out to be. I think that a lot of college students are driven by fear more than we realize -- by fear of failing in a tough major, fear of disappointing their parents, fear of not understanding, or appearing unintelligent, or not finding a career that will fulfill and sustain them. I have encountered a few students over the years in whom I came to see the fear holding them back, and on at least some occasions was able to help them face those fears more courageously, or at least I hope so.

Having read this book, I hope this fall to be more sensitive to this potential obstacle to learning and enjoyment in my class, and to be more adaptable in trying to get over, through, or around it.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

July 18, 2011 7:48 AM

Leaders, Teachers, and Imprinting

Last week, I wrote an entry on managers and communication motivated by some of the comments John Lilly, formerly of Mozilla, made in an interview with Fast Company. In the same interview, Lilly mentioned teaching in a couple of ways that made me think about that part of my job.

... what's the difference between leadership and management?

For me, leadership is imagining the world that you want and figuring out how to go make it that way and how to get other people to help you. That happens sort of all up and down the spectrum of people. Teachers do that every day.

There is certainly an element of leadership in how our best teachers reach students. The K-12 education world talks a lot about this sort of thing, but that discussion often seems to lack much connection to the content of the discipline being taught. University educators work at the intersection of instruction and leadership in a way that other teachers don't. To me, this makes college teaching more interesting and sometimes more challenging. As with so many other things, simply recognizing this is a great first step toward doing a better job. In what ways am I as an instructor of, say, programming languages a leader to my students?

One part of the answer surfaces later in the article (emphasis added):

I've been interviewing a lot of people for jobs ... lately. I've been struck by how strongly their first job imprints them. People who went to Google out of school have a certain way of talking and thinking about the world, people who went to Amazon have a different way of thinking about it, Facebook a different way. ...

... You just start to see patterns. You say, "Oh, that's an Amazon construct," or "that's totally a Googley way to look at the world." ...

From an organizational leadership point of view, you should think hard about what your organization is imprinting on people. Your company, hopefully, will be huge. But what you imprint on people and the diaspora that comes out of your company later may or may not be an important and lasting legacy.

Most companies probably think of what they do as being about themselves: they are creating an organization. The tech start-up world has reminded us just how much cross-pollination there can be in an industry. People start their careers at a new company, learn and grow with the company, and then move on to join other companies or to start new ones. The most successful companies create something of a diaspora.

Universities are all about diaspora. Our whole purpose is to prepare students to move on to careers elsewhere. Our whole purpose is to imprint a way of thinking on students.

At one level, most academics don't really think in this way. I teach computer science. I'm not "imprinting" them; I am helping them learn a set of ideas, skills, and practices. It's all about the content, right?

Of course, it's not quite so simple. We want our graduates to know some things and be able to do some things. The world of CS is large, and in a four-year undergrad program we can expose our students to only a subset of that world. That choice is a part of the imprint we make. And the most important thing we can leave with our students is an approach to thinking and learning that allows them to grow over the course of a long career in a discipline that never sits still. That is a big part of the imprint we make on our students, too.

As members of a computer science faculty, we could think about imprinting as an organization in much the way Lilly discusses. Can someone say, "That's a totally UNI Computer Science way of thinking"? If so, what is that way of thinking? What would we like for it to mean? Are our alumni and their employers served well by how we imprint our students?

As a department head, I have a chance to talk to alumni and employers frequently. I usually hear good things, but that's not so surprising. Our department might be able to improve the job we do by thinking explicitly up front about the imprint we hope to have on students, a form of starting at the end and working backwards.

As a teacher, I often think about how students approach problems after they have studied with me. Thinking about the idea of my "imprint" on them is curious. Can someone say, "That's a totally Professor Wallingford way of thinking"? If so, would that be a good thing? How so? If so, what does "totally Wallingford way of thinking" mean to my students now? What would I like for it to mean?

This line of thinking could be useful to me as I begin to prepare my fall course on programming languages. Without thinking very long, I know that I want to imprint on my students a love for all kinds of languages and an attitude of openness and curiosity toward learning and creating languages. What more?


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Teaching and Learning

July 15, 2011 4:25 PM

The Death of Bundling in University Education?

Clay Shirky's latest piece talks a bit about the death of bundling in journalism, in particular in newspapers. Bundling is the phenomenon of putting different kinds of content into a single product and selling consumers the whole. Local newspapers contain several kinds of content: local news coverage, national news, sports, entertainment, classified ads, obituaries, help columns, comics, .... Most subscribers don't consume all this content and won't pay for it all. In the twentieth century, it worked to bundle it all together, get advertisers to buy space in the package, and get consumers to buy the whole package. The internet and web have changed the game.

As usual, Shirky talking about the state and future of newspapers sets me to thinking about the state and future of universities [ 1, 2, 3, 4 ]. Let me say upfront that I do not subscribe to the anti-university education meme traversing the internet these days, which seems especially popular among software people. Many of its proponents speak too glibly about a world without the academy and traditional university education. Journalism is changing, not disappearing, and I think the same will be true of universities. The questions are, How will universities change? How should they change? Will universities be pulled downstream against their will, or will they actively redefine their mission and methods?

I wonder about the potential death of bundling in university education. Making an analogy to Shirky's argument helps us to see some of the dissatisfaction with universities these days. About newspapers, he says:

Writing about the Dallas Cowboys in order to take money from Ford and give it to the guy on the City Desk never made much sense.

It's not hard to construct a parallel assertion about universities:

Teaching accounting courses in order to take money from state legislatures and businesses and give it to the humanities department never made much sense.

Majors that prepare students for specific jobs and careers are like the sports section. They put students in the seats. States and businesses want strong economies, so they are willing to subsidize students' educations, in a variety of ways. Universities use part of the money to support higher-minded educational goals, such as the liberal arts. Everyone is happy.

Well, they were in the 20th century.

The internet and web have drastically cut the cost of sharing information and knowledge. As a result, they have cut the cost of "acquiring" information and knowledge. When the world views the value of the bundle as largely about the acquisition of particular ingredients (sports scores or obituaries; knowledge and job skills), the business model of bundling is undercut, and the people footing most of the bill (advertisers; states and businesses) lose interest.

In both cases, the public good being offered by the bundle is the one most in jeopardy by unbundling. Cheap and easy access to targeted news content means that there is no one on the production side of the equation to subsidize "hard" news coverage for the general public. Cheap and easy access to educational material on-line erodes the university's leverage for subsidizing its public good, the broad education of a well-informed citizenry.

Universities are different from newspapers in one respect that matters to this analogy. Newspapers are largely paid for by advertisers, who have only one motivation for buying ads. Over the past century, public universities have largely been paid for by state governments and thus the general public itself. This funder of first resort has an interest in both the practical goods of the university -- graduates prepared to contribute to the economic well-being of the state -- and the public goods of the university -- graduates prepared to participate effectively in a democracy. Even so, over the last 10-20 years we have seen a steep decline in the amount of support provided by state governments to so-called "state universities", and elected representatives seem to lack the interest or political will to reverse the trend.

Shirky goes on to explain why "[n]ews has to be subsidized, and it has to be cheap, and it has to be free". Public universities have historically had these attributes. Well, few states offer free university education to their citizens, but historically the cost has been low enough that cost was not an impediment to most citizens.

As we enter a world in which information and even instruction are relatively easy to come by on-line, universities must confront the same issues faced by the media: the difference between what people want and what people are willing to pay for; the difference between what the state wants and what the state is willing to pay for. Many still believe in the overarching value of a liberal arts component to university education (I do), but who will pay for it, require it, or even encourage it?

Students at my university have questioned the need to take general education courses since before I arrived here. I've always viewed helping them to understand why as part of the education I help to deliver. The state was paying for most of their education because it had an interest in both their economic development and their civic development. As the adage floating around the Twitter world this week says, "If you aren't paying for the product, you are the product." Students weren't our customers; they were our product.

I still mostly believe that. But now that students and parents are paying the majority of the cost of the education, a percentage that rises every year, it's harder for me to convince them of that. Heck, it's harder for me to convince myself of that.

Shirky says other things about newspapers that are plausible when uttered about our universities as well, such as:

News has to be subsidized because society's truth-tellers can't be supported by what their work would fetch on the open market.

and:

News has to be cheap because cheap is where the opportunity is right now.

and:

And news has to be free, because it has to spread.

Perhaps my favorite analog is this sentence, which harkens back to the idea of sports sections attracting automobile dealers to advertise and thus subsidize the local government beat (emphasis added):

Online, though, the economic and technological rationale for bundling weakens -- no monopoly over local advertising, no daily allotment of space to fill, no one-size-fits-all delivery system. Newspapers, as a sheaf of unrelated content glued together with ads, aren't just being threatened with unprofitability, but incoherence.

It is so very easy to convert that statement into one about our public universities. We are certainly being threatened with unprofitability. Are we also being threatened with incoherence?

Like newspapers, the university is rapidly finding itself in need of a new model. Most places are experimenting, but universities are remarkably conservative institutions when it comes to changing themselves. I look at my own institution, whose budget situation calls for major changes. Yet it has been slow, at times unwilling, to change, for a variety of reasons. Universities that depend more heavily on state funding, such as mine, need to adapt even more quickly to the change in funding model. It is perhaps ironic that, unlike our research-focused sister schools, we take the vast majority of our students from in-state, and our graduates are even more likely to remain in the state, to be its citizens and the engines of its economic progress.

Shirky says that we need the new news environment to be chaotic. Is that true of our universities as well?


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

July 07, 2011 12:58 PM

Mere Skills

Last week I ran into this quote attributed to Einstein:

The formulation of a problem is often more essential than its solution, which may be merely a matter of mathematical or experimental skill.

I don't like "merely" here, because it diminishes the value of "mere" mathematical or experimental skills. I understand why problem formulation is so important, usually more important than problem solution. For one thing, it is hard to solve a problem you have not formulated yet. For another, problem formulation is often more difficult than solving the problem. As a result, as skills go, problem formulation is the scarcer resource.

But problem-solving skill matters. Most of us need to be able to execute, too. And, at some point, you gotta know stuff -- and be able to do the little things.

Maybe if you are Einstein, you can get by without the skills you need merely to solve the problem. If you can discover ideas like relativity, you can probably find grad students to turn the crank. But the rest of us usually need those skills.

(I'm even skeptical about Einstein. I've heard stories, perhaps apocryphal, perhaps exaggerated, about Einstein's lack of mathematical skills. But even if there is a grain of truth in the stories, I suspect that it is all relative. There is a big difference between the math skills one needs to work out the theory of relativity and the math skills one needs to do the kind of work most scientists do day-to-day.)

One important reminder we get from the quote is that there are two skills, problem formulation and problem solution. They are different. We should learn how to do both. They require different kinds of preparation. One can learn many problem-solving skills through practice, practice, practice: repetition trains our minds. Problem formulation skills generally require more reflection and deliberate thought. Lots of experience helps, of course, but it's harder to get enough practice to learn how to tame problems through repetition alone.

For most of us and most domains, mastering problem-solving skills is a useful, if not necessary, precursor to developing problem formulation skills. While developing our problem-solving skills, we get a lot of repetition with the syntax and semantics of the domain. This volume of experience prepares our brain to work in the domain. It also lets our brains -- engines capable of remarkable feats of association -- begin to make connections, despite our own inattention to the bigger picture. Our brains are doing a lot of work while we are "just" solving problems.

Then we need to take that raw material and work on learning how to formulate problems, deliberately. In that, I agree with Einstein.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

July 03, 2011 1:16 PM

A Few Percent

Novak Djokovic

I am a huge tennis fan. This morning, I watched the men's final at Wimbledon and, as much as I admire Roger Federer and Rafael Nadal for their games and attitudes, I really enjoyed seeing Novak Djokovic break through for his first title at the All-England Club. Djokovic has been the #3 ranked player in the world for the last four years, but in 2011 he has dominated, winning 48 of 49 matches and two Grand Slam titles.

After the match, commentator and former Wimbledon champion John McEnroe asked Djokovic what he had changed about his game to become number one. What was different between this year and last? Djokovic shrugged his shoulders, almost imperceptibly, and gave an important answer:

A few percent improvement in several areas of my game.

The difference for him was not an addition to his repertoire, a brand new skill he could brandish against Nadal or Federer. It was a few percentage points' improvement in his serve, in his return, in his volley, and in his ability to concentrate. Keep in mind that he was already the best returner of service in the world and strong enough in the other elements of his game to compete with and occasionally defeat two of the greatest players in history.

That was not enough. So he went home and got a little better in several parts of his game.

Indeed, the thing that stood out to me from his win this morning against Rafa was the steadiness of his baseline play. His ground strokes were flat and powerful, as they long have been, but this time he simply hit more balls back. He made fewer errors in the most basic part of the game, striking the ball, which put Nadal under constant pressure to do the same. Instead of making mistakes, Djokovic gave his opponent more opportunities to make mistakes. This must have seemed especially strange to Nadal, because this is one of the ways in which he has dominated the tennis world for the last few years.

I think Djokovic's answer is so important because it reminds us that learning and improving our skills are often about little things. We usually recognize that getting better requires working hard, but I think we sometimes romanticize getting better as being about qualitative changes in our skill set. "Learn a new language, or a new paradigm, and change how you see the world." But as we get better this becomes harder and harder to do. Is there any one new skill that will push Federer, Nadal, or Djokovic past his challengers? They have been playing and learning and excelling for two decades each; there aren't many surprises left. At such a high level of performance, it really does come down to a few percent improvement in several areas of the game. That is what makes the difference.

Even for us mortals, whether playing tennis or writing computer programs, the real challenge -- and the hardest work -- often lies in making incremental improvements to our skills. In practicing the cross-court volley or the Extract Class refactoring thousands and thousands of times. In learning to concentrate a little more consistently when we tire by trying to concentrate a little more consistently over and over.

As Nadal said in his own post-game interview, the game is pretty simple. The challenge is to work hard and learn how to play it better.

Congratulations to Novak Djokovic for his hard work at getting a few percent better in several areas of his game. He has earned the accolade of being, for now, the best tennis player in the world.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

July 01, 2011 2:29 PM

What Would Have To Be True...

Yesterday, I read Esther Derby's recent post, Promoting Double Loop Learning in Retrospectives, which discusses ways to improve the value of our project retrospectives. Many people who don't do project retrospectives will still find Derby's article useful, because it's really about examining how we think and expanding possibilities.

One of the questions she uses to jump start deeper reflection is:

What would have to be true for [a particular practice] to work?

This is indeed a good question to ask when we are trying to make qualitative changes in our workplaces and organizations, for the reasons Derby explains. But it is also useful more generally as a communication tool.

I have a bad personal habit. When someone says something that doesn't immediately make sense to me, my first thought is sometimes, "That doesn't make sense." (Notice the two words I dropped...) Even worse, I sometimes say it out loud. That doesn't usually go over very well with the person I'm talking to.

Sometime back in the '90s, I read in a book about personal communication about a technique for overcoming this disrespectful tendency, which reflects a default attitude. The technique is to train yourself to think a different first thought:

What would have to be true in order for that statement to be true?

Rather than assume that what the person says is false, assume that it is true and figure out how it could be true. This accords my partner the respect he or she deserves and causes me to think about the world outside my own point of view. What I found in practice, whether with my wife or with a professional colleague, was that what they had said was true -- from their perspective. Sometimes we were starting from different sets of assumptions. Sometimes we perceived the world differently. Sometimes I was wrong! By pausing before reacting and going on the defensive (or, worse, the offensive), I found that I was saving myself from looking silly, rash, or mean.

And yes, sometimes, my partner was wrong. But now my focus was not on proving him wrong but on addressing the underlying cause of his misconception. That led to a very different sort of conversation.

So, this technique is not an exercise in fantasy. It is an exercise in more accurate perception. Sometimes, what would have to be true in the world actually is true. I just hadn't noticed. In other cases, what would have to be true in the world is how the other person perceives the world. This is an immensely useful thing to know, and it helps me to respond both more respectfully and more effectively. Rather than try to prove the statement false in some clinical way, I am better served by taking one of two paths:

  • helping the other person perceive the world more clearly, when his or her perception clashes with reality, or
  • recognizing that the world is more complicated than I first thought and that, at least for now, I am better served by acting from a state of contingency, in a world of different possible truths.

I am still not very good at this, and occasionally I slip back into old habits. But the technique has helped me to be a better husband as well as a better colleague, department head, and teacher.

Speaking as a teacher: It is simply amazing how different interactions with students can be when, after students say something that seems to indicate they just don't get it, I ask myself, "What would have to be true in order for that statement to be true?" I have learned a lot about student misconceptions and about the inaccuracy of the signals I send students in my lectures and conversations just by stepping back and thinking, "What would have to be true..."

Sometimes, our imaginations are too small for our own good, and we need a little boost to see the world as it really is. This technique gives us one.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Teaching and Learning

June 28, 2011 4:11 PM

Starting at the End and Working Backwards

A couple of weeks ago, many people were discussing this response by Jeff Bezos to a question at Amazon's shareholder meeting, in a nutshell, "I don't see the company failing much these days. Is it taking enough risks?" Understandably, most of the discussion was about Bezos's description of how Amazon's corporate culture supports long-term vision and incremental innovation. Again, in a nutshell, "We are stubborn on vision. We are flexible on details."

But the passage that jumped out to the faculty member in me was this one:

We start with the customer and work backwards. And, very importantly, we are willing to be misunderstood for long periods of time.

First let me say that I do not believe that students are the "customers" of a university or academic department. They are a strange mix of many things, including product, collaborator, and customer. Still, the idea of starting with the student and working backwards strikes me as an intriguing but uncommon way for faculty to think about their curricula and courses. We talk a lot these days about student outcomes assessment, which can be a useful tool for accountability and continuous feedback in curriculum design but which is usually treated as a chore added on after the fact to courses we think are best.

Even when faculties do start with the student, they tend to start at the beginning -- "the basics" -- and design their first-year courses around what they think their students need to know for the rest of the program. The real starting point is the body of knowledge that we think constitutes the discipline. We design courses around the topics of the discipline and, to the extent we think of students, we think of how to teach the basic ideas and skills they need to master those topical areas.

The above is a generalization, both of how the faculties I have been a part of seem to work and of how the faculties my colleagues describe to me seem to work. But I do not think that it is so inaccurate as to be not useful.

So that is the context in which I thought about Bezos's remark and began to think. What if we start with what we would like for our students to know and be able to do on graduation day, and work backwards? Start curriculum design not with CS 1 but with a capstone project course. What will students be able to do in that course if we have done a good job preparing them? Create one or more courses that prepare them for the project. Recurse.

Yes, I know, education is about more than concrete skills, and it is more complicated than stacking one block on top of another. I am just trying to think outside of the self-imposed constraints that usually hem us in within academia and see where we might go.

I have written about something similar before, Dave West's and Pam Rostal's vision of competency-based curriculum design as presented at the OOPSLA 2005 Educators' Symposium and elaborated in a ChiliPLoP 2008 hot topic. But I don't know of any schools that have truly started at the endpoint and worked backward. If you do, please let me know.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 27, 2011 5:03 PM

"You can code. That is pretty damn cool."

I've been off-line a lot lately, doing physical therapy for my knee and traveling a bit. That means I have a lot of fun reading to catch up on! One page that made the rounds recently is Advice From An Old Programmer, from Zed Shaw's intro book, Learn Python The Hard Way. Shaw has always been a thoughtful developer and an entertaining writer with a unique take on programming. Now he has put his money where his mouth is with a book that aims to teach programming in a style he thinks most effective for learners.

I look forward to digging into the book soon, but for now his advice page has piqued a lot of interest. For example:

Programming as a profession is only moderately interesting. It can be a good job, but you could make about the same money and be happier running a fast food joint. You're much better off using code as your secret weapon in another profession.

As a matter of personal opinion, I disagree with the first sentence, and could never make the switch discussed in the second. But I do think that the idea of programming as a secret weapon in other professions has a lot to offer people who would never want to be computer scientists or full-time software developers. It's a powerful tool that frees you from wishing you had a programmer around. It changes how you can think about problems in your discipline and lets you ask new questions.

Finally, Shaw tells his readers not to worry when non-programmers treat them badly because they are now nerds who can program. He gives good reasons why you shouldn't care about such taunts, and then sums it up in a Zed Shaw-like killer closing line:

You can code. They cannot. That is pretty damn cool.

Amen.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

June 17, 2011 12:28 PM

Failure and the Liberal Arts

Many people are talking about Conan O'Brien's recent commencement address at Dartmouth, in which he delivered vintage Conan stand-up for fifteen minutes and a thoughtful, encouraging, and wise message about failure. We talk about the virtues of failure in many contexts, including start-ups, agile software development, and learning. O'Brien reminds us that failure hurts. It makes us question our dreams and ourselves. But out of the loss can come conviction, creation, and re-creation. Indeed, failing to achieve the ideals we set for ourselves ends up making us who we are. Your dream will change. That's okay.

If you haven't seen this speech, check it out. It really is quite good, both entertaining and educational. If you are not particularly a fan of O'Brien's stand-up, you can skip to 15:40 or even 16:15 to get to the important message at its heart.

I've been thinking about failure and liberal arts colleges in New England in recent days, as my daughter prepares to head off for the latter with a little fear of the former. So this talk meant a lot to me. She isn't sure yet what she wants to major in or do for a living. This has been tough, because she has felt subtle pressure from a lot of people that she should have a big dream, or at least have a specific goal to work toward. But she likes so many things and isn't ready to specialize yet.

So she went looking for a liberal arts college. Then she hears a lot about unemployed English grads, students who lack practical job skills, and 20-somethings with crushing loan debts and no prospect of paying them off. That's where the fear comes in...

But I think people are making a fallacious connection between undergraduate education and professional prospects. First of all, a student can go to school with a particular job path in mind, amass huge debt, and enter a profession that doesn't pay well enough to pay it off. I saw news articles in the last year that talked about problems some grads have faced with degrees in social work and counseling psychology. There is nothing wrong with these degrees per se, but the combination of low median pay and debt amassed even at public schools can be deadly.

Second, and perhaps more important, many people seem to misunderstand the nature of a liberal education. They think it means studying only "soft" academic disciplines in the humanities, such as literature, history, and philosophy. Maybe that is what most people mean by the term, but I think about it more broadly as the freedom to read and study widely. Liberal arts majors are not limited to studying only in the humanities. They can study literature and also economics, chemistry, and international relations. They can study languages and also political science and a little math; history and also graphic design. They could even learn a little computer programming.

The sciences are part of a liberal education. I think CS can be, too. And the small size of many liberal arts majors gives students the freedom to sample broadly across the spectrum of human knowledge and skills.

The danger of a liberal arts education is that some students and professors take it as license to study only in the humanities. But the ultimate value of a liberal arts education lies not in that narrow circle, as valuable and rewarding as it can be in its own right. The value lies in intersections: the ability to understand them, the ability to recognize them, and the ability to work in them. It is most desirable to learn something about a lot of different things, even real problems and real solutions in the modern world. Put together with a few key skills, the combination is powerful.

Just as it's important not to be too narrowly trained, it's important not to be too narrowly "liberally educated".

So I've encouraged my daughter not to worry about her lack of narrow focus just yet. She has a lot to learn yet, most importantly about the challenging problems that will vex humanity in the coming century. Many of them lie at the intersection of several disciplines, and solving them will be the responsibility of well-prepared minds.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

May 24, 2011 12:38 PM

Difficult Students, Difficult Professors, Difficult Relationships

We professors usually write glowingly of our students. Writing bad things about students in public seems like a bad idea. Besides, we mean the good things we say. By and large students are great people to be around. We get to learn with them and watch them grow. Yet...

Yesterday, I tweeted out of an emotion I occasionally feel when I read my student evaluations after a semester ends: even if n-1 students say positive things and offer constructive suggestions for improvement, my mind focuses on the one student who was unhappy and complained unhelpfully. It's just the ego at work; objectively, every instructor realizes that whenever a bunch of students gather in one place, it is likely that a few will be unhappy. It's unrealistic -- foolish, really -- to think that everyone should like you.

Fortunately, after a few minutes (or hours, or days, if you haven't yet trained your mind in this discipline), the feeling passes and you move forward, learning from the assessment and improving the course.

Occasionally, the negative comments are not a random event. In this case, I'm pretty sure I know who was unhappy. This student had felt similarly in previous semesters. He or she is just not a fan of mine.

If we are all honest with ourselves and each other, we have to admit that the same is true for us professors. Occasionally, we encounter a student who rubs us the wrong way. It is rare, perhaps surprisingly so, but every few years I encounter a student of whom I am not a big fan. Sometimes the feeling is mutual, but not always. Occasionally, I have students who don't like me much but whom I like well enough, or students who rub me the wrong way but seem to like me fine. The good news is that, even in these circumstances, students and professors alike do a pretty good job of working together professionally. For me, it's a point of professional pride not to let how I feel about any student, positive or negative, affect my courses.

I almost titled this post "Difficult Students", but that struck me as wrong. From the student's perspective, this is about difficult instructors. And it's not really about students and instructors at all, at least most of the time. Other students enjoy my courses even when one does not; other faculty like and enjoy the students who aren't my favorites. It's about relationships, one-on-one.

And, as I wrote in the George Costanza post linked above, this is to be expected. We are all human.

~~~~

In response to my tweet, David Humphrey shared this comic to help ease the psychological trauma of even one negative student:

Haters gonna hate

(If you prefer an analgesic with a harder edge, I offer you Gaping Void's take on the matter.)


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

May 20, 2011 11:55 AM

Learning From Others

I've been reading through some of the back entries in Vivek Haldar's blog and came across the entry Coding Blind. Haldar notes that most professionals and craftsmen learn their trade at least in part by watching others work, but that's not how programmers learn. He says that if carpenters learned the way programmers do, they'd learn the theory of how to hammer nails in a classroom and then do it for the rest of their careers, with every other carpenter working in a different room.

Programmers these days have a web full of open-source code to study, but that's not the same. Reading a novel doesn't give you any feel at all for what writing a novel is like, and the same is true for programming. Most CS instructors realize this early in their careers: showing students a good program shows them what a finished program looks like, but it doesn't give them any feel at all for what writing a program is like. In particular, most students are not ready for the false starts and the rewriting that even simple problems will cause them.

Many programming instructors try to bridge this gap by writing code live in class, perhaps with student participation, so that students can experience some of the trials of programming in a less intimidating setting. This is, of course, not a perfect model; instructors tend not to make the same kind of errors as beginners, or as many, but it does have some value.

Haldar points out one way that other kinds of writers learn from their compatriots:

Great artists and writers often leave behind a large amount of work exhaust other than their finished masterpieces: notebooks, sketches, letters and journals. These auxiliary work products are as important as the finished item in understanding them and their work.

He then says, "But in programming, all that is shunned." This made me chuckle, because I recently wrote a bit about my experience having students maintain engineering notebooks for our Intelligent Systems course. I do this so that they have a record of their thoughts, a place to dump ideas and think out loud. It's an exercise in "writing to learn", but Haldar's essay makes me think of another potential use of the notebooks: for other students to read and learn from. Given how reluctant my students were to write at all, I suspect that they would be even more reluctant to share their imperfect thoughts with others in the course. Still, perhaps I can find a way to marry these ideas.

cover of rpg's Writers' Workshops

This makes me think of another way that writers learn from each other, writers' workshops. Code reviews are a standard practice in software, and PLoP, the Pattern Languages of Programs conference, has adapted the writers' workshop form for technical writers. One of the reasons I like to teach certain project courses in a studio format is that it gives all the teams an opportunity to see each other's work and to talk about design, coding, and anything else that challenges or excites them. Some semesters, it works better than others.

Of course, a software team itself has the ability to help its members learn from one another. One thing I noticed more this semester than in the past was students commenting that they had learned from their teammates by watching them work. Some of the students who said this viewed themselves as the weakest links on their teams and so saw this as a chance to approach their more accomplished teammates' level. Others thought of themselves as equals to their teammates yet still found themselves learning from how others tackled problems or approached learning a new API. This is a team project succeeding as we faculty hope it might.

Distilling experience with techniques in more than just a finished example or two is one of the motivations for the software patterns community. It's one of the reasons I felt so comfortable with both the literary form and the community: its investment in and commitment to learning from others' practice. That doesn't operate at quite the fundamental level of watching another carpenter drive a nail, but it does strike close to the heart of the matter.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

May 12, 2011 10:18 AM

What Students Said

Curly says, 'one thing'

On the last day of my Intelligent Systems course, I asked my students three retrospective questions. Each question asked them to identify one thing...

  • one thing you learned about AI by doing this project
  • one thing you learned about writing software by doing this project
  • one thing that makes your program "intelligent"

Question 3 is a topic for another day, when I will talk a bit about AI. Today I am thinking more about what students learned about writing software. As one of our curriculum's designated "project courses", Intelligent Systems has the goal of giving students an experience building a significant piece of software, as part of a team. What do the students themselves think they learned?

A couple of answers to the first question were of more general software development interest:

I learned that some concepts are easy to understand conceptually but difficult to implement or actually use.

I learned to be open-minded about several approaches to solving a problem. ... be prepared to accept that an approach might take a lot of time to understand and end up being [unsuitable].

There is nothing like trying to solve a real problem to teach you how hard some solutions are to implement. Neural networks were the most frequently mentioned concept that is easy to understand but hard to make work in practice. Many students come out of their AI course thinking neural nets are magic; it turns out magic can be hard to serve up. I suspect this is true of many algorithms and techniques students learn over the course of their studies.

I don't recall talking about agile software development much during this course, though no doubt it leaks out in how I typically talk about writing software. Still, I was surprised at the theme running through student responses to the second question.

For example:

Design takes time. Multiple iterations, revise and test.

A couple of teams discovered spike solutions, sorta:

You may write a lot of worthless or bad code to help with the final solution. We produced a lot of bad code that was never used in the end product, but it helped us get to that point.

These weren't true spikes, because the teams didn't set out with the intention of using the code to learn. But most didn't realize that they could or should do this. Now that they know, they might behave differently in the future. Most important, they learned that it's okay to "code to learn".

Many students came to appreciate collective code ownership and tools that support it:

When writing software in a group, it is important to make your code readable: descriptive [names] and comments that describe what is going on.

I learned how to maintain a project with a repository so that each team member can keep his copy up-to-date. ... I also learned how to use testing suites.

Tests also showed up in one of my favorite student comments, about refactoring:

I learned that when refactoring even small code you need unit tests to make sure you are doing things correctly. Brute forcing only gets you into trouble and hours of debugging bad code.

Large, semester-long projects usually give students their first opportunity to experience refactoring. Living inside a code base for a while, especially code they themselves have written, teaches them a lot about what software development is really like. Many are willing to accept that living with someone else's code can be difficult but believe that their own code will be fine. Turns out it's not. Most students then come to appreciate the value of refactoring techniques. I need to help them learn refactoring tools better.
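That student's lesson is easy to make concrete with a small, hypothetical example of my own (the function and tests below are not from any student's project). The tests pin down the behavior a refactoring must preserve, so a mistake shows up in seconds rather than in hours of debugging:

```python
import unittest

# Hypothetical example: letter_grade() after a small refactoring.
# The tests encode the behavior we must preserve; if the rewrite
# breaks anything, we find out immediately.

def letter_grade(score):
    """Refactored version: a data-driven table replaces a chain of if/elif."""
    cutoffs = [(90, 'A'), (80, 'B'), (70, 'C'), (60, 'D')]
    for cutoff, letter in cutoffs:
        if score >= cutoff:
            return letter
    return 'F'

class LetterGradeTest(unittest.TestCase):
    def test_boundaries(self):
        # Boundary cases are exactly where a "brute force" rewrite goes wrong.
        self.assertEqual(letter_grade(90), 'A')
        self.assertEqual(letter_grade(89), 'B')
        self.assertEqual(letter_grade(60), 'D')
        self.assertEqual(letter_grade(59), 'F')

if __name__ == '__main__':
    unittest.main(exit=False)
```

Run the tests before the change and after every small step of the rewrite; a red bar points straight at the step that broke things.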

Finally, this comment from the first student retrospective I read captures a theme I saw throughout:

It is best to start off simple and make something work, rather than trying to solve the entire problem at once and get lost in its complexity.

This is in many ways the heart of agile software development and the source for all the other practices we find so useful. Whatever practices my own students adopt in the coming years, I hope they are guided by this idea.

~~~~

Some of you will recognize the character in the image above as Curly, the philosopher-cowboy from City Slickers. One of the great passages of that 1991 film has Curly teaching protagonist Mitch about the secret of life, "One thing. Just one thing."

I am not the first software person to use Curly as inspiration. Check out, for example, Curly's Law: Do One Thing. Atwood shows how "do one thing" is central to "several core principles of modern software development".


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 10, 2011 4:32 PM

Course Post-Mortem and Project Notebooks

I'm pretty much done with my grading for the semester. All that's left is freezing the grades and submitting them.

Intelligent Systems is a project course, and I have students evaluate their and their teammates' contributions to the project. One part of the evaluation is to allocate the points their team earns on the project to the team members according to the quality and quantity of their respective contributions. As I mentioned to the class earlier in the semester, point allocations from semester to semester tend to exhibit certain features. With few exceptions:

  • Students are remarkably generous to one another, as long as the teammate makes a reasonable effort under the circumstances.
  • If anything, students tend to undervalue their own contribution.
  • The allocations are remarkably consistent across teammates on the same team.
  • The allocations are remarkably consistent with what I would assign, based on my interactions with the team over the course of the project.

All that adds up to me being rather satisfied with the grades that fall out of the grinder at the end of the semester.

One thing that has not changed since I last taught this course ten years ago or so is that most students don't like the idea of an engineer's notebook. I ask each student to maintain a record of their notes while working on the project along with a weekly log intended to be a periodic retrospective of their work and progress, their team's work and progress, and the problems they encounter and solve along the way. Students have never liked keeping notebooks. Writing doesn't seem to be a habit we develop in our majors, and by the time they reach their ultimate or penultimate semester, the habit of not writing is deeply ingrained.

One thing that may have changed in the last decade: students seem more surly at being asked to keep a notebook. In the past, students either did write or didn't write. This year, for the most part, students either didn't write or didn't write much except to say how much they didn't like being asked to write. I have to admire their honesty at the risk of being graded more harshly for having spoken up. (Actually, I am proud they trust me enough to still grade them fairly!) I can't draw a sound conclusion from one semester's worth of data, but I will watch for a trend in future semesters.

One thing that did change this semester: I allowed students to blog instead of maintaining a paper notebook. I was surprised that only two students took me up on the offer. Both ended up with records well above the average for the class. One of the students treated his blog a bit more formally than I think of an engineer's notebook, but the other seemed to treat it much as he would have a paper journal. This was a win, one I hope to replicate in the future.

The Greeks long ago recorded that old habits die hard, if at all. In the future, I will have to approach the notebook differently, including more and perhaps more persuasive arguments for it up front and more frequent evaluation and feedback during the term. I might even encourage or require students to blog. This is 2011, after all.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 06, 2011 5:12 PM

Next Semester: A Prize for Failure

This is finals week, so my Intelligent Systems students have presented their projects and submitted their code and documentation. All that's left for some of them is to submit their project notebooks. By and large, all four teams did a good job this semester, and I'm happy both with what they produced and with the way they produced it.

(The notebooks may be an exception, and that means I need to do a better job convincing them to write as they think and learn throughout the semester.)

A couple of teams were disappointed that they did not accomplish as much as they had hoped. I reassured them that when we explore, we often take wrong turns. Otherwise, it wouldn't be exploration! In science, too, we sometimes run experiments that fail. Still, we can learn from the experience.

This experience, coupled with a tweet I saw a week or so ago, has given me my first new idea for next semester:

a prize for the best failed idea of the semester

I teach Programming Languages in the fall, in which students learn Scheme, functional programming, and a fair bit about language interpretation. All of these are fertile areas for failure, by students and professor alike! At this early stage of planning, I think I'll announce the prize early in the semester and allow students to submit entries throughout. A strong candidate for the prize will be an idea that seemed good at the time, so the student tried it out, whether in code or some other form. After investing time and energy, the student has to undo the work, maybe even start from scratch, in order to solve the original problem.

This sounds like failure to most students, but the point of the prize is this: you can learn a lot from an idea that doesn't pan out. If students can look back on their failures and understand why it was valuable trying the ideas anyway, they will have learned something. Whether they win a prize or not, they may well end up with a funny story to tell!

Now, I need a good name for the prize. Suggestions are welcome!

I also need a prize. I've considered the possibility of giving extra credit but just about convinced myself to do something more fun and perhaps more lasting. Besides, course credit is so not the point. Giving extra credit might encourage broader participation among the students, but I believe that the number of students who care more about their grades than about learning is smaller than most people think. And the idea of offering a prize is to encourage a willingness to explore good ideas, even to cultivate a sense of adventure. Awarding class points would be like giving your best friend in the world money as a birthday gift: it misses the point.

My hope in offering such a prize is to help students move a little down the path from thinking like this:

the failure cake

to thinking like this:

[Engineers Without Borders] believes that success in development is not possible without taking risks and innovating -- which inevitably means failing sometimes. We also believe that it's important to publicly celebrate these failures, which allows us to share the lessons more broadly and create a culture that encourages creativity and calculated risk taking.

An annual report of failures! These are engineers who get it.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

April 27, 2011 6:12 PM

Teachers and Mentors

I'm fortunate to have good relationships with a number of former students, many of whom I've mentioned here over the years. Some are now close friends. To others, I am still their teacher, and we interact in much the same way as we did in the classroom. They keep me apprised of their careers and occasionally ask for advice.

I'm honored to think that a few former students might think of me as their mentor.

classic image of Telemachus and Mentor

Steve Blank recently captured the difference between teacher and mentor as well as I've seen. Reflecting back to the beginning of his career, he considered what made his relationships with his mentors different from the relationship he had with his teachers, and different from the relationship his own students have with him now. It came down to this:

I was giving as good as I was getting. While I was learning from them -- and their years of experience and expertise -- what I was giving back to them was equally important. I was bringing fresh insights to their data. It wasn't that I was just more up to date on current technology, markets or trends; it was that I was able to recognize patterns and bring new perspectives to what these very smart people already knew. In hindsight, mentorship is a synergistic relationship.

In many ways, it's easier for a teacher to remain a teacher to his former students than to become a mentor. The teacher still feels a sense of authority and shares his wisdom when asked. The role played by both teacher and student remains essentially the same, and so the relationship doesn't need to change. It also doesn't get to grow.

There is nothing wrong with this sort of relationship, nothing at all. I enjoy being a teacher to some of my once and future students. But there is a depth to a mentorship that makes it special and harder to come by. A mentor gets to learn just as much as he teaches. The connection between mentor and young colleague does not feel as formal as the teacher/learner relationship one has in a classroom. It really is the engagement of two colleagues at different stages in their careers, sharing and learning together.

Blank's advice is sound. If what you need is a coach or a teacher, then try to find one of those. Seek a mentor when you need something more, and when you are ready and willing to contribute to the relationship.

As I said, it's an honor when a former student thinks of me as a mentor, because that means not only do they value my knowledge, expertise, and counsel but also they are willing to share their knowledge, expertise, and experience with me.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

April 26, 2011 4:41 PM

Students Getting There Faster

I saw a graphic over at Gaping Void this morning that incited me to vandalism:

teaching is the art of getting people to where they need to be

A lot of people at our colleges and universities seem to operate under the assumption that our students need us in order to get where they need to be. In A.D. 1000, that may have been true. Since the invention of the printing press, it has become less and less true. With the invention of the digital computer, the world wide web, and more and more ubiquitous network access, it's false, or nearly so. I've written about this topic from another perspective before.

Most students don't need us, not really. In my discipline, a judicious self-study of textbooks and all the wonderful resources available on-line, lots of practice writing code, and participation in on-line communities of developers can give most students a solid education in software development. Perhaps this is less true in other disciplines, but I think most of us greatly exaggerate the value of our classrooms for motivated students. And changes in technology put this sort of self-education within reach of more students in more disciplines every day.

Even so, there has never been much incentive for people not to go to college, and plenty of non-academic reasons to go. The rapidly rising cost of a university education is creating a powerful financial incentive to look for alternatives. As my older daughter prepares to head off to college this fall, I appreciate that incentive even more than I did before.

Yet MacLeod's message resonates with me. We can help most students get where they need to be faster than they would get there without us.

In one sense, this has always been true. Education is more about learning than teaching. In the new world created by computing technologies, it's even more important that we in the universities understand that our mission is to help people get where they need to be faster and not try to sell them a service that we think is indispensable but which students and their parents increasingly see as a luxury. If we do that, we will be better prepared for reality as reality changes, and we will do a better job for our students in the meantime.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

April 14, 2011 10:20 PM

Al Aho, Teaching Compiler Construction, and Computational Thinking

Last year I blogged about Al Aho's talk at SIGCSE 2010. Today he gave a major annual address sponsored by the CS department at Iowa State University, one of our sister schools. When former student and current ISU lecturer Chris Johnson encouraged me to attend, I decided to drive over for the day to hear the lecture and to visit with Chris.

Aho delivered a lecture substantially the same as his SIGCSE talk. One major difference was that he repackaged it in the context of computational thinking. First, he defined computational thinking as the thought processes involved in formulating problems so that their solutions can be expressed as algorithms and computational steps. Then he suggested that designing and implementing a programming language is a good way to learn computational thinking.

With the talk so similar to the one I heard last year, I listened most closely for additions and changes. Here are some of the points that stood out for me this time around, including some repeated points:

  • One of the key elements for students when designing a domain-specific language is to exploit domain regularities in a way that delivers expressiveness and performance.
  • Aho estimates that humans today rely on somewhere between 0.5 and 1.0 trillion lines of software. If we assume that the total cost associated with producing each line is $100, then we are talking about a most serious investment. I'm not sure where he found the $100/LOC number, but...
  • Awk contains a fast, efficient regular expression matcher. He showed a figure from the widely read Regular Expression Matching Can Be Simple And Fast, with a curve showing Awk's performance -- quite close to the Thompson NFA curve from the paper. Algorithms and theory do matter.
  • It is so easy to generate compiler front ends these days using good tools in nearly every implementation language. This frees up time in his course for language design and documentation. This is a choice I struggle with every time I teach compilers. Our students don't have as strong a theory background as Aho's do when they take the course, and I think they benefit from rolling their own lexers and parsers by hand. But I'm tempted by what we could do with the extra time, including processing a more compelling source language and better coverage of optimization and code generation.
  • An automated build system and a complete regression test suite are essential tools for compiler teams. As Aho emphasized in both talks, building a compiler is a serious exercise in software engineering. I still think it's one of the best SE exercises that undergrads can do.
  • The language for quantum computing looks cool, but I still don't understand it.
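The point about Awk's matcher can be illustrated with a toy sketch of the simultaneous-states idea behind the Thompson NFA curve in that paper. This is my own simplified simulation for one pathological pattern, not Thompson's actual construction or Awk's code: a backtracking matcher takes exponential time on this input, while tracking the whole set of reachable states stays polynomial.

```python
# Match the pathological pattern a?^n a^n (n optional a's, then n
# required a's) by simulating all NFA states at once instead of
# backtracking through the optional tokens one choice at a time.

def match_pathological(n, text):
    """Pattern positions 0..n-1 are 'a?', n..2n-1 are 'a'; 2n accepts."""
    def closure(states):
        # Optional tokens (positions 0..n-1) may be skipped for free,
        # so from any position p <= n we can reach every position up to n.
        out = set(states)
        for p in states:
            if p <= n:
                out.update(range(p, n + 1))
        return out

    states = closure({0})
    for ch in text:
        # Consume one character from every live state simultaneously.
        moved = {p + 1 for p in states if p < 2 * n and ch == 'a'}
        states = closure(moved)
    return 2 * n in states

print(match_pathological(20, 'a' * 20))   # True, and instantaneous
print(match_pathological(20, 'a' * 19))   # False
```

On the same inputs, a naive backtracking matcher must explore on the order of 2^n ways to assign the a's to optional versus required positions, which is the explosion the paper's graphs show.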

After the talk, someone asked Aho why he thought functional programming languages were becoming so popular. Aho's answer revealed that he, like any other person, has biases that cloud his views. Rather than answering the question, he talked about why most people don't use functional languages. Some brains are wired to understand FP, but most of us are wired for, and so prefer, imperative languages. I got the impression that he isn't a fan of FP and that he's glad to see it lose out in the social Darwinian competition among languages.

If you'd like to see an answer to the question that was asked, you might start with Guy Steele's StrangeLoop 2010 talk. Soon after that talk, I speculated that documenting functional design patterns would help ease FP into the mainstream.

I'm glad I took most of my day for this visit. The ISU CS department and chair Dr. Carl Chang graciously invited me to attend a dinner this evening in honor of Dr. Aho and the department's external advisory board. This gave me a chance to meet many ISU CS profs and to talk shop with a different group of colleagues. A nice treat.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

April 10, 2011 9:23 PM

John McPhee on Writing, Teaching, and Programming

John McPhee is one of my favorite non-fiction writers. He is a long-form journalist who combines equal measures of detailed fact gathering and a literary style that I enjoy as a reader and aspire to as a writer. For years, I have used selections from The John McPhee Reader in advice to students on how to gather requirements for software, including knowledge acquisition for AI systems.

This weekend I enjoyed Peter Hessler's interview of McPhee in The Paris Review, John McPhee, The Art of Nonfiction No. 3. I have been thinking about several bits of McPhee's wisdom in the context of both writing and programming, which is itself a form of writing. I also connected with a couple of his remarks about teaching young writers -- and programmers.

One theme that runs through the interview serves as a universal truth connecting writing and programming:

Writing teaches writing.

In order to write or to program, one must first learn the basics, low-level skills such as grammar, syntax, and vocabulary. Both writers and programmers typically go on to learn higher-level skills that deal with the structure of larger works and the patterns that help creators create and readers understand. In the programming world, we call these "design" skills, though I imagine that's too much an engineering term to appeal to writers.

Once you have these skills under your belt, there isn't much more to teach, but there is plenty to learn. We help newbies learn by sharing what we create, by reading and critiquing each other's work, and by talking about our craft. But doing it -- writing, whether it's stories, non-fiction, or computer programs -- that's the thing.

McPhee learned this in many ways, not the least of which was one of the responses he received to his first novel, which he wrote in lieu of a dissertation (much to the consternation of many Princeton English professors!). McPhee said:

It had a really good structure and was technically fine. But it had no life in it at all. One person wrote a note on it that said, You demonstrated you know how to saddle a horse. Now go find the horse.

He still had a lot to learn. This is a challenge for many young programmers whom I teach. As they learn the skills they need to become competent programmers, even excellent ones, they begin to realize they also need a purpose. At a miniconference on campus last week, a successful former student encouraged today's students to find and nurture their own passions. In those passions they will also find the energy and desire to write, write, write, which is the only way he knew to master the craft of programming.

Finding passion is hard, especially for students who come through an educational system that sometimes seems more focused on checking off boxes than on growing a person.

Luckily, though, finding problems to work on (or stories to write) can be much less difficult. It requires only that we are observant, that we open our eyes and pay attention. As McPhee says:

There are zillions of ideas out there--they stream by like neutrons.

For McPhee, most of the ideas he was willing to write about, spending as much as three years researching and writing, relate to things he did when he was a kid. That's not too far from the advice we give young software developers: write the programs you need or want to use. It's okay to start with what you like and know even if no one else wants those things. First of all, maybe they do. And second, even if they really don't, those are the problems on which you will be willing to work. Programming teaches programming.

Keep in mind: finding ideas isn't enough. You have to do the work. In the end, that is the measure of a writer as well as the measure of a programmer.

If you have already found your passion, then finding cool things to do gets even easier. Passion and obsession seem to heighten our senses, making it easier to spot potential new ideas and solutions. I just saw a great example of this in the movie The Social Network, when an exhausted Mark Zuckerberg found the insight for adding Relationship Status to Facebook from a friend's plaintive request for help finding out whether a girl in his Art History class was available.

So, you have an idea. How long does it take to write?

... It takes as long as it takes. A great line, and it's so true of writing. It takes as long as it takes.

Despite what we learn in school, this is true of most things. They take however long they take. This was a hard lesson for me to learn. I was a pretty good student in school, and I learned early on how to prosper in the rhythm of the quarter, semester, and school year. Doing research in grad school helped me to see that real problems are much messier, much less predictable than the previous sixteen years of school had led me to believe.

As a CS major, though, I began to learn this lesson in my last year as an undergrad, writing the program at the core of my senior project. It takes as long as it takes, whatever the university's semester calendar says. Get to work.

As a teacher, I found most touching an answer McPhee gave when asked why he still teaches writing courses at Princeton. He is well past the usual retirement age and might be expected to slow down, or at least spend all of his time on his own writing. Every teacher who reads the answer will feel its truth:

But above all, interacting with my students--it's a tonic thing. Now I'm in my seventies and these kids really keep me alive. To talk to a nineteen-year-old who's really a good writer, and he's sitting in here interested in talking to me about the subject--that's a marvelous thing, and that's why I don't want to stop.

As I read this, my mind began to recount so many students who have changed me as a programmer and teacher. The programmer, artist, and musician who wanted to write a program that could express artistic style, who is now a filmmaker inflamed with understanding man and his relationship to the world. The high school kid with big ideas about fonts, AI, and design whose undergrad research qualified for a national ACM competition and who is now a research scientist at Apple. The brash PR student who wanted to become a programmer and did, writing a computer science thesis and an even more important communications studies thesis, who is now set on changing how we study and understand human communication in the age of the web. The precocious CS student whose ideas were bigger than my courses before he set foot in my classroom, who worked hard learning things beyond what we were teaching and eventually doubled back to learn what he had missed, an entrepreneur with a successful tech start-up who is now helping a new generation of students learn and dream.

The list could go on. Teaching keeps us alive. Students learn, we hope, and so do we. They keep us in the present, where the excitement of new ideas is fresh. And, as McPhee admits with no sense of shame or embarrassment, it is flattering, too.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

April 08, 2011 6:07 AM

Perfectly Reasonable Deviations

I recently mentioned discovering a 2005 collection of Richard Feynman's letters. In a letter in which Feynman reviews grade-school science books for the California textbook commission, I ran across the sentence that gave the book its title. It stood out to me for more than just the title:

[In parts of a particular book] the teacher's manual doesn't realize the possibility of correct answers different from the expected ones and the teacher instruction is not enough to enable her to deal with perfectly reasonable deviations from the beaten track.

I occasionally see this in my daughters' math and science instruction, but mostly I've been surprised at how well their teachers do. The textbooks often suffer from the ills that Feynman complains about (too many words, rules, and laws to memorize, with little emphasis on understanding). The teachers do a reasonable job making sense of it all. It's a tough gig.

In many ways, university teachers have an easier job, but we face this problem, too. I'm not a great teacher, but one thing I think I've learned since the beginning of my time in the classroom is that students deviate from the beaten track in perfectly reasonable ways all the time. This is true of strong students and weak students alike.

Sometimes the reasonableness of the deviation is a result of my own teaching. I have been imprecise, or I've taught using implicit assumptions my students don't share. These students are learning in an uncertain space, and sometimes they learn differently than I intended. Of course, sometimes they learn the wrong thing, and I need to fix that. But when their deviations are reasonable, I need to recognize that. Sometimes we recognize the new idea and applaud the student for the deduction. Sometimes we discuss the deviation in detail, using the differences as an opportunity to learn more deeply.

Sometimes a reasonable deviation results simply from the creativity of the students. That's a good result, too. It creates a situation in which I am likely to learn as much as, or more than, my students do from the detour.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

March 26, 2011 12:08 PM

Narrow Caution and Noble Issue

A cautionary note from John Ruskin, in The Stones of Venice:

We are to take care how we check, by severe requirement or narrow caution, efforts which might otherwise lead to a noble issue; and, still more, how we withhold our admiration from great excellencies, because they are mingled with rough faults.

Ruskin was a permission giver.

I found this passage in The Seduction, an essay by Paula Marantz Cohen. Earlier in the piece, she related that many of her students were "delighted" by Ruskin's idea that "the best things shall be seldomest seen in their best form". The students...

... felt they were expected to be perfect in whatever it was they undertook seriously (which might be why they resisted undertaking much seriously).

In the agile software development world, we recognize that fear even short of perfectionism can paralyze developers, and we take steps to overcome the danger (small steps, tests first, pair programming). We teachers need to remember that our high school and college students feel the same way -- and that their feelings are often made even more formidable by the severe requirement and narrow caution by which we check their efforts.

Marantz Cohen closes her essay by anticipating that other professors might not like her new approach to teaching, because it "dumbs things down" with shorter reading assignments, shorter writing assignments, and classroom discussion that allows personal feelings. It seems to me, though, that getting students to connect with literature, philosophy, and ideas bigger than themselves is an important win. One advantage of shorter writing assignments was that she was able to give feedback more frequently and thus focus more directly on specific issues of structure and style. This is a positive trade-off.

In the end she noted that, despite working from a much squishier syllabus and with a changing reading list, students did not complain about grades. Her conclusion:

I suspect that students focus on grades when they believe that this is all they can get out of a course. When they feel they have learned something, the grade becomes less important.

I have felt this, both as student and as teacher. When most of the students in one of my classes are absorbed in their grade, it usually means that I am doing something wrong with the class.

Go forth this week and show admiration for the great excellencies in your students, your children, and your colleagues, not only despite the excellencies being mingled with rough faults, but because they are so.


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Teaching and Learning

March 24, 2011 10:23 PM

Teachers and Programming Languages as Permission Givers

Over spring break, I read another of William Zinsser's essays at The American Scholar, called Permission Givers. Zinsser talks about the importance of people who give others permission to do, to grow, and to explore, especially in a world that offers so many freedoms but is populated with people and systems that erect barriers at every turn.

My first reaction to the paper was as a father. I have recognized our elementary and high schools as permission-denying places in a way I didn't experience them as a student myself, and I've watched running the gauntlet of college admissions cause a bright, eager, curious child to wonder whether she is good enough after all. But my rawest emotions were fear and hope -- fear that I had denied my children permission too often, and hope that on the whole I had given them permission to do what they wanted to do and become who they can be. I'm not talking about basic rules; some of those are an essential part of learning discipline and even cultivating creativity. I mean encouraging the sense of curiosity and eagerness that happy, productive people carry through life.

The best teachers are permission givers. They show students some of what is possible and then create conditions in which students can run with ideas, put them together and take them apart, and explore the boundaries of their knowledge and their selves. I marvel when I see students creating things of beauty and imagination; often, there is a good teacher to be found there as well. I'm sad whenever I see teachers who care deeply about students and learning but who sabotage their students' experience by creating "a long trail of don'ts and can'ts and shouldn'ts", by putting subtle roadblocks along the path of advancement.

I don't think that by nature I am a permission giver, but over my career as a teacher I think I've gotten better. At least now I am more often aware of when I'm saying 'no' in subtle and damaging ways, so that I can change my behavior, and I am more often aware of the moments when the right words can help a student create something that matters to them.

In the time since I read the essay, another strange connection formed in my mind: Some programming languages are permission givers. Some are not.

Python is a permission giver. It doesn't erect many barriers that get in the way of the novice, or even the expert, as she explores ideas. Ruby is a permission giver, too, but not to the extent that Python is. It is sufficiently more complex, syntactically and semantically, that things don't always work the way one first expects. As a programmer, I prefer Ruby for the expressiveness it affords me, but I think that Python is the more empowering language for novices.

Simplicity and consistency seem to be important features of permission-giving languages, but they are probably not sufficient. Another of my favorite languages, Scheme, is simple and offers a consistent model of programming and computation, but I don't think of it as a permission giver. Likewise Haskell.

I don't think that the tired argument between static typing and dynamic typing is at play here. Pascal had types but it was a permission giver. Its descendant Ada, not so much.

I know many aficionados of other languages often feel differently. Haskell programmers will tell me that their language makes them so productive. Ada programmers will tell me how their language helps them build reliable software. I'm sure they are right, but it seems to me there is a longer learning curve before some languages feel like permission givers to most people.

I'm not talking about type safety, power, or even productivity. I'm talking about the feeling people have when they are deep in the flow of programming and reach out for something they want but can't quite name... and there it is. I admit, too, that I also have beginners in mind. Students who are learning to program, more than experts, need to be given permission to experiment and persevere.

I also admit that this idea is still new in my mind and is almost surely colored heavily by my own personal experiences. Still, I can't shake the feeling that there is something valuable in this notion of language as permission giver.

~~~~

If nothing else, Zinsser's essay pointed me toward a book I'd not heard of, Michelle Feynman's Reasonable Deviations from the Beaten Track, a collection of the personal and professional letters written by her Nobel Prize-winning father. Even in the most mundane personal correspondence, Richard Feynman tells stories that entertain and illuminate. I've only begun reading and am already enjoying it.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 23, 2011 8:13 PM

SPLASH 2011 and the Educators' Symposium

I have been meaning to write about SPLASH 2011 and especially the Educators' Symposium for months, and now I find that Mark Guzdial has beaten me to the punch -- with my own words, no less! Thanks to Mark for spreading the news. Go ahead and read his post if you'd like to see the message I sent to the SIGCSE membership calling for their submissions. Or visit the call for participation straightaway and see what the program committee has in mind. Proposals are due on April 8, only a few weeks hence. Dream big -- we are.

For now, though, I will write the entry I've been intending all these months:

The Next Ten Years of Software Education

SPLASH 2011 in Portland, Oregon

By the early 2000s, I had become an annual attendee of OOPSLA and had served on a few Educators' Symposium program committees. Out of the blue, John Vlissides asked me to chair the 2004 symposium. I was honored and excited. I eventually got all crazy and cold called Alan Kay and asked him to deliver our keynote address. He inspired us with a vision and ambitious charge, which we haven't been able to live up to yet.

When I was asked to chair again in 2005, we asked Ward Cunningham to deliver our keynote address. He inspired us with his suggestions for nurturing simple ideas and practices. It was a very good talk. The symposium as a whole, though, was less successful at shaking things up than in 2004. That was likely my fault.

I have been less involved in the Educators' Symposium since 2006 or 2007, and even less involved in OOPSLA more broadly. Being department head keeps me busy. I have missed the conference.

Fast-forward to 2010. OOPSLA has become SPLASH, or perhaps more accurately been moved under the umbrella of SPLASH. This is something that we had talked about for years. 2011 conference chair Crista Lopes was looking for an Educators' Symposium chair and asked me for any names I might suggest. I admitted to her that I would love to get involved again, and she asked me to chair. I'm back!

OOPSLA was OO, or at least that's what its name said. It had always been about more, but the name brand was of little value in a corporate world in which OOP is mainstream and perhaps even passé. Teaching OOP in the university and in industry has changed a lot over the last ten years, too. Some think it's a solved problem. I think that's not true at all, but certainly many people have stopped thinking very hard about it.

In any case, conference organizers have taken the plunge. SPLASH != OOPSLA and is now explicitly not just about OO. The new conference acknowledges itself to be about programming more generally. That makes the Educators' Symposium something new, too, something more general. This creates new opportunities for the program committee, and new challenges.

We have decided to build the symposium around a theme of "The Next Ten Years". What ideas, problems, and technologies should university educators and industry trainers be thinking about? The list of possibilities is long and daunting: big data, concurrency, functional programming, software at Internet scale... and even our original focus, object-oriented programming. Our goal for the end of the symposium is to be able to write a report outlining a vision for software development education over the next ten years. I don't expect that we will have many answers, if any, but I do expect that we can at least begin to ask the right questions.

And now here's your chance to help us chart a course into the future, whether you plan to submit a paper or proposal to the symposium:

Who would be a killer keynote speaker?

What person could inspire us with a vision for computer science and software, or could ask us the questions we need to be asking ourselves?

Finding the right keynote speaker is one of the big questions I'm thinking about these days. Do you have any ideas? Let me know.

(And yes, I realize that Alan Kay may well still be one of the right answers!)

In closing, let me say that whenever I say "we" above, I am not speaking royally. I mean the symposium committee that has graciously offered their time and energy to designing and implementing this challenge: Curt Clifton, Danny Dig, Joe Bergin, Owen Astrachan, and Rick Mercer. There are also a handful of people who have been helping informally. I welcome you to join us.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

March 10, 2011 9:21 PM

SIGCSE Day 2 -- Limited Exposure

For a variety of reasons, I am scheduled for only two days at SIGCSE this year. I did not realize just how little time that is until I arrived and started trying to work in all the things I wanted to do: visit the exhibits, attend a few sessions and learn a new thing or two, and -- most important -- catch up with several good friends.

It turns out that's hard to do in a little more than a day. Throw in a bout of laryngitis in the aftermath of a flu-riddled week, and the day passed even more quickly. Here are a few ideas that stood out from sessions on either end of the day.

Opening Keynote Address

Last March I blogged about Matthias Felleisen winning ACM's Outstanding Educator Award. This morning, Felleisen gave the opening address for the conference, tracing the evolution of his team's work over the last fifteen years in a smooth, well-designed talk. One two-part idea stood out for me: design a smooth progression of teaching languages that are neither subset nor superset of any particular industrial-strength language, then implement them, so that your tools can support student learning as well as possible.

Matthias's emphasis on the smooth progression reminds me of Alan Kay's frequent references to the fact that English-speaking children learn the same language used by Shakespeare to write our greatest literature, growing into it over time. One of his goals for Smalltalk, or whatever replaces it, is a language that allows children to learn programming and grow smoothly into more powerful modes of expression as their experience and cognitive skills grow.

Two Stories from Scratch

At the end of the day, I listened in on a birds-of-a-feather session about Scratch, mostly in K-12 classrooms. One HS teacher described how his students learn to program in Scratch and then move onto a "real language". As they learn concepts and vocabulary in the new language, he connects the new terms back to their concrete experiences in Scratch. This reminded me of a story in one of Richard Feynman's books, in which he outlines his father's method of teaching young Richard science. He didn't put much stock in learning the proper names of things at first, instead helping his son to learn about how things work and how they relate to one another. The names come later, after understanding. One of the advantages of a clean language such as Scratch (or one of Felleisen's teaching languages) is that it enables students to learn powerful ideas by using them, not by memorizing their names in some taxonomy.

Later in the session, Brian Harvey told the story of a Logo project conducted back in the 1970s, in which each 5th-grader in a class was asked to write a Logo program to teach a 3rd-grader something about fractions. An assignment so wide open gave every student a chance to do something interesting, whatever they themselves knew about fractions. I need to pull this trick out of my teaching toolbox a little more often.

(If you know of a paper about this project, please send me a pointer. Thanks.)

~~~~

There is one unexpected benefit of a short stay: I am not likely to leave any dynamite blog posts sitting in the queue to be written, unlike last year and 2008. Limited exposure also limits the source of triggers!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 09, 2011 11:31 PM

SIGCSE Day 1 -- Innovative Approaches for Introducing CS

SIGCSE 2011 in Dallas, Texas

I'm in Dallas for a couple of days for SIGCSE 2011. I owe my presence to Jeff Forbes and Owen Astrachan, who organized a pre-conference workshop on innovative approaches for introducing computer science and provided support for its participants, courtesy of their NSF projects.

The Sheraton Dallas is a big place, and I managed to get lost on the way to the workshop this morning. As I entered the room fifteen minutes late, Owen was just finishing up talking about something called the Jinghui Rule. I still don't know what it is, but I assume it had something to do with us not being able to use our laptops during much of the day. This saves you from reading a super-long breakdown of the day, which is just as well. The group will produce a report soon, and I'm sure Jeff and Owen will do a more complete job than I might -- not least of which because we all produced summaries of our discussion throughout the day, presented them to the group as a whole, and submitted them to our leaders for their use.

The topics we discussed were familiar ones, including problems, interdisciplinary approaches, integrative approaches, motivating students, and pedagogical issues. Even still, the discussions were often fresh, as most everyone in the room wrestles with these topics in the trenches and is constantly trying new things.

I did take a few notes the old-fashioned way about some things that stood out to me:

  • Owen captured the distinction between "interdisciplinary" and "integrative" well; here is my take. Interdisciplinary approaches pull ideas from other areas of study into our CS courses as a way to illustrate or motivate ideas. Integrative approaches push CS techniques out into courses in other areas of study where they become a native part of how people in those disciplines work.
  • Several times during the day people mentioned the need to "document best practices" of various sorts. Joe Bergin was surely weeping gently somewhere. We need more than disconnected best practices; we need a pattern language or two for designing certain kinds of courses and learning experiences.
  • Several times during the day talk turned to what one participant termed student-driven discovery learning. Alan Kay's dream of an Exploratorium never strays far from my mind, especially when we talk about problem-driven learning. We seem to know what we need to do!
  • A group of us discussed problems and big data in a "blue sky" session, but the talk was decidedly down-to-earth: the need to format, sanitize, and package data sets for use in the classroom.
  • One of the biggest challenges we face is the invisibility of computing today. Most everyone at the workshop today views computing's ubiquity as a great opportunity, and I often feel the same way. But I fear the reality is that, for most everyone else, computing has disappeared into the background noise of life. Convincing them that it is cool to understand how, say, Facebook works may be a tougher task than we realize.

Finally, Ge Wang demoed some of the cool things you can do with an iPhone using apps like those from Smule. Wow. That was cool.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

February 11, 2011 4:15 PM

DIY Empirical Analysis Via Scripting

In my post on learning in a code base, I cited Michael Feathers's entry on measuring the closure of code. Michael's entry closes with a postscript:

It's relatively easy to make these diagrams yourself. grep the log of your VCS for lines that depict adds and modifications. Strip everything except the file names, sort it, run it through 'uniq -c', sort it again, and then plot the first column.

Ah, the Unix shell. A few years ago I taught a one-unit course on bash scripting, and I used problems like this as examples in class. Many students are surprised to learn just how much you can do with a short pipeline of Unix commands, operating on plain text data pulled from any source.

You can also do this sort of thing almost as easily in a more full-featured scripting language, such as Python or Ruby. That is one reason languages like them are so attractive to me for teaching programming in context.
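As a minimal sketch of what that looks like in Python: the snippet below replicates Feathers's `sort | uniq -c | sort` pipeline over a VCS log. The log text here is invented sample data; in practice you might capture real output, such as that of `git log --name-only --pretty=format:`, instead.

```python
# Count how often each file appears in a VCS log, then rank the counts --
# a Python stand-in for grep / sort / uniq -c / sort.
from collections import Counter

# Invented sample log: one touched file name per line.
sample_log = """\
src/parser.py
src/parser.py
src/lexer.py
src/main.py
src/parser.py
src/lexer.py
"""

# Keep non-blank lines; blank lines would separate commits in real output.
files = [line for line in sample_log.splitlines() if line.strip()]
counts = Counter(files)

# Highest-churn files first.
for name, n in counts.most_common():
    print(n, name)
```

Running this prints `3 src/parser.py`, `2 src/lexer.py`, `1 src/main.py` -- the first column is exactly the series one would plot.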

Of course, using a powerful, fun language in CS1 creates a new set of problems for us. A while back, a CS educator on the SIGCSE mailing list pointed out one:

Starting in Python postpones the discovery that "CS is not for me".

After years of languages such as C++, Java, and Ada in CS1, which hastened the exit of many a potential CS major, it's ironic that our new problem might be students succeeding too long for their own good. When they do discover that CS isn't for them, they will be stuck with the ability to write scripts and analyze data.

With all due concern for not wasting students' time, this is a problem we in CS should willingly accept.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

February 10, 2011 4:04 PM

This and That: Problems, Data, and Programs

Several articles caught my eye this week which are worth commenting on, but at this point none has triggered a full entry of its own. Some of my favorite bloggers do what they call "tab sweeps", but I don't store cool articles in browser tabs. I cache URLs and short notes to myself. So I'll sweep up three of my notes as a single entry, related to programming.

Programmer as Craftsman

Seth Godin writes about:

... the craftsperson, someone who takes real care and produces work for the ages. Everyone else might be a hack, or a factory guy or a suit or a drone, but a craftsperson was someone we could respect.

There's a lot of talk in the software development world these days about craftsmanship. All the conversation and all the hand-waving boil down to this. A craftsman is the programmer we all respect and the programmer we all want to be.

Real Problems...

Dan Meyer is an erstwhile K-12 math teacher who rails against the phony problems we give kids when we ask them to learn math. Textbooks do so in the name of "context". Meyer calls it "pseudocontext". He gives an example in his entry Connect These Two Dots, and then explains concisely what is wrong with pseudocontext:

Pseudocontext sends two signals to our students, both false:
  • Math is only interesting in its applications to the world, and
  • By the way, we don't have any of those.

Are we really surprised that students aren't motivated to practice and develop their craft on such nonsense? Then we do the same things to CS students in our programming courses...

... Are Everywhere These Days

Finally, Greg Wilson summarizes what he thinks "computational science" means in one of his Software Carpentry lessons. It mostly comes down to data and how we understand it:

It's all just data.

Data doesn't mean anything on its own -- it has to be interpreted.

Programming is about creating and composing abstractions.

...

The tool shapes the hand.

We drown in data now. We collect data faster than we can understand it. There is room for more programmers, better programmers, across the disciplines and in CS.

We certainly shouldn't be making our students write Fahrenheit-to-Celsius converters or processing phony data files.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

February 07, 2011 9:03 PM

Teaching and Learning in a Code Base

In a pair of tweets today, Brian Marick offered an interesting idea for designing instruction for programmers:

A useful educational service: examine a person's codebase. Devise a new feature request that would be hard, given existing code and skill...

... Keep repeating as the codebase and skill improve. Would accelerate a programmer's skill at dealing with normal unexpected change.

This could also be a great way to help each programmer develop competencies that are missing from his or her skill set. I like how this technique would create an individualized learning experience for each student. The cost, of course, is in the work needed by the instructor to study the codebases and devise the feature requests. With a common set of problems to work on, over time an instructor might be able to develop a checklist of (codebase characteristic, feature request) pairs that covered a lot of the instructional space. This idea definitely deserves some more thought!

Of course, we can sometimes analyze valuable features of a codebase with relatively simple programs. Last month, Michael Feathers blogged about measuring the closure of code, in which he showed how we can examine the Open/Closed Principle in a codebase by extracting and plotting the per-file commit frequencies of source files in a project's version control repository. Feathers discussed how developers could use this information intentionally to improve the quality of their code. I think this sort of analysis could be used to great effect in the classroom. Students could see the OCP graphically for a number of projects and, combined with their programming knowledge of the projects, begin to appreciate what the OCP means to a programmer.
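The data behind such a closure plot is easy to compute. Here is a small sketch, using an invented list of commits (each reduced to the files it touched): tally per-file change counts and sort them in descending order. Plotting that series shows how "closed" a codebase is; a steep drop-off means most files are rarely reopened after they are written.

```python
# From a list of commits, compute per-file change frequencies and
# the descending series one would plot for an Open/Closed-style view.
from collections import Counter

# Invented sample data: each commit is just the files it modified.
commits = [
    ["app/router.rb", "app/views/home.rb"],
    ["app/router.rb"],
    ["lib/util.rb", "app/router.rb"],
    ["app/views/home.rb"],
    ["lib/util.rb"],
]

changes = Counter(f for commit in commits for f in commit)
series = sorted(changes.values(), reverse=True)

print(series)  # the y-values to hand to a plotting tool; here [3, 2, 2]
```

Students could run this against repositories they know and compare the shapes of the resulting curves.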

A serendipitous side effect would be for students to experience CS as an empirical discipline. This would help us prepare developers who, like Feathers, use analytical data in their practice, and CS grads who understand the ways in which CS can and should be an empirical endeavor.

I actually blogged a bit about studying program repositories last semester, for the purpose of understanding how to design better programming languages. That work used program repositories for research purposes. What I like about Marick's and Feathers's recent ideas is that they bring to mind how studying a program repository can aid instruction, too. This didn't occur to me so much back when one of my grad students studied relationships among open-source software packages with automated analysis of a large codebase. I'm glad to have received a push in that direction now.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

February 03, 2011 3:30 PM

Science and Engineering in CS

A long discussion on the SIGCSE members listserv about math requirements for CS degrees has drifted, as most curricular discussions seem to do, to "What is computer science?" Somewhere along the way, someone said, "Computer Science *is* a science, by name, and should therefore be one by definition". Brian Harvey responded:

The first thing I tell my intro CS students is "Computer Science isn't a science, and it isn't about computers." (It should be called "information engineering.")

I think that this assertion is wrong, at least without a couple of "only"s thrown in, but it is a great way to start a conversation with students.

I've been seeing the dichotomy between CS as science and CS as system-building again this semester in my Intelligent Systems course. The textbook my students used in their AI course last semester is, like nearly every undergrad AI text, primarily an introduction to the science of AI: a taxonomy of concepts, results of research that help to define and delimit the important ideas. It contains essentially no pragmatic results for building intelligent systems. Sure, students learn about state-space search, logic as a knowledge representation, planning, and learning, along with algorithms for the basic methods of the field. But they are not prepared for the fact that, when they try to implement search or logical inference for a given problem, they still have a huge amount of work to do, with little guidance from the text.

In class today, we discussed this gap in two contexts: the gap one sees between low-level programming and high-level programming languages, and the difference between general-purpose languages and domain-specific languages.

My students seemed to understand my point of view, but I am not sure they really grok it. That happens best after they gain experience writing code and feel the gap while making real systems run. This is one of the reasons I'm such a believer in projects, real problems, and writing code. We don't always understand ideas until we see them in concrete form.

I don't imagine that intro CS students have any of the experience they need to understand the subtleties academics debate about what computer science is or what computer scientists do. We are almost surely better off asking them to do something that matters to them, whether a small problem or a larger project. In these problems and projects, students can learn from us and from their own work what CS is and how computer scientists think.

Eventually, I hope that the students writing large-ish AI programs in my course this semester learn just how much more there is to writing an intelligent system than just implementing a general-purpose algorithm from their text. The teams that are using pre-existing packages as part of their system might even learn that integrating software systems is "more like performing a heart transplant than snapping together LEGO blocks". (Thanks to John Cook for that analogy.)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

January 28, 2011 1:35 PM

A Great Job

The latest newsletter from my graduate alma mater mentioned that one of my favorite profs is retiring. I always considered myself fortunate to have studied AI with him in my first term of grad school and later to have worked as a TA with him when he taught the department's intro course. I learned a lot from both experiences. The AI course taught me a lot about how to be a scientist. The intro course taught me a lot about how to be a teacher.

I often hear about how faculty at research schools care only about their research and therefore make for bad teachers in the undergraduate classroom. There are certainly instances of this stereotype, but I think it is not generally true. Active researchers can be bad teachers, but then again so can faculty who aren't active in research. Working as this prof's TA, I saw that even good researchers can be very good undergraduate teachers. He cared about his students, cared about their learning, and prepared his classes carefully. Those are important ingredients for good teaching no matter who is doing it.

I dropped him a quick e-mail to thank him for all his guidance while I was in grad school and to wish him well in retirement. In his response, he expressed a sentiment many teachers will recognize:

I'm sure you have been in the university business long enough to realize what a great job we have. Working with students such as you has been very rewarding.

I have, indeed, been in the university business long enough to realize what a great job we professors have. Over the years, I've had the good fortune to work with some amazing students, both undergrad and grad. That joy comes as part of working with a lot of wonderful young people along their path to professional careers and meaningful lives.

When you add the opportunity to work with students and the opportunity to explore interesting ideas and follow where they might lead us, you get a nearly unbeatable combination. It's good for me to step back every once in a while and remember that.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

January 21, 2011 10:38 AM

A Lot To Learn This Semester

It's been a good start to semester, though too busy to write much.

My Intelligent Systems course shows promise, with four teams of students taking on some interesting projects: recognizing faces, recognizing sign language, classifying transactions in a big-data finance context, and a traditional knowledge-based system to do planning. I am supervising a heavier-than-usual set of six individual research projects, including a grad student working on progressive chess and undergrads working on projects such as a Photoshop-to-HTML translator, an interpreter for a homoiconic OO language, and pure functional data structures. This all means that I have a lot to learn this semester!

I'm also still thinking about the future, as the dean's 3-year review of my work as department head proceeds. Yesterday I watched the video of Steve Jobs's commencement address at Stanford. This time around, his story about the freeing power of death grabbed special attention. Jobs gets up each day and asks himself, "If this is your last day on Earth, will you be happy doing what you are doing today?" If the answer is "no" too many days in a row, then he knows he needs to make a change.

That's a bracing form of daily ritual. When it comes to this level of self-honesty, on most days I feel more like Howard W. Campbell, Jr. than Steve Jobs. I think I also finally grok this aphorism, a favorite saying of a friend: "Real men don't accept tenure". It can be an unhealthy form of academic immortality.

The question I ask myself more often than not these days is, "Are you programming?" Let me substitute "programming" for "writing" in a passage by Norman Fischer quoted at What I've Learned So Far:

... programming is a sort of absolute bottom line. "Are you programming?" If the answer is yes, then no matter what else is going on, your life -- and all of life -- is basically OK. You are who you are supposed to be, and your existence makes sense. If the answer is no, then you are not doing well, your relationships and basic well-being are in jeopardy, and the rest of the world is dark and problematic.

A day without writing code is like, you know, night. (With apologies to Steve Martin.)


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

January 13, 2011 3:42 PM

A New Semester Begins

Every new course starts with a tingle of the unknown. How will this semester go? Will I enjoy it? Will my students? Will they learn what they want and need to learn? Will I?

No matter how many years I teach, or how well I know the students in my class, I feel the same way. I loved this as a student, too. One of the reasons I liked my universities' trimester systems was that I got to feel it three times a year, not just two.

I am completing the first week of teaching Intelligent Systems, one of my department's capstone project courses for majors. I used to teach this course every spring, but I haven't taught the course since 2002, when I went on sabbatical and let a junior colleague take on our AI sequence. Imagine my surprise to find that this area of computer science has changed some in eight years! That makes teaching the course again fun and challenging.

Students are beginning to form teams and home in on systems to build. This group has an interesting history with me. Some I had in class last term. Some last took a class with me last fall or last summer. One last took a class with me in Fall 2006, when I introduced media computation to our CS1 course. A few are seeing me in class for the first time, after receiving department e-mail from me for years.

I feel some pressure teaching this course. Eight years is a long time in CS Time and in Internet Time. Change happens, and accelerates. I have to refamiliarize myself with what's state of the art. Not having taught AI over the same period, I have to refamiliarize myself with what students find interesting in AI these days. That's fun, and there's some comfort in knowing that AI has a certain evergreen appeal to young minds. Games, machine learning, and "real-world problems" always seem to interest some students.

More pressure... This is a course I prefer to teach with little or no lecture. Every day, I potentially face the question that scares most of us, at least a little bit: What will I do today? I have a general plan for the course, but I can't script it. Much of how the course proceeds depends on what the students think and do. I've been reactive so far, in what I feel is a good way. On the first day, I asked students to fill out a short survey on their background, interests, and goals for the course. On the second day, my remarks responded to what they wrote on the surveys and connected those answers with recent experiences of my own and with the sort of problems we face in computing these days. Among these is the way "big data" interacts with the time and space constraints we always face in CS.

I am excited.

My initial inclination after class was to tweet. That would have been quick and easy. Sometimes, that's the right way to go. 140 characters is perfect for a pithy observation. But I realized that my immediate observation after class was unfolding other thoughts about the week and the course, and about how I am feeling. This is one of the reasons I blog: to encourage myself to think further, to reflect more deeply, and to decide what things mean. So I blogged instead. My initial inclination for a tweet became the first line of my entry. It will also make a nice tweet announcing the blog entry!


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

January 03, 2011 3:33 PM

It Pays To Be Specific

While grading last month, one of my colleagues sent me a note:

A student handed in the last assembly language assignment. It was very well done. Very professional.

One tiny problem: it was in IBM PC assembly, not the IBM mainframe assembly that I have been teaching.

I guess it's always good to be specific when you order programming online.

This was funny and sad at the same time. Much has been made in the last couple of months of paper-writing for hire, and we in CS have been talking about the problem of seeking out solutions on-line, even custom-written programs, for a while now. But we seem to be entering a new world of programming for hire. It creates pragmatic challenges for instructors on top of what is already a challenging enough task.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 20, 2010 3:32 PM

From Occasionally Great to Consistently Good

Steve Martin's memoir, "Born Standing Up", tells the story of Martin's career as a stand-up comedian, from working shops at Disneyland to his peak as the biggest-selling concert comic ever. I like hearing people who have achieved some level of success talk about the process.

This was my favorite passage in the book:

The consistent work enhanced my act. I learned a lesson: It was easy to be great. Every entertainer has a night when everything is clicking. These nights are accidental and statistical: Like the lucky cards in poker, you can count on them occurring over time. What was hard was to be good, consistently good, night after night, no matter what the abominable circumstances.

"Accidental greatness" -- I love that phrase. We all like to talk about excellence and greatness, but Martin found that occasional greatness was inevitable -- a statistical certainty, even. If you play long enough, you are bound to win every now and then. Those wins are not achievements of performance so much as achievements of being there. It's like players and coaches in athletics who break records for the most X in their sport. "That just means I've been around a long time," they say.

The way to stick around a long time, as Martin was able to do, is to be consistently good. That's how Martin was able to be present when lightning struck and he became the hottest comic in the world for a few years. It's how guys like Don Sutton won 300+ games in the major leagues: by being good enough for a long time.

Notice the key ingredients that Martin discovered to becoming consistently good: consistent work; practice, practice, practice, and more practice; continuous feedback from audiences into his material and his act.

We can't control the lightning strikes of unexpected, extended celebrity or even those nights when everything clicks and we achieve a fleeting moment of greatness. As good as those feel, they won't sustain us. Consistent work, reflective practice, and small, continuous improvements are things we can control. They are all things that any of us can do, whether we are comics, programmers, runners, or teachers.


Posted by Eugene Wallingford | Permalink | Categories: General, Running, Software Development, Teaching and Learning

December 18, 2010 11:34 AM

Some People Get Stuff Done

On Thursday, my students presented their Klein compilers. Several of the groups struggled with code generation, which is a common experience in a compiler course. There are a lot of reasons, most prominently that it's a difficult task. (Some students' problems were exacerbated by not reading the textbook...)

Still, all four groups managed to get something working for a subset of the language. They worked really hard, sometimes trying crazy ideas, all in an effort to make it work.

Over the years, I have noticed that some students have this attribute: they find a way to get things done. Whatever constraints they face, even under sub-optimal conditions they create for themselves, they find a way to solve the problem or make the program meet the spec. I'm surprised how often some students get things done despite not really understanding what they are doing! (Of course, sometimes, you just gotta know stuff.)

This reminds me of a conversation I had at Clemson University back in 1994 or 1995. I was attending an NSF workshop on closed labs. We were at the midweek social event that seems de rigueur at weeklong workshops, chatting with some Clemson CS profs who had joined us for the evening and some AP CS graders who were also stationed at Clemson for the week. The AP folks were talking about grading programs, the sort our students write in AP CS, CS1, and CS2.

One Clemson prof was surprised by how much weight the CS1 profs give to style, documentation, and presentation, relative to correctness. He said that he taught CS1 differently. Programming is hard enough, he said, that if you can find students who can write code, you should do whatever you can to encourage and develop them. We can teach style, presentation, and documentation standards to those students. Trying to teach more advanced programming skills to people who produce nice-looking programs but don't seem to "get it" is much, much harder.

He was expressing a preference for students who get stuff done.

In practice, students who major in CS come from all across the spectrum. As a professor, I would like for my courses and our academic programs to help develop the "gets things done" attribute in our students, wherever they start along the spectrum. This requires that we help them grow not only in knowledge but also in work habits. Perhaps most important is to help students develop a certain attitude toward problems, a default way of confronting the challenges they invariably encounter. Attitudes do not change easily, but they can evolve slowly over time. We profs can set a good example in how we confront challenges in class. We can also create conditions that favor a resilient approach to tough problems.

It was good for me to end the semester -- and the 2010 calendar year -- seeing that, whether by nature or nurture, some of our CS majors manage to get stuff done. That bodes well for success when they leave here.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

December 14, 2010 4:25 PM

Rehabilitating the Reputation of Lecture

Last week, I followed a link from John Cook to the essay In Praise of the Lecture. This is countercultural among academics these days. As you probably know, lecture is out of fashion. My colleagues in the sciences still rely on lectures pretty heavily, some proudly and defiantly, but even many scientists speak as if lectures were a bad thing. They are dead, boring. Because they don't involve the student, they don't engage the student's mind. As a result, learning doesn't happen.

I have some courses in which I lecture almost not at all. In most others, I mix lecture and exercises to create a semi-active classroom. Right now, though, I am coming out of my compilers course, in which I lecture a lot. Looking back, I know that on many days I need to generate more excitement for my students. Yet the simple fact is that this course material seems to require more lecture than exercise on most days. Whatever compilers textbook I use, I feel as if I need to explain ideas and techniques for a second time in hopes of making them clear.

The thing is, lecture doesn't have to be boring. The class periods I recall from my undergrad days were all lectures, and I recall them fondly, with admiration for the speaker and deep appreciation for the subject. The best lecturers roused emotion in their audience and exhorted us to action. A great lecture has a different rhythm from other class sessions. "In Praise of the Lecture" reminds me that this is true.

I especially enjoyed Carter's section titled "The Lecture as a Personal Act". Here is a snippet:

Closely related to the idea of the lecture as a moral act is the idea of the lecture as a personal act. True education is always about personal growth toward the Truth. Some would charge the lecture with being the paradigmatic act of arrogance: one person stands there with all the truth while the rest sit quietly as supplicants. But this is to distill the university experience into only one of its moments, as if the slow movement of the pendulum to the right were never balanced by its eventual arc back to the left. To read in preparation and to argue in response are the parts of the educational experience set in motion by the lecture, which acts as a fulcrum.

In order for a lecture to work, students must be engaged: not in some active exercise within the class period, but across a broader swath of time. First, they must read in order to ready their minds. Then comes the lecture, which tells a second story. It is a living incarnation of what the author describes. Students see what the professor focuses on, what the professor chooses to leave in, what the professor omits, and what excites the professor. They are introduced to questions that lie at the edges of the ideas and techniques and lines of code. The best lecturers do this in context, so that students hear a story personalized to that time, place, and audience.

Finally, students must engage the ideas afterwards. In the humanities, this might take the form of discussion in which everyone in the room, lecturer included, argue their cases -- and maybe even someone else's case -- in order to really understand. That can happen in the sciences, too, but just as important for the science student is application: taking ideas into the lab to try them out, see when they work and when they don't, bend them to fit the peculiarities of a real problem. Only then are they and the professor ready to have a truly enlightening discussion.

The biggest problem with lecture may be that it expects so much of the students who hear it. They must read and think before class. They must apply ideas and think after. Building courses around in-class exercises may well lead students to think that all the learning will happen in class and discourage the kinds of learning that happen outside of class. I realize that this likely overstates the case, and romanticizes the lecture, but I still think there is some truth in it.

Lecture also expects more of the teacher. It's easy to give boring lectures by reading old notes or cribbing from the mind-numbing slide deck that seems to come as a matter of course with every textbook these days. To lecture well, the teacher must engage the material, too, and personalize it, both to herself and to her students. That's what Carter means when he says that a lecture occurs in a specific place and time. Whether we admit it or not, a lecture is a personal act, whether done well or not. It is and should be unique. Perhaps this is why I feel I have to rework every lecture every time I deliver it.

Actually, I want to. That's how I engage the material and make it say what Eugene Today feels and thinks. Even with a course on compiler construction, I change from offering to offering, and how I present the material must change. Besides, every lecture I give can be better. I need to work toward that goal a little every time I give them.

I'm not suggesting that every course be taught in a lecture format, or even every session of any course. I will continue to use in-class exercises, discussion, and any other technique I can to make my courses as effective as I can. I'm just saying that lecture has a place, and just maybe it can help us to create the right expectations for our learning environments.

In the end, whether we lecture or discuss, whether we use group exercises or clicker systems and multiple choice questions, it all probably comes down to this:

"My students do not learn what I teach them. They learn what I am excited about."

Please forgive me if this comes off sounding a bit too romantic for a computer science professor. The semester is coming to a close, and we are in the Christmas season.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 11, 2010 11:55 AM

Don't Forget to Solve Your Problem a Second Time

The Toyota Production System encourages workers to solve every problem two ways. First, you fix the issue at hand. Second, you work back through the system to determine the systemic reason that the issue arose. In this way, you eliminate this as a problem in the future, or at least make it less likely.

As I wrote recently, I don't think of writing software as a production process. But I do think that software developers can benefit from the "solve it twice" mentality. When we encounter a bug in our program or a design problem in our system or a delivery problem on a project, we should address the specific issue at hand. Then we should consider how we might prevent this sort of problem from recurring. There are several ways that we might improve:

  • We may need better or different tools.
  • We may be able to streamline or augment our process.
  • We may need to think about different things while working.
  • We may need to know something more deeply, or something new.

This approach would benefit us as university students and university professors, too. If students and professors thought more often in terms of continuous improvement and committed to fixing problems the second time, too, we might all have lower mean times to competence.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

December 10, 2010 3:43 PM

Mean Time to Competence and Mean Time to Mastery

I'm on a mailing list with a bunch of sports fans, many of whom are also CS academics and techies. Yesterday, one of the sysadmins posted a detailed question about a problem he was having migrating a Sun Solaris server to Ubuntu, due to a conflict between default /etc/group values on the two operating systems. For example, the staff group has a value of 50 on Ubuntu and a value of 10 on Solaris. On Ubuntu, the value 10 identifies the uucp group.

Another of the sysadmins on the list wrote an even more detailed answer, explaining how group numbers are supposed to work in Unix, describing an ideal solution, and outlining a couple of practical approaches involving files such as /etc/nsswitch.conf.

After I read the question and the answer, I sent my response to the group:

Thank goodness I'm a CS professor and don't have to know how all this works.

I was joking, of course. Some people love to talk about how CS profs are disconnected from computing in the real world, and this is the sort of real-world minutiae that CS profs might not know, or even need to know, if they teach courses on algorithms, intro programming, or software engineering. After seeing my friends' exchange and all that Unix guru-speak, I was happy to play to the stereotype.

Of course, the values used to implement Unix groups really are minutiae, something only practicing sysadmins need to know. The professor who teaches our systems courses is deeply versed in these details, as are the prof or two who manage servers for their courses and research. There certainly are CS academics divorced from reality, but you can say that of any group of people. Most know what they need to know, and a bit more.

Later in the day, I became curious about the problem my friends had discussed, so I dug in, studied a bit, and came to understand the problem and candidate solutions. Fun stuff.
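The conflict at the heart of the problem is easy to see in a few lines of Python. The entries below are made-up samples in the style of /etc/group, not data read from real systems:

```python
def parse_group(lines):
    """Map group name -> numeric GID from /etc/group-style lines."""
    groups = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Each entry is name:password:GID:member-list.
        name, _passwd, gid, _members = line.split(":", 3)
        groups[name] = int(gid)
    return groups

solaris = parse_group(["staff:x:10:", "sysadmin:x:14:"])
ubuntu = parse_group(["uucp:x:10:", "staff:x:50:"])

# Trap #1: a group name defined on both systems with different GIDs.
# Files copied over with numeric ownership intact end up in the wrong group.
for name in sorted(set(solaris) & set(ubuntu)):
    if solaris[name] != ubuntu[name]:
        print(f"{name}: Solaris GID {solaris[name]} vs Ubuntu GID {ubuntu[name]}")

# Trap #2: the same GID bound to different names on the two systems.
by_gid_ubuntu = {gid: name for name, gid in ubuntu.items()}
for name, gid in solaris.items():
    other = by_gid_ubuntu.get(gid)
    if other is not None and other != name:
        print(f"GID {gid}: '{name}' on Solaris but '{other}' on Ubuntu")
```

With 'staff' at GID 10 on Solaris and GID 50 on Ubuntu, and GID 10 meaning 'uucp' on Ubuntu, both checks fire -- which is exactly the situation my friend faced.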

Our department has recently been discussing our curriculum and in particular the goals of our B.A. and B.S. programs: What are the desired outcomes of each program? That is jargon for the simpler, "When students graduate, what would we like for them to be able to do?" For departments like ours, that means skills such as being able to write programs of a certain size, design a database, and choose appropriate data structures for solving a specific problem.

I was thinking about the Unix exchange on my mailing list in this context. Let's be honest... There is a lot of computer science, especially CS as it is applied in specific technology, that I don't know. Should I have known to fix my friend's problem? Should our students? What can we reasonably expect of our students or of ourselves as faculty?

Obviously, we professors can't know everything, and neither can our students. That is true in any discipline and especially in one like CS, which changes and grows so fast. This is one of the reasons it is so important for us to define clearly what we expect our programs to achieve. The space of computing knowledge and skills is large and growing. Without a pretty good idea of what we are hoping to achieve with our courses and curricula, our students could wander around in the space aimlessly and not have anything coherent to show for their time or effort. Or the money they spent paying tuition.

So, when I first read my friends' messages about Unix groups, I didn't know how to solve the problem. And that's okay, because I can't know everything about CS, let alone every arcane detail of every Unix distro out in the world. But I do have a CS degree and lots of experience. What difference should that make when I approach problems like this? If one of our graduates confronts this situation or one like it, how will they be different from the average person on the street, or even the average college graduate?

Whatever specific skills our graduates have, I think that they should be able to come up to speed on computing problems relatively quickly. They should have enough experience with a broad set of CS domains and enough theoretical background to be able to make sense of unfamiliar problems and understand candidate solutions. They should be able to propose solutions that make sense to a computer scientist, even if they lack detailed knowledge of the domain.

That is, CS graduates should have a relatively low mean time to competence in most sub-areas of computing, even the ones they have not studied in detail yet.

For a smaller set of sub-areas, our students should also have a relatively low mean time to mastery. These are the areas they have studied in some detail, either in class or through project work, but which they have not yet mastered. A CS degree should put them in a position to master them more quickly than most educated non-computer scientists.

Mean time to competence (MTTC) and mean time to mastery (MTTM) are actually a big part of how I distinguish a university education from a community college education when I speak to prospective students and their parents, though I have never used those terms before. They always wonder about the value of technical certifications, which community college programs often stress, and why my department does not make study for certification exams an explicit goal for students.

We hope, I tell them, to put the student in a position of being ready to prepare for any certification exam in relatively short order, rather than spending a couple of years preparing them to take a specific exam. We also hope that the knowledge and experience they gain will prepare them for the inevitable developments in our discipline that will eventually make any particular certification obsolete.

I am not certain if mean time to competence and mastery are student learning outcomes in the traditional educational jargon sense of the word, or whether they are abstractions of several more concrete outcomes. In any case, I am left thinking about how we can help to create these outcomes in students and how we can know whether we are successful or not. (The agile developer in me says, if we can't figure out how to test our program, we need to break the feature down into smaller, more concrete steps.)

Whatever the practical challenges of curriculum design and outcomes, I think MTTC and MTTM are essential results for a CS program to generate. Indeed, they are the hallmark of a university education in any discipline and of education in general.

Now, to figure out how to do that.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 07, 2010 4:24 PM

Compilers Course Winding Down

We are in the last week of classes for the semester. I have not blogged much on my compilers course this time around, in fact not since the first week of classes. Part of that is being busy in other parts of my job, and part is that the course has gone relatively smoothly. I have a good set of students who have made steady progress writing their compilers. Our source language is a simple functional language, which gives us a good chance of having all four teams produce complete or nearly-complete systems. So I've not faced many problems teaching the course, and problems are the most salient triggers for me writing blog entries!

One thing I have noticed going well this semester is that several of the teams have been consciously taking small steps in the development process. That has helped those teams collaborate well internally and to learn together as they write parts of their program.

One thing I know that I can improve next time around is how I introduce and illustrate code generation. This is always one of the toughest phases of the compiler for most students, because they have so little experience with machine organization and assembly language programming, and what little they have came two or even three years ago. This term, I reorganized some of the earlier material and had an extra day or so in which to discuss code generation, but I did not put the time to good use.

Students need to see more concrete examples of code generation sooner, to help them bridge the gap between the AST and the code their compiler must emit. I fell into a common trap for professors: talking about things a bit too much and not showing and doing things often enough. I already have some ideas for how to fix this in the next iteration of the course. Prominent among them is working in class with the students to write small assembly language snippets and to produce a small code generator that illustrates some of the important ideas we discuss.
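A code generator small enough for one class session might look like this sketch. The AST node shapes and the PUSH/ADD/MUL instruction names are invented for illustration; they are not the actual target of our Klein compilers:

```python
# Generate stack-machine "assembly" from a tiny expression AST,
# represented with nested tuples such as ("+", ("num", 1), ("num", 2)).

def gen(ast):
    """Emit postorder stack-machine instructions for an expression tree."""
    kind = ast[0]
    if kind == "num":                      # leaf: ("num", 7)
        return [f"PUSH {ast[1]}"]
    if kind in ("+", "*"):                 # interior: ("+", left, right)
        op = {"+": "ADD", "*": "MUL"}[kind]
        # Evaluate operands first, leaving their values on the stack,
        # then apply the operator -- the essence of postorder code gen.
        return gen(ast[1]) + gen(ast[2]) + [op]
    raise ValueError(f"unknown node: {kind}")

# (1 + 2) * 3
tree = ("*", ("+", ("num", 1), ("num", 2)), ("num", 3))
for instr in gen(tree):
    print(instr)
```

A dozen lines like these let students trace exactly how a tree walk turns into a linear instruction sequence, before they face registers, frames, and the rest of a real target machine.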

Fortunately, my students this time around seem to be on the road to success, whatever the shortcomings in how I've taught the course. This comes through their own efforts and through asking lots of questions outside of class. Good students usually make something good happen regardless of the conditions they face. We professors need to be thankful for this more often!

As Mike Feathers points out in his recent RailsConf 2010 address, we are all novices some of the time, whether it is in our problem domain or our solution domain. The real key is, do we learn something and get better? My students this semester seem to be doing that. I hope I am, too.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

October 26, 2010 4:58 PM

Mindset, Faith, and Code Retreats

If I were more honest with myself, I would probably have to say something like this more often:

He also gave me the white book, XP Explained, which I dismissed outright as rubbish. The ideas did not fit my mental model and therefore they were crap.

Like many people, I am too prone to impose my way of thinking and working on everything. Learning requires changing how I think and do, and that can only happen when I don't dismiss new ideas as wrong.

I found that passage in Corey's Code Retreat, a review of a code retreat conducted by Corey Haines. The article closes with the author's assessment of what he had learned over the course of the day, including this gem:

... faith is a necessary component of design ...

This is one of the hardest things for beginning programmers to understand, and that gets in the way of their learning. Without much experience writing code, they often are overwhelmed by the uncertainty that comes with making anything that is just beyond their experience. And that is where the most interesting work lies: just beyond our experience.

Runners training for their first marathon often feel the same way. But experience is no antidote for this affliction. Despair and anger are common emotions, and they sometimes strike us hardest when we know how to solve problems in one way and are asked to learn a new way to think and do.

Some people are naturally optimistic and open to learning. Others have cultivated an attitude of openness. Either way, a person is better prepared to have faith that they will eventually get it. Once we have experience, our faith is buttressed by our knowledge that we probably will reach a good design -- and that, if we don't, we will know how to respond.

This article testifies to the power of a reflective code retreat led by a master. After reading it, I want to attend one! I think this would be a great thing for our local software community to try. For example, a code retreat could help professional programmers grok functional programming better than just reading books about FP or about the latest programming language.

~~~~

The article also opens with a definition of engineering I had not seen before:

... the strategy for causing the best change in a poorly understood situation within the available resources ...

I will think about it some more, but on first and second readings, I like this.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

October 10, 2010 10:55 AM

Teachers Need Relational Understanding

Last week I was pointed in the direction of Richard Skemp's 1976 article, Relational Understanding and Instrumental Understanding, on how people's different senses of the word "understanding" can create dissonance in learning. Relational understanding is what most of us usually mean by understanding: knowing how to solve a problem and knowing on a deeper level why that works and how it relates to the fundamental ideas of the domain. Sometimes, though, we settle for instrumental understanding, simply knowing the rules that enable us to solve a problem. Skemp describes the cognitive mismatch that occurs when a teacher is thinking in terms of one kind of understanding and the student in terms of the other. Throw in a textbook that aims at one or the other, and it's no wonder that teachers and students sometimes have a difficult time working together.

The paper is a little long for its message, but Skemp covers a lot of ground, including his own slow realization that teachers need to see this dissonance as a concrete challenge to their efforts to help students learn. He even considers the sort of situations in which a teacher may have to settle for giving students an instrumental understanding of a topic. But one thing is clear:

... nothing else but relational understanding can ever be adequate for a teacher.

I know that when I am weakest as a teacher, it is either because I am underprepared for a particular lesson or because my understanding of a topic is instrumental at best.

I often hear teachers at all levels talk about teaching a new course by staying one chapter ahead of the students in the textbook. While there may be situations in which this approach is unavoidable, it is always less than ideal, and any teacher who does it almost necessarily shortchanges the students. Teaching is so much more than presenting facts, and if all the teacher knows today is the facts his or her students will be seeing a week or so hence, there is no way to help students tie ideas together or push beyond 'how' to 'why'.

When I think about teaching computer science and especially programming, I think of three levels of activity that give me different levels of understanding:

  1. reading about something, even extensively
  2. doing something, applying knowledge in practice
  3. understanding something at a deeper level

At the third level, I know not only how to solve problems, but when and how to break the rules, and when and how to reason from first principles to create a new method of attack. When I am at my best as a teacher, I feel fluid in the classroom, knowing that my deep understanding of an area has prepared me for nearly any situation I might encounter.

I'll close with this quote from Skemp, which alone was worth reading the paper for:

... there are two kinds of simplicity: that of naivety; and that which, by penetrating beyond superficial differences, brings simplicity by unifying.

Many people talk about the virtue of simplicity, but this sentence captures in fewer than two dozen words two very different senses of the word and expresses that the best kind of simplicity both grasps differences and unifies over them.

That is what relational understanding is about.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

October 02, 2010 9:38 PM

Your Voice Cannot Command

I've been carrying this tune in my mind for the last couple of days, courtesy of singer-songwriter and fellow Indianapolis native John Hiatt:

So whatever your hands find to do
You must do with all your heart
There are thoughts enough
To blow men's minds and tear great worlds apart

...

Don't ask what you are not doing
Because your voice cannot command
In time we will move mountains
And it will come through your hands

One of my deepest hopes as a parent is that I can help my daughters carry this message with them throughout their lives.

I also figure I'll be doing all right as a teacher if my students take this message with them when they graduate, whether or not they remember anything particular about design patterns or lambda.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

October 01, 2010 3:00 PM

Theory and Practice in Education Classrooms

The Fordham Institute has just published Cracks in the Ivory Tower?, a national survey of K-12 education professors. The publication page summarizes one finding that points out a gap between what happens in education courses and what many of us might think happens there:

The [education] professors see themselves as philosophers and evangelists, not as master craftsmen sharing tradecraft with apprentices and journeymen.

I have seen this gap in my own university's College of Education, with many of its required courses not adding much to the daily practice of teaching. Unfortunately, I've seen another gap in the tension between programs focused on teacher training, like the one at my school, and theoretical research-driven education programs at our R-1 sister schools. Many of the professors at those schools view what is taught at our school as too applied!

Most of us in computer science who are looking to help K-12 teachers use ideas from CS in their courses tend to be focused on helping teachers in the trenches. There isn't much value in us teaching, say, high school teachers a bunch of CS theory that is disconnected from what they do in their classrooms. They already have so much to teach and test that there really isn't room for a bunch of new content, and besides, most of them aren't all that interested in CS theory for its own sake. (That's true of many CS people themselves, of course.)

With help from Google this summer, my department offered CS4HS Iowa 2010, to introduce computing to K-12 science and math teachers using simulations in Scratch. The course looked at some CS ideas at the abstract level, but the meat of the course was practical techniques, both technical and pedagogical. Our hope was that an 8th grade math teacher or an 11th grade science teacher might be able to use computing to help them teach their own courses more effectively.

Mark Guzdial responds to the Fordham Institute report with several thoughtful observations. I certainly agree with this caution:

On the other hand, I don't share the sense in the report that if we "fixed" teacher education, we would "fix" teachers. I learned when I was an Education graduate student that pre-service teacher education is amazingly hard to fix.

I learned this only in the last few years, by participating in statewide meetings aimed at improving the state of STEM education in Iowa. The number and diversity of stakeholders at the table is often overwhelming, almost ensuring that little or no practical change will occur. Even when you narrow the conversation to professors at all the universities who teach teachers, you run into gaps of the sort highlighted in the report quote above. Even when you narrow the conversation even further to professors at a single university, there can be big gaps between what education professors want to do, what STEM professors think is important, and what the state Department of Education requires.

Guzdial again:

Education professors seek to avoid being merely "vocational instructors," so they emphasize being "change agents" (a term from the report) rather than focusing on developing the tradecraft of teaching. Doesn't this sound a lot like the tensions in computing education?

Yes, indeed. In a field like CS, students need to learn both theory and application if they hope to find ways to use their knowledge upon graduation and be able to stay relevant as the discipline changes over the course of their careers. But there are many challenges to face in trying to meet this two-headed goal. Four (or five) years is a short time. The foundational knowledge that CS faculty has tends to stay the same as the applications of that knowledge change the world, which over time makes it harder for faculty to keep up and not settle down. Without periodic immersion in applied tasks, how can a prof know the patterns of software their students need to know tomorrow?

Education professors face many of the same challenges in their own context. My wife has long argued to me that both CS professors and education professors should be required regularly to work in the trenches, whether that is developing software or teaching a bunch of unruly 7th-grade science students, to keep them grounded in the world outside the university. When I think about the challenge facing graduates of our Colleges of Education, I often wish that more of their education would be devoted to studying their craft at the feet of masters, spending their four years in college moving from apprentice to journeyman and finally to master themselves. They should be learning the patterns of learning and teaching that will help them progress along that path. Building a few courses around something like the pedagogical patterns project would be a great start.

I think you could apply the last three sentences of that paragraph to CS education and improve our outcomes as well.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 27, 2010 8:09 PM

Preconception, Priming, and Learning

Earlier today, @tonybibbs tweeted about this Slate article, which describes how political marketers use the way our minds work to slant how we think about candidates. It reports on some studies from experimental psych that found evidence for a disconcerting assertion:

Reminding people of their partisan loyalties makes them more likely to buy smears about their political opponents.

Our willingness to believe in smears is intricately tied to our internal concepts of "us" and "them." It does not matter how the "us" is defined.... The moment you prompt people to see the world in terms of us and them, you instantly make their minds hospitable to slurs about people belonging to the other group.

I wondered out loud on Twitter whether knowing this about how our minds work can help us combat the effects of such manipulation and thus to act more rationally. Given that the cues in these studies were so brief as to bypass conscious thought, I am not especially hopeful.

As you might imagine, the findings of these studies concern me, not only with regard to the possibility of an informed and rational electorate but also for what it means about my own personal behavior in the political marketplace. Walking across campus this afternoon, though, it occurred to me that as much as I care for the political implications, these findings might have a more immediate effect on my life as an instructor.

Every learner comes to a classroom or to a textbook with preconceptions. As far as I know, the physics education community has done more than other science communities to study the preconceptions novice students bring to their classrooms. However, they have also learned that their intro courses tend to make things worse. We in computer science often make things worse, too, but we don't know much about how or why!

The findings about bias and priming in political communication make me wonder what the implications of bias and priming might be for learning, especially computer science. Are students' preconceptions about CS enough like the partisan loyalties people have in politics to make the findings relevant? I doubt students have the same "us versus them" mentality about technical academic subjects as about politics, but research has shown that many non-CS students think that programmers are a different sort of people than themselves. This might lead to something of a "family" effect.

If so, then the priming effect exposed in the studies might also apply in some way. My first thought was of inadvertent priming, in which we send signals unintentionally that reinforce biases against learning to program or that strengthen misconceptions about computing. I realized later that inadvertent priming could also have positive effects. That side of the continuum seems inherently less interesting to me, but perhaps it shouldn't. It is good to know what we are doing right as well as what we are doing wrong.

My second thought was of how we might intentionally prime the mind to improve learning. Intentional priming is the focus of the Slate article, due to the nefarious ways in which political operatives use it to create misinformation and influence voter behavior. We teachers are in the business of shaping minds, too, but in good ways, affecting both the content and the form of student thinking. Educators should use what scientists learn about how the human mind works to do their job more effectively. This may be an opportunity.

Cognitive psychology is the science that underlies learning and teaching. We educators should look for more ways to use it to do our jobs better.

~~~~

(I need to track down citations for some of the claims I reference above, such as studies of naive physics and studies of how non-computing and novice computing students view programmers as a different breed. If you have any at hand, I'd love to hear from you.)


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 26, 2010 7:12 PM

Competition, Imitation, and Nothingness on a Sunday Morning

Joseph Brodsky said:

Another poet who really changed not only my idea of poetry, but also my perception of the world -- which is what it's all about, ya? -- is Tsvetayeva. I personally feel closer to Tsvetayeva -- to her poetics, to her techniques, which I was never capable of. This is an extremely immodest thing to say, but, I always thought, "Can I do the Mandelstam thing?" I thought on several occasions that I succeeded at a kind of pastiche.

But Tsvetayeva. I don't think I ever managed to approximate her voice. She was the only poet -- and if you're a professional that's what's going on in your mind -- with whom I decided not to compete.

Tsvetayeva was one of Brodsky's closest friends in Russia. I should probably read some of her work, though I wonder how well the poems translate into English.

~~~~~~~~~~

absolutely nothing: next 22 miles

When I am out running 22 miles, I have lots of time to think. This morning, I spent a few minutes thinking about Brodsky's quote, in the world of programmers. Every so often I have encountered a master programmer whose work changes my perception of the world. I remember coming across several programs by Ward Cunningham, including his wiki, and being captivated by their combination of simplicity and depth. Years before that, the ParcPlace Smalltalk image held my attention for months as I learned what object-oriented programming really was. That collection of code seemed anonymous at first, but I later learned its history and became a fan of Ingalls, Maloney, and the team. I am sure this happens to other programmers, too.

Brodsky also talks about his sense of competition with other professional poets. From the article, it's clear that he means not a self-centered or destructive competition. He liked Tsvetayeva deeply, both professionally and personally. The competition he felt is more a call to greatness, an aspiration. He was following the thought, "That is beautiful" with "I can do that" -- or "Can I do that?"

I think programmers feel this all the time, whether they are pros or amateurs. Like artists, many programmers learn by imitating the code they see. These days, the open-source software world gives us so many options! See great code; imitate great code. Find a programmer whose work you admire consistently; imitate the techniques, the style, and, yes, the voice. The key in software, as in art, is finding the right examples to imitate.

Do programmers ever choose not to compete in Brodsky's sense? Maybe, maybe not. There are certainly people whose deep grasp of computer science ideas usually feels beyond my reach. Guy Steele comes to mind. But I think for programmers it's mostly a matter of time. We have to make trade-offs between learning one thing well or another.

~~~~~~~~~~

22 miles is a long run. I usually do only one to two runs that long during my training for a given marathon. Some days I start with the sense of foreboding implied by the image above, but more often the run is just there. Twenty-two miles. Run.

This time the morning was chill, 40 degrees with a bright sun. The temperature had fallen so quickly overnight that the previous day's rain had condensed in the leaves of every tree and bush, ready to fall like a new downpour at the slightest breeze.

This is my last long run before taking on Des Moines in three weeks. It felt neutral and good at the same time. It wasn't a great run, like my 20-miler two weeks ago, but it did what it needed to do: stress my legs and mind to run for about as long as the marathon will be. And I had plenty of time to think through the nothingness.

Now begins my taper, an annual ritual leading to a race. The 52 miles I logged this week will seep into my body for the next ten days or so as it acclimates to the stress. Now, I will pare back my mileage and devote a few more short and medium-sized runs to converting strength into speed.

~~~~~~~~~~

The quote that opens this entry comes from Joseph Brodsky, The Art of Poetry No. 28, an interview in The Paris Review by Sven Birkerts in December 1979. I like very much to hear writers talk about how they write, about other writers, and about the culture of writing. This long interview repaid me several times for the time I spent reading.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Running, Teaching and Learning

September 14, 2010 9:58 PM

Thinking About Things Your Users Don't Know

Recently, one of the software developers I follow on Twitter posted a link to 10 Things Non-Technical Users Don't Understand About Your Software. It documents the gap the author has noticed between himself as a software developer and the people who use the software he creates. A couple, such as copy and paste and data storage, are so basic that they might surprise new developers. Others, such as concurrency and the technical jargon of software, aren't all that surprising, but developers need to keep them in mind when building and documenting their systems. One, the need for back-ups, eludes even technical users. Unfortunately,

You can mention the need for back-ups in your documentation and/or in the software, but it is unlikely to make much difference. History shows that this is a lesson most people have to learn the hard way (techies included).

Um, yeah.

a polar bear leaps the chasm between ice floes

As I read this article, I began to think that it would be fun and enlightening to write a series of blog entries on the things that CS majors don't understand about our courses. I could start with ten as a target length, but I'm pretty sure that I can identify even more. As the author of the non-technical users paper points out, the value in such a list is most definitely not to demean the students or users. Rather, it is exceedingly useful for professors to remember that their students are not like them and to keep these differences in mind as they design their courses, create assignments, and talk with the students. Like almost everyone who interacts with people, we can do a better job if we understand our audience!

So, I'll be on the look-out for topics specific to CS students. If you have any suggestions, you know how to reach me.

After I finished reading the article, I looked back at the list and realized that many of these things are themselves things that CS majors don't understand about their courses. Consider especially these:

the jargon you use

It took me several years to understand just how often the jargon I used in class sounded like white noise to my students. I'm under no illusion that I now speak in the clearest vocabulary and that all my students understand what I'm saying as I say it. But I think about this often as I prepare and deliver my lectures, and I think I'm better than I used to be.

they should read the documentation

I used to be surprised when, on our student assessments, a student responded to the question "What could I have done better to improve my learning in this course?" with "Read the book". (Even worse, some students say "Buy the book"!) Now, I'm just saddened. I can say only so much in class. Our work in class can only introduce students to the concepts we are learning, not cover them in their entirety. Students simply must read the textbook. In upper-division courses, they may well need to read secondary sources and software documentation, too. But they don't always know that, and we need to help them know it as soon as possible.

Finally, my favorite:

the problem exists between keyboard and chair

Let me sample from the article and substitute students for users:

Unskilled students often don't realize how unskilled they are. Consequently they may blame your course (and lectures and projects and tests) for problems that are of their own making.

For many students, it's just a matter of learning that they need to take responsibility for their own learning. Our K-12 schools often do not prepare them very well for this part of the college experience. Sometimes, professors have to be sensitive in raising this topic with students who don't seem to be getting it on their own. A soft touch can do wonders with some students; with others, polite but direct statements are essential.

The author of this article closes his discussion of this topic with advice that applies quite well in the academic setting:

However, if several people have the same problem then you need to change your product to be a better fit for your users (changing your users to be a better fit to your software unfortunately not being an option for most of us).

You see, sometimes the professor's problem exists between his keyboard and his chair, too!


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 31, 2010 7:18 PM

Notes from a Master Designer

Someone recently tweeted about an interview with Fred Brooks in Wired magazine. Brooks is one of the giants of our field, so I went straight to the page. I knew that I wanted to write something about the interview as soon as I saw this exchange, which followed up questions about how a 1940s North Carolina schoolboy ended up working with computers:

Wired: When you finally got your hands on a computer in the 1950s, what did you do with it?

Brooks: In our first year of graduate school, a friend and I wrote a program to compose tunes. The only large sample of tunes we had access to was hymns, so we started generating common-meter hymns. They were good enough that we could have palmed them off to any choir.

It never surprises me when I learn that programmers and computer scientists are first drawn to software by a desire to implement creative and intelligent tasks. Brooks was first drawn to computers by a desire to automate data retrieval, which at the time must have seemed almost as fantastic as composing music. In a Communications of the ACM interview printed sometime last year, Ed Feigenbaum called AI the "manifest destiny" of computer science. I often think he is right. (I hope to write about that interview soon, too.)

But that's not the only great passage in Brooks's short interview. Consider:

Wired: You say that the Job Control Language you developed for the IBM 360 OS was "the worst computer programming language ever devised by anybody, anywhere." Have you always been so frank with yourself?

Brooks: You can learn more from failure than success. In failure you're forced to find out what part did not work. But in success you can believe everything you did was great, when in fact some parts may not have worked at all. Failure forces you to face reality.

As an undergrad, I took a two-course sequence in assembly language programming and JCL on an old IBM 370 system. I don't know how much the JCL on that machine had advanced beyond Brooks's worst computer programming language ever devised, if it had at all. But I do know that the JCL course gave me a negative-split learning experience unlike any I had ever had before or have had since. As difficult as that was, I will be forever grateful for Dr. William Brown, a veteran of the IBM 360/370 world, and what he taught me that year.

There are at least two more quotables from Brooks that are worth hanging on my door some day:

Great design does not come from great processes; it comes from great designers.

Hey to Steve Jobs.

The insight most likely to improve my own work came next:

The critical thing about the design process is to identify your scarcest resource.

This one line will keep me on my toes for many projects to come.

If great design comes from great designers, then how can the rest of us work toward the goal of becoming a great designer, or at least a better one?

Design, design, and design; and seek knowledgeable criticism.

Practice, practice, practice. But that probably won't be enough. Seek out criticism from thoughtful programmers, designers, and users. Listen to what they have to say, and use it to improve your practice.

A good start might be to read this interview and Brooks's books.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

August 26, 2010 9:39 PM

Thinking about Software Development and My Compilers Course

Our semester is underway. I've had the pleasure of meeting my compilers course twice and am looking forward to diving into some code next week. As I read these days, I am keenly watching for things I can bring into our project, both the content of defining and interpreting language and the process of student teams writing a compiler. Of course, I end up imposing this viewpoint on whatever I read! Lately, I've been seeing a lot that makes me think about the development process for the semester.

Greg Wilson recently posted three rules for supervising student programming projects. I think these rules are just as useful for the students as they work on their projects. In a big project course, students need to think realistically about time, technology, and team interaction in ways they may not have before. I especially like the rule, "Steady beats smart every time". It gives students hope when things get tough, even if they are smart. More importantly, it encourages them to start and to keep moving. That's the best way to make progress, no matter how smart you are. (I gave similar advice during my last offering of compilers.) My most successful project teams in both the compilers course and in our Intelligent Systems course were the ones who humbly kept working, one shovel of dirt at a time.

I'd love to help my compiler students develop in an agile way, to the extent they are comfortable. Of course, we don't have time for a full agile development course while learning the intricacies of language translation. In most of our project courses, we teach some project management alongside the course content. This means devoting a relatively small amount of time to team and management functions. So I will have to stick to the essential core of agile: short iterations plus continuous feedback. As Hugh Beyer writes:

Everything else is there to make that core work better, faster, or in a more organized way. Throw away everything else if you must but don't trade off this core.

For the last couple of weeks, I have been thinking about ways to decompose the traditional stages of the compiler project (scanning, parsing, semantic analysis, and code generation) into shorter iterations. We can certainly implement the parser in two steps, first writing code to recognize legal programs and then adding code to produce abstract syntax. The students in my most recent offering of the compilers course also suggested splitting the code generation phase of the project into two parts, one for implementing the run-time system and one for producing target code. I like this idea, but we will have to come up with ways to test the separate pieces and get feedback from the earlier piece of our code.
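The recognizer-then-AST split for the parser can be sketched in miniature. This is my own illustration, not the course's actual grammar or language: a tiny recursive-descent parser for arithmetic expressions in which each routine, originally written just to accept or reject input (iteration one), now also returns a node of abstract syntax as nested tuples (iteration two).

```python
import re

def tokenize(src):
    """Split a string like "1 + 2 * 3" into number and operator tokens."""
    return re.findall(r"\d+|[-+*/()]", src)

class Parser:
    # Grammar:  expr   -> term (('+'|'-') term)*
    #           term   -> factor (('*'|'/') factor)*
    #           factor -> NUMBER | '(' expr ')'
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, expected=None):
        tok = self.peek()
        if tok is None or (expected is not None and tok != expected):
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        self.pos += 1
        return tok

    def expr(self):
        node = self.term()
        while self.peek() in ("+", "-"):
            op = self.eat()
            node = (op, node, self.term())   # iteration 2: build an AST node
        return node

    def term(self):
        node = self.factor()
        while self.peek() in ("*", "/"):
            op = self.eat()
            node = (op, node, self.factor())
        return node

    def factor(self):
        if self.peek() == "(":
            self.eat("(")
            node = self.expr()
            self.eat(")")
            return node
        return int(self.eat())               # NUMBER leaf

def parse(src):
    """Parse a source string; raise SyntaxError on illegal input."""
    p = Parser(tokenize(src))
    tree = p.expr()
    if p.peek() is not None:
        raise SyntaxError(f"trailing input: {p.peek()!r}")
    return tree
```

In iteration one, the `expr`, `term`, and `factor` routines can simply consume tokens and raise `SyntaxError` on illegal programs; adding the return values is a small, testable second step, which is exactly what makes this a natural pair of short iterations.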

Another way we can increase feedback is to do more in-class code reviews of the students' compilers as they write them. A student from the same previous course offering wrote to me only yesterday, in response to my article on learning from projects in industry, suggesting that reviews of student code would have enhanced his project courses. Too often professors show students only their own code, which has been designed and implemented to be clean and easy to understand. A lot of the most important learning happens in working at the rough edges, encountering problems that make things messy and solving them. Other students' code has to confront and solve the same problems, and reading that code and sharing experiences is a great way to learn.

I'm a big fan of this idea, of course, and have taught several of my courses using a studio style in the past. Now I just need to find a way to bring more of that style into my compilers course.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 24, 2010 4:30 PM

Dreaming, Doing, Perl, and Language Translation

Today, I quoted Larry Wall's 2000 Atlanta Linux Showcase Talk in the first day of my compilers course. In that talk, he gives a great example of using a decompiler to port code -- in this case, from Perl 5 to Perl 6. While re-reading the talk, I remembered something that struck me as wrong when I read it the first time:

["If you can dream it, you can do it"--Walt Disney]

"If you can dream it, you can do it"--Walt Disney. Now this is actually false (massive laughter). I think Walt was confused between necessary and sufficient conditions. If you *don't* dream it, you can't do it; that is certainly accurate.

I don't think so. I think this is false, too. (Laugh now.)

It is possible to do things you don't dream of doing first. You certainly have to be open to doing things. Sometimes we dream something, set out to do it, and end up doing something else. The history of science and engineering is full of accidents and incidental results.

I once was tempted to say, "If you don't start it, you can't do it; that is certainly accurate." But I'm not sure that's true either, because of the first "it". These days, I'm more inclined to say that if you don't start doing something, you probably won't do anything.

Back to Day 1 of the compilers: I do love this course. The Perl quote in my lecture notes is but one element in a campaign to convince my students that this isn't just a compilers course. The value in the course material and in the project itself goes far beyond the creation of an old-style source language-to-machine language translator. Decompilers, refactoring browsers, cross-compilers, preprocessors, interpreters, and translators for all sorts of domain-specific languages -- a compilers course will help you learn about all of these tools, both how they work and how to build them. Besides, there aren't many better ways to consolidate your understanding of the breadth of computer science than to build a compiler.

The official title of my department's course is "Translation of Programming Languages". Back in 1994, before the rebirth of mainstream language experimentation and the growth of interest in scripting languages and domain-specific languages, this seemed like a daring step. These days, the title seems much more fitting than "Compiler Construction". Perhaps my friend and former colleague Mahmoud Pegah and I had a rare moment of foresight. More likely, Mahmoud had the insight, and I was simply wise enough to follow.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

August 13, 2010 8:42 PM

Learning from Projects in Industry Training

Former student and current ThoughtWorker Chris Turner sent me an article on ThoughtWorks University's new project-based training course. I mentioned Chris once before, soon after he joined ThoughtWorks. (I also mentioned his very cool research on "zoomable" user interfaces, still one of my all-time favorite undergrad projects.)

Chris took one of my early offerings of agile software development, one that tried to mix traditional in-class activities with a "studio approach" to a large team project. My most recent offering of the course turned the knobs a bit higher, with two weeks of lecture and pair learning exercises followed by two weeks of intensive project. I really like the results of the new course but wonder how I might be able to do the same kind of thing during the regular semester, when students take five courses and typically spend only three hours a week in class over fifteen weeks.

The ThoughtWorks U. folks do not work under such constraints and have even more focused time available than my one-month course. They bring students in for six weeks of full-time work. Not surprisingly they came to question the effectiveness of their old approach: five weeks of lecture and learning activities followed by a one-week simulation of a project. Most of the learning, it seemed, happened in context during the week-long project. Maybe they should expand the project? But... there is so much content to teach!

Eventually they asked themselves the $64,000 Question:

"What if we don't teach this at all? What's the worst that can happen?"

I love this question. When trying to balance practice in context with yet another lecture, university professors should ask this question about each element of the courses they teach. Often the answer is that students will have to learn the concept from their experience on real projects. Maybe students need more experience on real projects, not more lecture and more homework problems from the back of the textbook chapter!

The folks at TWU redesigned their training program for developers to consist of two weeks of training and four weeks of project work. And they -- and their students -- seem pleased with the results.

... information in context trumped instruction out of context in a huge way. The project was an environment for students to fail in safety. Failure created the need for people to learn and a catalyst for us to coach and teach. A real project environment also allowed students to learn to learn.

This echoes my own experience and is one of the big reasons I think so much about project-based courses. Students still need to learn ideas and concepts, and some will need more direct individual assistance to pick them up. The ThoughtWorks folks addressed that need upfront:

We also created several pieces of elearning to help students gain some basic skills when they needed them. Coupled with a social learning platform and a 6:1 student-coach ratio, we were looking at a program that focussed heavily on individualisation as against an experience that was one-size-fits-all-but-fits-nobody. Even with the elearning, we ensured that we were pragmatic in partnering with external content providers whose content met our quality standards.

This is a crucial step, and one that I would like to improve before I teach my agile course again. I found lots of links to on-line resources students could use to learn about agile and XP, but I need to create better materials in some areas and create materials to fill gaps in the accessible web literature. If I want to emphasize the project in my compiler course even more, I will need to create a lot of new materials. What I'd really like to do is create active e-learning resources, rather than text to read. The volume, variety, and quality of supporting materials is even more important if we want to make projects the central activity in courses for beginners.

By the way, I also love the phrase "one-size-fits-all-but-fits-nobody".

When faculty who teach more traditional courses in more traditional curricula hear stories such as this one from TWU, they always ask me the same question: How much does the success of such an industry training program depend on "basic knowledge" students learned in traditional courses? I wonder the same thing. Could we start CS1 or CS2 with two weeks of regular classes followed by four weeks of project? What would work and what wouldn't? Could we address the weaknesses to make the idea work? If we could, student motivation might reach a level higher than we see now. Even better, student learning might be improved as they encounter ideas as they need them to solve problems that matter. (For an opinion to the contrary, see Moti Ben-Ari's comments as reported by Mark Guzdial.)

School starts in a week, so my thoughts have turned to my compiler course. This course is already based on one of the classic project experiences that CS students can have. There is a tendency to think all is well with the basic structure of the course and that we should leave it alone. That's not really my style. Having taught compilers many times, I know my course's strengths and weaknesses and know that it can be improved. The extent to which I change it is always an open question.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

August 02, 2010 1:54 PM

Computing and Education in My News

My newsreader and inbox are full of recent articles about computing and education in the news. First, there is a New York Times Technology section piece on Scott McNealy's open-source textbook project, Curriki. When I first read this, I thought for sure I would blog on the idea of free and open-source textbooks today. The more I thought about it, and especially the more I tried to write about it, the less I found I have to say right now. Mark Guzdial has already responded with a few concerns he has about open-source textbooks. Guzdial conflates "open source" with "free", as does the Times piece, though McNealy's project seems to be mostly about offering low-cost or free alternatives to increasingly expensive school books. Most of Guzdial's concerns echo the red flags people have raised about free and open-source software in the past, and we have seen the effect FOSS has had in the world.

Maybe I'll have something cogent to say some other day, but for now, all I can think is, "Is that a MacBook Pro in the photo of McNealy and his son?" If so, even a well-placed pencil holder can't hide the truth!

Then there is a blog entry at Education Week on the Computer Science Education Act, a bill introduced in the U.S. House of Representatives last week aimed at improving the state of K-12 CS education. Again, any initial excitement to write at length on this topic faded as I thought more about it. This sort of bill is introduced all the time in Congress with little or no future, so until I see this one receive serious attention from House leaders I'll think of it as mostly good PR for computer science. I do not generally think that legislation of this kind has a huge effect on practice in the schools, which are much too complicated to be pushed off course by a few exploratory grants or a new commission. That said, it's nice that a few higher-ups in education might think deeply about the role CS might and could play in 21st-century K-12 education. This ain't 1910, folks.

Finally, here's one that I can blog about with excitement and a little pride: One of my students, Nick Cash, has been named one of five finalists in Entrepreneur Magazine's Entrepreneur of 2010 contest. Nick is one of those bright guys for whom our education system is a poor fit, because he is thinking bigger thoughts than "when is the next problem set due?" He has been keeping me apprised of his start-up every so often, but things change so fast that it is hard for me to keep up.

One of the things that makes me proud is the company he is keeping in that final five. Maryland and Michigan are big-time universities with big-time business schools. Though you may not have heard of Babson College, it has long had one of the top-ranking undergraduate entrepreneurship programs in the country. (I know that in part because I double-majored in accounting at Ball State University, which also has a top-ranked entrepreneurship center for undergrads.) UNI has been doing more to support student entrepreneurship over the last few years, including an incubator for launching start-ups. Still, Nick has made it to the finals against students who come from better-funded and better-known programs. That says even more about his accomplishment.

Nick's company, Book Hatchery, is certainly highly relevant in today's digital publishing market. I'll be wishing him well in the coming years and helping in any way he asks. Check out the link above and, if you are so inclined, cast a vote for his start-up in the contest!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

July 28, 2010 2:26 PM

Sending Bad Signals about Recursion

A couple of weeks ago there was a sustained thread on the SIGCSE mailing list on the topic of teaching recursion. Many people expressed strongly held opinions, though most of those were not applicable outside the poster's own context. Only a few of the views expressed were supported by educational research.

My biggest take-away from the discussion was this: I can understand why CS students at so many schools have a bad impression of recursion. Like so many students I encounter, the faculty who responded expressed admiration for the "beauty and elegance" of recursion but seemed to misunderstand at a fundamental level how to use it as a practical tool.

The discussion took a brief side excursion into the merits of Towers of Hanoi as a useful example for teaching recursion to students in the first year. It is simple and easy for students to understand, said the proponents, so that makes it useful. In my early years of teaching CS1/CS2, I used Towers as an example, but long ago I came to believe that real problems are more compelling and provide richer context for learning. (My good friend Owen Astrachan has been sounding the clarion call on this topic for a few years now, including a direct dig on the Towers of Hanoi!)

My concern with the Towers is more specific when we talk about recursion. One poster remarked that this problem helped students to see that recursion can be slow:

Recursion *is* slow if you're solving a problem that is exponentially hard like Hanoi. You can't solve it faster than the recursive solution, so I think Hanoi is a perfectly fine example of recursion.

M. C. Escher's 'Drawing Hands'

This, I think, is one of the attitudes that gives our students an unduly bad impression of recursion, because it confounds problem and solution. Most students leave their first-year courses thinking that recursion is slow and computationally expensive. This is in part an effect of the kinds of problems we solve with recursion there. The first examples we show students of loops tend not to solve exponentially hard problems. This leads students to infer that loops are fast and recursion is slow, when the computational complexity was a feature of the problems, not the solutions. A loop-based solution to Towers would be slow and use a lot of space, too! We can always tell our students about the distinction, but they see so few examples of recursion that they are surely left with a misimpression, through no fault of their own.
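The point is easy to see in code. Here is a minimal sketch of the classic Hanoi recursion in Ruby (my sketch, using Ruby simply because it is the language from my agile course). The solution is a few lines, but any correct solution -- recursive or not -- must make 2^n - 1 moves:

```ruby
# Towers of Hanoi: move n disks from peg 'from' to peg 'to' via 'spare'.
# The recursion is short and clear, but ANY correct solution makes
# 2**n - 1 moves -- the exponential cost belongs to the problem,
# not to the recursive technique.
def hanoi(n, from, to, spare, moves = [])
  return moves if n.zero?
  hanoi(n - 1, from, spare, to, moves)   # clear the top n-1 disks
  moves << [from, to]                    # move the largest disk
  hanoi(n - 1, spare, to, from, moves)   # restack the n-1 disks
end

moves = hanoi(4, :a, :c, :b)
puts moves.length                        # 15 moves == 2**4 - 1
```

A loop-based solution produces the same fifteen moves; counting them is a quick way to show students where the cost really lives.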

Another poster commented that he had once been a consultant on a project at a nuclear reactor. One of the programmers proudly showed one of their programs that used recursion to solve one of their problems. By using recursion, they had been able to construct a straightforward inductive proof of the code's correctness. The poster chided the programmer, because the code was able to overflow the run-time stack and fail during execution. He encouraged them to re-write the code using a subset of looping constructs that enables proofs over the limited set of programs it generates. Recursion cannot be used in real-time systems, he asserted, for just this reason.

Now, I don't want run-time errors in the code that runs our nuclear reactors or any other real-time system, for that matter, but that conclusion is a long jump from the data. I wrote to this faculty member off-list and asked whether the programming language in question forbids, allows, or requires the compiler to optimize recursive code or, more specifically, tail calls. With tail call optimization, a compiler can convert a large class of recursive functions to a non-recursive run-time implementation. This means that the programmer could have both a convincing inductive proof of the code's correctness and a guarantee that the run-time stack will never grow beyond the initial stack frame.

The answer was, yes, this is allowed, and the standard compilers provide this as an option. But he wasn't interested in discussing the idea further. Recursion is not suitable for real-time systems, and that's that.

It's hard to imagine students developing a deep appreciation for recursion when their teachers believe that recursion is inappropriate independent of any evidence otherwise. Recursion has strengths and weaknesses, but the only strengths most students seem to learn about are its beauty and its elegance. Those are code words in many students' minds for "impractical" and, when combined with a teacher's general attitude toward the technique, surely limit our students' ability to get recursion.

I'm not claiming that it's easy for students to learn recursion, especially in the first year, when we tend to work with data that make it hard to see when recursion really helps. But it's certainly possible to help students move from naive recursive solutions to uses of an accumulator variable that enable tail-recursive implementations. Whether that is a worthwhile endeavor in the first year, given everything else we want to accomplish there, is the real question. It is also the question that underlay the SIGCSE thread. But we need to make sure that our students understand recursion and know how to use it effectively in code before they graduate. It's too powerful a tool to be missing from their skill set when they enter the workforce.
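To make the accumulator idea concrete, here is a small Ruby sketch of the transformation. One hedge: whether the tail call is actually eliminated depends on the language implementation, so this shows the shape of the rewrite, not a guarantee of constant stack space.

```ruby
# Naive recursion: the multiplication happens AFTER the recursive call
# returns, so each call must keep its stack frame alive.
def factorial(n)
  return 1 if n.zero?
  n * factorial(n - 1)
end

# Accumulator version: all pending work is carried in 'acc', so the
# recursive call is the last thing the method does (a tail call).
# A compiler that optimizes tail calls can run this in constant stack space.
def factorial_acc(n, acc = 1)
  return acc if n.zero?
  factorial_acc(n - 1, n * acc)
end
```

The two methods compute the same values; the difference is entirely in what work remains pending when the recursive call is made.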

As I noted at the start of this entry, though, I left the discussion not very hopeful. The general attitude of many instructors may well get in the way of achieving that goal. When confronted with evidence that one of their beliefs is a misconception, too many of them shrugged their shoulders or actively disputed the evidence. The facts interfered with what they already know to be true!

There is hope, though. One of Ursula Wolz's messages was my favorite part of the conversation. She described a study she conducted in grad school teaching recursion to middle-schoolers using simple turtle graphics. From the results of that study and her anecdotal experiences teaching recursion to undergrads, she concluded:

Recursion is not HARD. Recursion is accessible when good models of abstraction are present, students are engaged and the teacher has a broad rather than narrow agenda.

Two important ideas stand out of this quote for me. First, students need to have access to good models of abstraction. I think this can be aided by using problems that are rich enough to support abstractions our students can comprehend. Second, the teacher must have a broad agenda, not a narrow one. To me, this agenda includes not only the educational goals for the lesson but also the general message that we want to send our students. Even young learners are pretty good at sensing what we think about the material we are teaching. If we convey to them that recursion is beautiful, elegant, hard, and not useful, then that's what they will learn.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

July 26, 2010 3:28 PM

Digital Cameras, Electric Guitars, and Programming

Eric Clapton at the Crossroads Festival 2010

I often write here about programming for everyone, or at least for certain users. Won't that put professional programmers out of business? Here is a great answer to a related question from a screenwriter using an analogy to music:

Well, if I gave you an electric guitar, would you instantly become Eric Clapton?

There is always room for specialists and artists, people who take literacy to a higher level. We all can write a little bit, and pretty soon everyone will be blogging, tweeting, or Facebooking, but we still need Shakespeare, James Joyce, and Kurt Vonnegut. There is more to writing than letters and sentences. There is more to programming than tokens and procedures. A person with ideas can create things we want to read and use.

Sometimes the idea is as simple as hooking up two existing ideas. I may be late to the party, but @bc_l is simply too cool:

I'm GNU bc on twitter! DM me your math and I'll tell you the answer. (by @hmason)

@hmason is awesome.

On a more practical note, I use dc as the target language for a simple demo compiler in my compilers course, following the lead of Fischer, Cytron, and LeBlanc in Crafting a Compiler. I'm considering using the new edition of this text in my course this fall, in part because of its support for virtual machines as targets and especially the JVM. I like where my course has been the last couple of offerings, but this seems like an overdue change in my course's content. I may as well start moving the course. Eventually, targeting multi-core architectures will be essential.
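For readers who have not seen dc used this way: because dc reads reverse Polish notation, "compiling" to it is just a post-order walk of the expression tree. A toy sketch in Ruby (the tree representation here is my own, not the textbook's):

```ruby
# A tiny "compiler" from an expression tree to dc, the Unix RPN calculator.
# An expression is either a number or [op, left, right]; a post-order walk
# emits both operands before their operator, which is exactly dc's input.
def to_dc(expr)
  return expr.to_s unless expr.is_a?(Array)
  op, left, right = expr
  "#{to_dc(left)} #{to_dc(right)} #{op}"
end

# (2 + 3) * 4  ==>  "2 3 + 4 *"  -- append " p" and pipe to dc to print 20
tree = [:*, [:+, 2, 3], 4]
puts to_dc(tree)
```

That one method is most of the back end for a demo compiler, which is what makes dc such a friendly first target.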

If I want to help students who dream of being Eric Clapton with a keyboard, I gotta keep moving.

~~~~

(The image above is courtesy of PedalFreak at flickr, with an Attribution-NoDerivs 2.0 Generic license.)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

June 30, 2010 5:17 PM

Changing Default Actions

Learning to do test-driven design requires a big shift in mindset for many developers. I was impressed with how well the students in my recent agile development course took to the idea of writing tests first. Even the most skeptical students seemed willing to go along with the group in using tests to specify the code they needed to write. Other agile practices, such as pair programming and communal development, helped to support all of the students, willing or skeptical, to move in the right direction.

My friend Steve Berczuk suggests another way to support the change in habit:

Rather than frame the testing challenge with the default being the old way of not testing:

Write a test when it makes sense.

Change your perspective to the default being to test:

Write a test unless you can explain why you did not.

I like how Steve shifts the focus onto default actions. The actions we take by default arise when our mental habits come into contact with the world. Some of my students prefer to talk about their "instincts", but the principle is the same: When things get hard -- or easy -- what will you do?

We can change our habits. We can develop our instincts. Yes, it is hard to do. However we make the change, we have to change the individual actions we take at each moment of choice.

The way to turn running into a habit is to run. When I have a run planned for a morning but wake up feeling rotten, my default has to be to run. I need to have a really good reason not to run, a reason I am willing to tell my family and running friends without shame. This is another example of using positive peer pressure to help myself act in a desired way.

There are good reasons not to run some days. However, when I am creating a new habit, I have to place the burden of proof on the old habit: Why not?

When I follow this discipline, there is a risk of overusing the technique I am learning. If my default answer is to just keep running, I will run on some mornings when I really should take a break. I may find out during the run, in which case I need to listen to my body immediately and adapt. Or I may find out later, when I see that my times from the workout were substandard or when I am sore or fatigued beyond reason later in the day. Whenever I recognize the problem, I can examine the outcome and try to learn the reason why I should not have run. This will allow me to make a sound exception to my default in the future.

The same risk comes when we try this technique while learning test-driven design or any new programming practice. I may write a test I don't have to write. I may write code that is too simple and find that I need more after all. This risk is an integral part of learning. I must learn when not to do something just as much as I need to learn when to do it. The risk of running when I ought not run carries a greater potential cost than writing a test when I need not write, because physical injury may result. The only real cost of writing an unnecessary test or taking too small a step forward in my code is the time lost.

As a runner, the way I minimize the risk of injury or other significant cost is to listen to my body. As a programmer, I also have to listen to my code, and keep it clean through refactoring. Done steadily and faithfully, the side effect is a new habit, better instincts.

The key to Steve's suggestion is that changing practice isn't just about habit and instinct, as important as they are. It's also about attitude. There are times when my surface attitude is compliant with picking up a new practice, but my ingrained attitude gets in the way. My conscious mind may say, "I want to learn how to do TDD", while subconsciously I react as if "I don't need to write a test here". Taking the initiative to change my default action consciously helps me to bridge the gap. I think that's why I find Steve's idea so helpful.


Posted by Eugene Wallingford | Permalink | Categories: Running, Software Development, Teaching and Learning

June 12, 2010 11:09 AM

Readings from the Agile Development Course

Last time I mentioned that students in my agile software development course found several of the reading assignments to be valuable. For what it's worth, here is at least a relatively complete list of the readings I assigned in the course of four weeks, in no particular order.

When I still thought we might use a distributed version control system, I asked students to read Hg Init and then a few items on git, including Git for the Lazy, the official git tutorial man page, and Everyday Git. Then, when I decided to show discretion in at least one part of the project and use centralized version control, I asked the class to read several items on Subversion.

I'm under no illusion that every student read every page of every reading. This is an ambitious list for agile beginners to tackle in four weeks while also working on a big project. Still, it's clear from discussion that many students read a lot of this material, had questions, and even had comments for me.

As always, I'd love to hear from you any comments you have about this list, or any additions you can suggest for future offerings.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

June 11, 2010 7:31 PM

Some Feedback on Agile Development Course

Today I finished reviewing materials from my agile software development course so that I could assign final grades. The last thing that students wrote for me was a short evaluation of the course and some of the things we did. Their comments are definitely valuable to me as I think about offering the course again.

What worked best this semester? I was surprised that the class was nearly unanimous in saying pair programming. Their reasons were varied but consistent. By pairing, they faced less down time because one of the two developers usually had an idea for how to proceed. Talking about the roadblock helped them get around it. Several students commented that programming was more fun when working with someone. One student said that he "learned how to explain things better". I imagine that part of this was that, as the team came to build up trust and respect for each other, he wanted to be more patient and helpful when responding to questions. Another part was probably simply practice; pair programming increases communication bandwidth.

The students who did not answer "pair programming" said "learning Ruby". I was hopeful that students would enjoy Ruby and pick it up quickly. But my hope was not grounded in much empirical evidence, so I was a little worried. The language was no worse than a break-even proposition for some students, and it was a big win for most. And as a result I had more fun!

I asked students, Who were the best partners you worked with? The best part of these answers is that they were not dominated by a few names, as I thought they might be. There was a pretty good spread of names listed, with only a couple of students mentioned repeatedly. I take this to mean that many students contributed relatively well to the experience of their teammates, which is the sort of horizontal distribution of knowledge that is desired for XP teams. I did note an interesting distinction made by one student between "the partner I learned the most from" and "the partner with whom I felt the most productive".

Which of the assigned readings was most valuable? I include a list of most of the readings I assigned over the course of the four weeks. Of those, two were identified most frequently by students as being valuable: Bill Wake's The Test-First Stoplight, which was assigned early in the semester to give students a sense of the rhythm of test-driven design before diving in as a team, and What is Software Design? by Jack Reeves, which was assigned late in the semester after the team had worked collaboratively for a couple of weeks on a system that had no upfront design and very little documentation. In class discussion, a couple of students disagreed with a few of Reeves's points, but even in those cases the paper engaged and challenged the reader. That's about all I can ask from a paper.
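For those who have not read Wake's piece, the test-first rhythm it teaches looks roughly like this condensed Ruby sketch (my own illustration, not Wake's code): write the test first, watch it fail, then write the simplest code that passes.

```ruby
# The test-first rhythm, condensed. This Stoplight is a sketch in the
# spirit of Wake's stoplight exercise, not his actual code.

# Step 1: the test, written before Stoplight existed. Running it then
# would fail with a NameError -- the "red" step.
def test_stoplight
  light = Stoplight.new               # starts green
  raise unless light.color == :green
  light.advance
  raise unless light.color == :yellow
  light.advance
  raise unless light.color == :red
  light.advance
  raise unless light.color == :green  # and cycles back around
end

# Step 2: the simplest code that makes the test pass -- the "green" step.
class Stoplight
  CYCLE = [:green, :yellow, :red]

  def initialize
    @index = 0
  end

  def color
    CYCLE[@index]
  end

  def advance
    @index = (@index + 1) % CYCLE.size
  end
end

test_stoplight
puts "tests pass"
```

The third step, refactoring, has nothing to clean up in code this small, which is part of what makes the exercise a gentle introduction to the cycle.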

When asked, What did you learn best in the course?, the answers fell into roughly two groups: TDD and the value of tests and Ruby and OOP. The flip side is that many students also said, "I wish I could have learned more!" Again, that's about all I can ask from any course, that it leave students both happy to have learned something and eager to learn more.

I don't mind that Ruby and object-oriented programming were the prized learning outcomes of a course on agile software development. A couple of times during the course I noted that a good project course is by its nature about everything. We can design courses in neat little bundles, but the work of making anything -- and perhaps especially software -- is a tangled weave of knowledge and discipline and habit that comes together in the act of creating something real. If Ruby and OOP were what some students most needed to learn in May, I'm glad that's what they learned. I will trust that the agile ideas will take root when they are most needed.

How could I improve the course? This is always a tough question to ask students in a non-anonymous setting, because no matter how much they might trust me I know that some will be reluctant to say what they really think. Still, I received some answers that will help me design the next iteration of this course. Several students expressed a desire for more time -- to write tests, to pair program, to work on the system outside of class, to practice refactoring, .... It seems that desire for more time is a constant in human experience. (At least they weren't so tired of the course that they all said, "Good riddance!") Clearly, there is a trade-off between a four-week semester focused on one course and a fifteen-week semester given over to four or five courses. The loss of absolute number of hours available is a cost of the former. I'll have to think about whether that cost is outweighed by its benefits.

One of the more mature and experienced team members offered an interesting comment: Having an instructor who is a programmer act as client and explain the project requirements to the developers affected a lot of things. He wondered what it would be like to have a non-technical client with the CS instructor acting solely as agile coach. An insightful observation and question!

All in all, this was valuable feedback. The students came through again, as they did throughout the course.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

June 10, 2010 4:42 PM

"Damn It, Jim. I'm a Teacher"

If you are a university professor, read this short piece by John Cook and try not to think about teaching and educational research! It doesn't take long for a CS professor interested in thinking about his teaching methods scientifically to bump into the seeming double standard that Mosteller identifies between practicing doctors and medical researchers. Wanna teach object-oriented programming beginning on Day 1 of CS 1? No problem. Wanna collect some data and compare the results to results from a control group? Hold on there... You'll need to fill out a few forms, get permission from a review board, and make sure your research methodology satisfies the scrutiny of colleagues from across campus.

I understand Cook's reasoning that review boards add necessary value to the enterprise of medical research, and I suppose they add some of the same value to educational research. But many of the things we try in our classrooms are more experimental than routine. Even with grounding in educational psychology, a lot of our teaching is done under conditions we don't understand completely and with techniques that are outside the range of past education research.

Formally vetting education research does achieve one worthwhile goal: It reduces the chances that a poorly conceived experiment can be used to draw broad conclusions beyond its reach. On the other end of the spectrum, though, it likely reduces the number of formal experiments that are done, which contributes to a culture of sharing anecdotal results and "Look what I did..." experience reports. When that happens, you have to study the community closely to learn whose results to pay attention to and learn from. Vetting informal experiments becomes an informal process.

After going through a couple of rounds of trying to get teaching experiments approved early in my career, I have even greater admiration and appreciation for the work people like Mark Guzdial do. We need educators to do formal experiments and to report the results. We also need to learn as much as we can about psychology (of all sorts, including behavioral), biology, and even sociology if we hope to teach more effectively, more reliably.

I promise I won't try to learn anything when I teach compilers this fall. If I do, though, I may tell people about it.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 04, 2010 4:38 PM

The End Comes Quickly

Teaching a 3-credit semester course in one month feels like running on a treadmill that speeds up every morning but never stops. Then the course is over, almost without warning. It reminds me a little bit of how the Christmas season felt to me when I was a child. (I should be honest with myself. That's still how the Christmas season feels to me, because I am still a kid.)

I've blogged about the course only twice since we dove head-long into software development, on TDD and incremental design and on the rare pleasure of pair programming with students. Today, we wrapped up the second of two week-long iterations. Here are the numbers, after we originally estimated 60 units of work for the ideal week:

  • Iteration 1
    • Budgeted work for week: 30 units.
    • Actual work for week: 29.5 units.
    • Story points delivered: 24 units.

  • Iteration 2 (holiday-shortened)
    • Budgeted work: 16 units.
    • Actual work: 18 units.
    • Story points delivered: 11 units.

I was reasonably happy with the amount of software the team was able to deliver, given all the factors at play, among them never having done XP beyond single practices in small exercises, never having worked together before, learning a new programming language, work schedules that hampered the developers' ability to pair program outside our scheduled class time, and working a domain far beyond most of their experiences.

And that doesn't even mention what was perhaps the team's biggest obstacle: me. I had never coached an undergrad team in such an intense, focused setting before. In three weeks, I learned a lot about how to write better stories for a student team and how to coach students better when they ran into problems in the trenches. I hope that I would do a much better job as coach if we were to start working on a second release on Monday. As my good friend Joe Bergin told me in e-mail today, "Just being agile."

We did short retrospectives at the end of both iterations, with the second melting into a retrospective on the project as a whole. In general, the students seemed satisfied with the progress they made in each iteration, even when they still felt uncomfortable with some of the XP practices (or remained downright skeptical). Most thought that the foundation practices -- story selection, pair programming, test-first programming, and continuous integration -- worked well in both iterations.

When asked, "What could we improve?", many students gave the same answers, because they recognized we were all still learning. After the first iteration, several team members were still uncomfortable with no Big Design Up Front (BDUF), and a couple thought that we might have avoided needing to devote a day to refactoring if only we had done more design at the beginning. I was skeptical, though, and said so. If we had tried to design the system at the beginning of the project, knowing what we knew then, would we have had as good and as complete a system as we had at the end of the iteration? No way. We learned a lot building our first week's system, and it prepared us for the design we did while refactoring. I could be wrong, but I don't think so.

Most of the developers agreed that the team could be more productive if they were not required to do all of their programming in pairs. With a little guidance from me as the coach, the team decided to loosen the restriction on pairing as follows:

  • If a pair completes a story together, one member of the pair was permitted to work alone to refactor the code they worked on. Honor system: the solo programmer would not create new code, only refactor, and the solo programmer would not make changes to the code that strayed very far from what the pair understood while working together.

  • If a pair is nearly finished with a story, one member of the pair was permitted to work alone to quickly wrap up the story. Honor system: the programmer would work for only 15-30 minutes alone; if it became clear that the work remaining was more involved than a quick wrap-up, the solo programmer would stop immediately and resume working with a partner at the next opportunity.

  • A team member may experiment alone, doing a quick spike in an effort to understand some part of the system. Honor system: the solo programmer would commit none of the spike's code; upon returning to the studio, the developer would collaborate on an equal basis within the pair, sharing what was learned via the spike but not ramming it through to the mainline system without agreement and understanding from the partner.

  • At the next daily stand-up, any developer who had worked solo since the previous class session would explain all work done solo to the entire team.

After the second iteration, the team was happy with this adaptation. Only three of the ten developers had worked alone during the iteration, all doing work well within the letter of the new rules and, even more important, well within the spirit of the new rules, too. The rest of the team was happy with the story wrap-up, the refactoring, and the experimentation that had been done. This seemed like a nice win for the group, as it was the one chance to adapt the XP practices to its own needs, and the modification worked well for them. Just being agile!

As part of the course retrospective, I asked the students whether they would have preferred working in a domain they understood better, perhaps allowing them to focus better on the new practices and new programming language. Here are the notes I had made for myself before class, to be shared after they had a chance to answer:

My thoughts on the domain:

I think it was essential that we work in a domain that pushed you out of your comfort zone.

  • It is hard enough to break habits at all, let alone when working on a problem you already understand well -- or think you do.
  • The benefits of agile approaches come in helping the team to learn and to incorporate that learning into the system.
  • Not knowing the domain forced you to ask lots of questions. That's how real projects work. That's also the best way to work on any system, even ones you think you already understand.
  • There is a realness to reality. Choices matter. When the user is a real person and is passionate about the product, choices matter.

I was so impressed with the answers the students gave. They covered nearly all of my points, sometimes better than I did. One student identified the trade-off between working in familiar and unfamiliar domains. Another student pointed out that not knowing the domain made the team slow down and think, which helped them design better tests and code. Yet another remarked that there probably was no domain that they all knew equally well anyway. The comment that struck me as most insightful was, roughly, "If we worked in a domain we all understand, then we would probably all understand it differently." That captures the problem of requirements analysis as well as anything I've ever read in a software engineering textbook.

It occurred to me while writing this that I should share the list of readings I asked students to study. It's nothing special, papers most people know about, but it might be a subset of all possible readings that others might find useful. Sharing the list will also make it possible for you to help me make it better for the next time I offer the course! I'll gather up all of my links and post the list soon.

A few days ago, alumnus Wade Arnold tweeted:

Sad to see May graduates in Computer Science applying to do website design and updates. Seriously where did the art of programming go?

The small and largely disconnected programming problems that we assign students in most courses may engage some students in the joy of programming, but I suspect that these problems do not engage enough students deeply enough. I remain convinced that courses like this one, with a real problem explored more deeply and more broadly, with the student developers more in control of what they do and how, are among the few things we can do in a traditional university setting to help students grok what software development is all about -- and why it can satisfy in ways that other activities often cannot.

Next up on the teaching front: the compilers course. But first, I turn my thoughts to sustainable pace and look forward to breathing free for a few days.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 24, 2010 8:31 PM

Teaching TDD and Incremental Design

Today in the lab while developing code, we encountered another example of stories colliding. We are early in the project, and most parts of the system are still inchoate. So collisions are to be expected.

Today, two pairs were working on stories involving Account objects, which at the start of the day knew only their account number, account name, and current balance. In both stories, the code would have to presume a more knowledgeable object, one that knows something about the date on which the balance is in effect and the various entries that have modified the balance over time.

One team asked, "How can we proceed without knowing the result of the other story?" More importantly, how can either team proceed without knowing how journal transactions will be recorded as entries to the accounts? Implicit in the question, sometimes, is a suggestion disguised as a question: Isn't this an example of where we should do a little up-front design?

In a professional setting, a little up-front design might be the right answer. But with newcomers to XP and TDD, I am trying to have us all think in as pure an XP way as possible. Finding the right point on the continuum between too little up-front design and too much is something better done once the developer has more experience with both ways of working.

This situation is actually a perfect place for us to reinforce the idea behind TDD and why it can help us write better software. Whatever the two pairs do right now, there will likely be some conflicts that need to be merged. Taking that as a given, how can the pairs proceed best? As they write their tests, each pair should ask itself:

What is the simplest interface we can possibly use to implement this story?

When we write a test, we design a little part of our system's internal interface. Students are used to knowing everything about an already-designed object when they write code to use the object. Programming test-first forces us to think about the interface first, without being privy to implementation. This is good, as it will encourage us to design components that are as loosely coupled as possible. Stories we implement later will impose more specific details on how the object behaves, and we can handle more detailed implementation issues then. This is good, as it encourages us (1) to write simple tests that do not presume any more about the object than is required, and (2) to do the simplest thing that could possibly work to implement the new behavior, because those later stories may well cause our implementation to be extended or changed altogether.
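In our Ruby and Test::Unit setup, that interface-first moment looks something like this sketch. Everything named here -- the constructor arguments, the method names -- is hypothetical, the kind of guess one pair might make while writing the test, not the team's settled design:

```ruby
require 'test/unit'

# Writing the test first forces us to invent Account's interface
# before any implementation exists. These names are one pair's
# first guess, to be revised as later stories arrive.
class AccountInterfaceTest < Test::Unit::TestCase
  def test_new_account_knows_its_opening_balance
    account = Account.new("1010", "Checking", 500)
    assert_equal 500, account.balance
    assert_equal "Checking", account.name
  end
end

# The simplest thing that could possibly work, written only after
# the test told us what the interface must be.
class Account
  attr_reader :number, :name, :balance

  def initialize(number, name, balance)
    @number, @name, @balance = number, name, balance
  end
end
```

Later stories will impose more behavior on Account; until then, the test pins down only what this story needs.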

Our story collision is both an obstacle of sorts and an opportunity to let our tests drive us forward in small steps!

This collision also has another lesson in store for us. The whole team has been avoiding a couple of stories about closing journals at the end of the month. Implementing these stories will teach us a lot about what accounts know and look like. By avoiding them, the team has made implementing some of our simplest stories more contingent than they need to be.

Over the weekend, Kent Beck tweeted:

got stuck. wrote a test. unstuck.

A bit later he followed up:

i get stuck trying to write all the logic at once. feels great to deliberately ignore cases that had me stumped. "that's another test"

This is a skill I hope my students can develop this month. In order for that to happen, I need to watch for opportunities to point them in the right direction. When a pair is at an impasse, unsure of what to do next, I need to suggest that they step back and try to take a smaller step. Write a test for that something smaller, and see where that leads them. The Account saga is a useful example for me to keep in mind.

If nothing else, teaching this course in real time -- in a lab with students testing, designing, and coding all the time -- makes clear something we all know. It is one thing to be able to do something, to react and act in response to the world. It is another thing altogether to teach or coach others to do the same thing. I have to have ready at hand questions to ask and suggestions to make as students encounter situations that I handle subconsciously through ingrained experience. Teaching this course is fun on a lot of levels.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 19, 2010 4:39 PM

TDD Exploration on an Agile Student Project

Man, I am having fun teaching my agile course. Writing code is fun, and talking design and technique with students in real time is fun. Other than being so intense as to tire me out every day, I think I could get used to this course-in-a-month model.

I had expected that this week would be our first iteration, but it became clear early on that the student team did not understand the domain of our project -- a simple home accounting system I might use -- well enough to begin a development iteration. Progress would have been too slow, and integration too halting, to make the time well-spent.

So, in agile fashion, we adapted. We began our time together yesterday by discussing the problem a bit more at the one-page story level. The team was having difficulty with the idea of special journals and different kinds of transactions. We collectively decided to focus on the general journal for recording all transactions, and so adjusted our thinking and our stories.

Then we took inspiration from XP's practice of a spike solution. In XP, a spike is a simple program that helps a team to explore a thorny technical or design problem and learn enough to begin working on live code. As Ward Cunningham relates, a spike is the answer to the question, "What is the simplest thing we can program that will convince us we are on the right track?" For our team, the problem wasn't technical or design-related; it was purely a matter of insufficient domain understanding.

We paired up, took a simple story, wrote a test, and wrote code. The story was:

Record a check written on May 15 for cash, in the amount of $100.

By writing even one test for this story, the team began to learn new things about the system to be built. For example,

  • A transaction is atomic. You can't add a debit to a journal independent of its balancing credit.

  • Recording a transaction does not update any account. That happens when the journal is closed at the end of the period.

  • There is a difference between the model, the core computation and data of a program, and the view, the way users see or experience the program's behavior. We can and usually should think about these parts of our program as independent.

The first two of these lessons are about the domain. The third is about software design. Both kinds of lesson are essential ones for a young team to learn, or be reminded of, before building the system.

Some pairs explored this story and its implications for an hour or more. Others tried to forge ahead further, with a second story:

Record receipt of a $200 paycheck, with $100 going to my checking account, $20 to my prepaid medical expense account, and $80 to income tax withholding.

Again, these teams learned something: A transaction may consist of multiple debits or credits. This also means that a transaction must be able to record multiple amounts, unlike in the first story, because several debits may total up to the value of a single credit. Finally, if there are multiple debits, the sum of their values must total exactly to the value of the single credit.
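The balancing rule the pairs discovered can be sketched in a few lines of Ruby. The Transaction class and its hash-of-amounts interface here are my invention for illustration, not the design the team settled on:

```ruby
class UnbalancedTransactionError < StandardError; end

# A transaction is atomic: it cannot exist unless its debits total
# exactly to its credits. Debits and credits are hashes mapping an
# account name to an amount.
class Transaction
  attr_reader :date, :debits, :credits

  def initialize(date, debits, credits)
    total_debits  = debits.values.sum
    total_credits = credits.values.sum
    if total_debits != total_credits
      raise UnbalancedTransactionError,
            "debits (#{total_debits}) != credits (#{total_credits})"
    end
    @date, @debits, @credits = date, debits, credits
  end
end

# The paycheck story: three debits balancing a single $200 credit.
paycheck = Transaction.new("2010-05-15",
                           { "Checking" => 100,
                             "Prepaid Medical" => 20,
                             "Tax Withholding" => 80 },
                           { "Salary Income" => 200 })
```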

Each little bit of learning will help the team to begin to code productively and to be prepared to grow the design of the system.

The development team was not the only party who learned a lot with this spike. By watching the pairs implement a story or two, offering advice and answering questions, I did, too. I play two roles on this project, both as a teacher of sorts. I am the customer for the product and fully intend to use it when the course ends. This makes me a teacher of the domain, both specifically this program and generally Accounting 101. I am also the coach, which finds me helping to guide the XP process as well as teaching students a bit about Ruby, software design, and OO.

By collaborating with the development team as they wrote spike-like code, I learned a lot about how to write better stories. This is true of me as customer, who realized that my original stories lacked the concrete focus the team needed to be able to focus on essential features of the program. It is also true of me as coach, who realized that certain stories would be especially useful in helping the team arrive at a more valuable design more quickly.

It is more than okay for the coach and the customer to learn as much from working with the team as the team members themselves; it is expected. That's one of the great attractions and one of the great assets of agile software development. My students are getting to see that early in their experience with XP.

Tomorrow, I think we will shake off our experimental mindset and begin to write our program. I don't have a lot of experience with a team starting a brand-new system from Code Zero in XP; most of my greenfield development has been on projects where I am the only programmer or the only one writing the initial code base. I am considering trying out advice that @jamesshore tweeted last week:

When starting brand-new product, codebase too small for 8 programmers to work separately. Instead, use projector. 1 driver, 7 nav

This seems like a great way to build an initial code base around a set of commonly-understood objects and interfaces. If I try it out, I will need to avoid one temptation. I will surely want to drive, but I should let a student -- a member of the development team -- control the keyboard!


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 17, 2010 5:19 PM

Course Notes from the Treadmill

Teaching a course two hours a day, especially a course that is essentially new in this incarnation, feels like running on ice. I'm enjoying every day, but tomorrow becomes today way too fast!

At the end of last week, I began to feel the effects of compressing a 3-credit course into four weeks. At the end of Week 1, we are a quarter of the way through the course. But one week is not really enough time for all these new ideas to soak into a student's brain or fingertips. TDD, refactoring, pairing, .... Ruby, an IDE, a VCS, ... Our brains take time to adjust. The students are doing remarkably well under the conditions, but some of them are feeling the rush of days, too.

I most noticed the compression in my conflicting desires to do stuff and to talk more about stuff before doing anything big. Most professors tend to err on the side of talking more, but that isn't the best way to learn most disciplines. I decided that we had seen enough background on XP and that students had practiced enough on small exercises such as the spreadsheet TDD challenge and refactoring Fowler's code, Ruby style. It was time to start building software, and learn as we go. So today we played the Planning Game and put ourselves in position to write Line 1 of code tomorrow.

It's been interesting talking to students about XP's practices. Pairing seemed odd to many of them at first, but they seem to have taken to it quickly. They are social beings. Refactoring seems like the Right Thing To Do to many of them, but in practice it is hard. Using a tool like Reek to identify some smells and an IDE like RubyMine to perform some of the refactoring will help, but RubyMine does not yet implement enough different refactorings to really dampen their fear of breaking code.

TDD is causing a couple of programmers fits, because it inverts how they think about coding. When it comes time to write tests for the app they are building -- no longer a small exercise in their minds -- I expect us to struggle as we think about simple design steps. I hope, though, that this practice will get them over the hump to see how writing tests early or first can really affect how we think about our code.

I am still surprised when developers bemoan their inability to deliver working code and then balk so mightily at a practice that could help them in a major way. But, as we all know, old habits die hard. When the mind is ready, change can happen. All we can hope for in a course is to try to be in position for change to occur whenever the mind becomes ready.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 12, 2010 7:41 AM

A Morning in the Life

My mailbox this morning contained a long complaint from a student about a grade, a request to meet with a faculty member about an issue he didn't handle well this spring, a request for space for summer undergrad research, news that a student hold has been cleared so that a summer course can be created, a spreadsheet of my department's fiscal year-to-date spending for end-of-year planning, calls from upper administration for work on student outcomes assessment, merit pay evaluations, and year-end faculty evaluation letters, and several dozen miscellaneous messages. That doesn't count mailing-list mail, both professional and personal, which is procmailed into separate boxes. Such is the life of a department head.

I also received from a student a link to a negative review of git. I am considering using distributed VCS in my agile course, with git and Mercurial at the top of the list. Do you have a suggestion? Mercurial has been around longer, I think, and there are some good tutorials on it. My students need to get up to speed relatively quickly with the basic use cases of whatever tool we use, for a team of ten doing XP-style development.
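Whichever tool we choose, the basic use cases really are few. Here is a minimal git sketch of one pair's daily loop -- the file name and commit message are invented for illustration, and Mercurial's equivalents are hg init / add / commit / log:

```shell
git init project && cd project
git config user.email "pair@example.com"   # identity, required before committing
git config user.name  "Pair One"
echo "class Account; end" > account.rb     # ...pair on a story, run the tests...
git add account.rb                         # stage the change
git commit -m "Add empty Account class"    # record it locally
git log --oneline                          # review the history
# Against the team's shared repository, two more commands close the loop:
#   git pull --rebase   # bring in teammates' commits before publishing
#   git push            # publish your own
```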

After two days of class, I feel pretty good about the prospects of this group of ten students. They seem willing to collaborate and eager to learn. We are taking to heart the agile value "Responding to change over following a plan", with a change to Ruby as our working language and (git | Mercurial) as our VCS. This means I get to be agile, too, as I prepare some new course materials built around Ruby, Test::Unit, and so on. I'll be looking into RubyMine as an IDE with refactoring support. Do you have any experience using it? Let me know.


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Teaching and Learning

April 29, 2010 8:47 PM

Turning My Thoughts to Agile Software Development

April came and went in a flurry. Now begins a busy time of transition. Today was the last session of my programming languages course. This semester taught me a few new things, which I hope to catalog and consider soon.

Ordinarily the next teaching I do after programming languages is the compiler course that follows. I will be teaching that course, in the fall, as we seem to have attracted a healthy enrollment. But my next teaching assignments will be novelty and part-novelty all in one. I am teaching Agile Software Development in our May term, which runs May 10-June 4. This is a novelty for me in several ways. In all my years on the faculty, I have never taught summer school (!), and I have certainly never taught a 3-credit course in only four weeks. I expect the compressed schedule to create an intensity and focus unlike a regular course, but I fear that it will be hard to reflect much as we keep pedaling every day for two hours. Full speed ahead!

The course is only part novelty because I have taught Agile Software Development twice before, in regular semesters. I'm also quite in tune with the agile values, principles, and practices. Still, seven years is an eon in the software world, so much has changed since my last offerings in 2003 and prior. Tools such as testing frameworks have evolved, changed outright, or sprung up new. Scrum, lean, and kanban have become major topics of discussion even as the original practices of XP remain the foundation of most agile teams. Languages have faded and surged. There is a lot of new for me in this old course.

The compressed schedule offers opportunities I have not had before when teaching a development course. Class will meet two hours every business day for four weeks. Students will be immersed in this course. Most will be working in the afternoons, but few will be taking a second course. This allows us to engage the material and our projects with an intensity we can't often muster. (I'll also have to be careful to pace the course so that we don't wear ourselves out, which seems a danger. This is a chance for me and the class to practice one of XP's bedrock practices, sustainable pace!)

The class will be small, only a dozen or so, which also offers interesting possibilities for our project. The best way to learn new practices is to use them, and with the class meeting for nearly eleven hours a week we have a chance to dig in and use tools and practice the practices for extended periods, as a group. The chance to pair program and work with a story board has never been so vivid for one of my classes.

I hope that we are able to craft a course and project that help us bypass some of the flaws with typical course projects. Certainly, we will be collocated more frequently and for longer stretches than in my department's usual project course, and we will be together enough to learn to work as a team. There shouldn't be the constant context switching between courses that students face during the academic year. Whether we can manage close interaction with a customer depends a lot on the availability of others and on the project we end up pursuing.

We do face many of the same challenges as my software engineering course last fall. Our curriculum creates a Babel of several programming languages. Students will come to the course with a cavernous range of experience, skills, and maturity. That gap offers a good test of how pair programming, collective code ownership, and apprenticeship can help build and share culture and values. The lack of a common tongue is simply a challenge, though, if we hope to deliver software of value in four short weeks.

The next eleven days will find me busy, busy, busy, thinking about my course, organizing readings, and preparing a project and tools.

I am curious to hear what you think:

  • Which ideas, tools, and practices from the agile world ten years ago remain essential today?
  • What changes in the last decade fundamentally changed what we mean by agile development?
  • What readings -- especially accessible primary sources available on the web -- do you recommend?


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

April 23, 2010 4:52 PM

Who Are You Really Testing?

Earlier this week, I read How to Fail as a Teacher. Given some of the things I have been dealing with as department head lately, one particular method stuck out:

Always view test scores as about the failure or success of the student and not as a tool to evaluate your teaching

Everyone failed one part of the test? The whole class must be dummies. Never assume that you might have taught it wrong. That sort of thinking could lead to new teaching methods, reteaching material or worst case differentiated instruction.

I know that when first starting, I required a big attitude adjustment (country music warning) in this area. First, I was doing a great job, and the students didn't get it. Then, I realized I was doing a less than perfect job, but the students just had to adapt. Finally, I now know that I am doing a less than perfect job, and my job is to find ways to help students get it. Even when I am doing a pretty good job, it is still my job to find ways to help students get it.

I'm not perfect at being imperfect yet, but at least I am aware. It seems that some profs never quite get there.

Lately, we've been thinking a lot about outcomes assessment for our academic programs again. How do we get better as a department? How do we get better as individual instructors, at doing what we hope to do as educators? Writing down the outcomes is step one. Observing results and taking them seriously is step two. Feeding back what we learn into our personal behavior, our courses, and our programs is the third.

If your students aren't reaching the goal, then maybe you need to teach them differently. If most of your students never reach the goal, or if some of your students regularly do not, then almost certainly you need to do something different.

Oh, and the article's first two ways to fail are effective ones, too.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

April 22, 2010 8:36 PM

At Some Point, You Gotta Know Stuff

A couple of days ago, someone tweeted a link to Are you one of the 10% of programmers who can write a binary search?, which revisits a passage by Jon Bentley from twenty-five years ago. Bentley observed back then that 90% of professional programmers were unable to produce a correct version of binary search, even with a couple of hours to work. I'm guessing that most people who read Bentley's article put themselves in the elite 10%.

Mike Taylor, the blogger behind The Reinvigorated Programmer, challenged his readers. Write your best version of binary search and report the results: is it correct or not? One of his conditions was that you were not allowed to run tests and fix your code. You had to make it run correctly the first time.

Writing a binary search is a great little exercise, one I solve every time I teach a data structures course and occasionally in courses like CS1, algorithms, and any programming language- or style-specific course. So I picked up the gauntlet.

You can see my solution in a comment on the entry, along with a sheepish admission: I inadvertently cheated, because I didn't read the rules ahead of time! (My students are surely snickering.) I wrote my procedure in five minutes. The first test case I ran pointed out a bug in my stop condition, (>= lower upper). I thought for a minute or so, changed the condition to (= lower (- upper 1)), and the function passed all my tests.

In a sense, I cheated the intent of Bentley's original challenge in another way. One of the errors he found in many professional developers' solution was an overflow when computing the midpoint of the array's range. The solution that popped into my mind immediately, (lower + upper)/2, fails when lower + upper exceeds the size of the variable used to store the intermediate sum. I wrote my solution in Scheme, which handles bignums transparently. My algorithm would fail in any language that doesn't. And to be honest, I did not even consider the overflow issue; having last read Bentley's article many years ago, I had forgotten about that problem altogether! This is yet another good reason to re-read Bentley occasionally -- and to use languages that do heavy lifting for you.
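For comparison with the Scheme in my comment, here is the same algorithm sketched in Ruby -- my own translation, not the code I posted. Ruby, like Scheme, promotes integers to bignums transparently, so the midpoint cannot overflow here, but lo + (hi - lo) / 2 is the habit that avoids Bentley's bug in fixed-width languages:

```ruby
def binary_search(array, target)
  lo, hi = 0, array.length - 1
  while lo <= hi
    mid = lo + (hi - lo) / 2     # overflow-safe form of (lo + hi) / 2
    case array[mid] <=> target
    when 0  then return mid      # found: return the index
    when -1 then lo = mid + 1    # too small: search the upper half
    else         hi = mid - 1    # too large: search the lower half
    end
  end
  nil                            # absent
end
```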

But.

One early commenter on Taylor's article said that the no-tests rule took away some of his best tools and his usual way of working. Even if he could go back to basics, working in an unfamiliar style probably made him less comfortable and less likely to produce a good solution. He concluded that, for this reason, a challenge with a no-tests rule is not a good test of whether someone is a good programmer.

As a programmer who prefers an agile style, I felt the same way. Running that first test, chosen to encounter a specific possibility, did exactly what I had designed it to do: expose a flaw in my code. It focused my attention on a problem area and caused me to re-examine not only the stopping condition but also the code that changed the values of lower and upper. After that test, I had better code and more confidence that my code was correct. I ran more tests designed to examine all of the cases I knew of at the time.

As someone who prides himself in his programming-fu, though, I appreciated the challenge of trying to design a perfect piece of code in one go: pass or fail.

This is a conundrum to me. It is similar to a comment that my students often make about the unrealistic conditions of coding on an exam. For most exams, students are away from their keyboards, their IDEs, their testing tools. Those are big losses to them, not only in the coding support they provide but also in the psychological support they provide.

The instructor usually sees things differently. Under such conditions, students are also away from Google and from the buddies who may or may not be writing most of their code in the lab. To the instructor, this nakedness is a gain. "Show me what you can do."

Collaboration, scrapheap programming, and search engines are all wonderful things for software developers and other creators. But at some point, you gotta know stuff. You want to know stuff. Otherwise you are doomed to copy and paste, to having to look up the interface to basic functions, and to being able to solve only those problems Google has cached the answers to. (The size of that set is growing at an alarming rate.)

So, I am of two minds. I agree with the commenter who expressed concern about the challenge rules. (He posted good code, if I recall correctly.) I also think that it's useful to challenge ourselves regularly to solve problems with nothing but our own two hands and the cleverness we have developed through practice. Resourcefulness is an important trait for a programmer to possess, but so are cleverness and meticulousness.

Oh, and this was the favorite among the ones I read:

I fail. ... I bring shame to professional programmers everywhere.

Fear not, fellow traveler. However well we delude ourselves about living in a Garrison Keillor world, we are all in the same boat.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

April 13, 2010 9:03 PM

Unexpected Encounters with Knowledge

In response to a question from Francesco Cirillo, Ward Cunningham says:

Reflecting on your career choices is there anything you would have done differently?

I'm pretty happy with my career, though I never did enough calculus homework if I think how much calculus has influenced how I think in terms of small units of change.

Calculus came up in my programming languages course today, while we were talking about maps, finite functions, and discrete math. Our K-12 system aims students toward calculus, and when they arrive at the university they often end up taking a calculus course if they haven't yet already. Many CS students struggle in calc. They can't help but notice the dearth of applications of calculus in most of their CS courses and naturally ask, "Why are we required to take this class?"

This is a common discussion even among faculty. I can argue both sides of the case, though I admit to believing that understanding the calculus at some level is an essential part of being an educated person, just as understanding the literary and historical context in which one grows and lives is essential. The calculus is one of the crowning achievements of the Enlightenment and helped to usher in the scientific advances that define in large part the world in which we all live today. But Cunningham's reflection encourages us to think about calculus in a different light.

Notice that Cunningham does not talk about direct application of the calculus in any program he wrote. The only program he mentions specifically is WyCash, a portfolio management system. Nor does he talk in an abstract academic way about intellectual achievement and the Age of Reason.

He says instead that the calculus's notion of small units of change has affected the way he thinks. I'm confident that he is thinking here not only of agile software development, with its short iterations and rapid feedback cycle, but also of test-driven development, patterns, and wiki. One can accumulate value in the smallest of the slices. If one accumulates enough of them, then over time the value one amasses can be the area under quite a large curve of action.
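Just to make the idea concrete, here is a quick sketch of the calculus notion Cunningham alludes to, written as a few lines of Python. The function and step count are arbitrary choices for illustration, not anything from Ward's work: accumulate enough small slices, and the total approximates the area under the curve.

```python
# Accumulate many small units of change; the sum approximates the
# area under a curve -- the heart of the integral calculus.

def accumulate(f, a, b, n=10000):
    """Approximate the integral of f over [a, b] using n small slices."""
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        total += f(a + i * dx) * dx   # one small slice of value
    return total

# The area under y = x^2 from 0 to 1 is exactly 1/3.
print(accumulate(lambda x: x * x, 0.0, 1.0))  # close to 0.3333
```

No single slice amounts to much, but ten thousand of them add up to something real. That is short iterations and rapid feedback, in mathematical form.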

This is an indirect application of knowledge. Ward either did enough calculus homework or paid enough attention in class that he was able to understand one of the central ideas underlying the discipline. That understanding probably lay fallow in his mind until he began to see how the idea was recurring in his programming, in his community building, and in his approach to software development. He was then able to think about the implications of the idea in his current work and learn from what we know about the calculus.

I am a fan of Ward's in large part because of his wonderful ability to make such connections. It is hard to anticipate this kind of connection across domains. That's why it's so important to be educated widely and to take seriously ideas from all corners of human accomplishment. Even calc class.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

March 30, 2010 8:57 PM

Matthias Felleisen Wins the Karl Karlstrom Award

Early today, Shriram Krishnamurthi announced on the PLT Scheme mailing list that Matthias Felleisen had won the Karl Karlstrom Outstanding Educator Award. The ACM presents this award annually

to an outstanding educator who is ... recognized for advancing new teaching methodologies, or effecting new curriculum development or expansion in Computer Science and Engineering; or making a significant contribution to the educational mission of the ACM.

A short blurb in the official announcement touts Felleisen "for his visionary and long-term contributions to K-12 outreach programs, innovative textbooks, and pedagogically motivated software". Krishnamurthi, in his message to the community borne out of Felleisen's leadership and hard work, said it nicely:

Everyone on this list has been touched, directly or indirectly, by Matthias Felleisen's intellectual leadership: from his books to his software to his overarching vision and singular execution, as well as his demand that the rest of us live up to his extraordinary standards.

Matthias Felleisen

As an occasional Scheme programmer and a teacher of programmers, I have been touched by Felleisen's work regularly over the last 20 years. I first read "The Little Lisper" long before I knew Matthias, and it changed how I approached programming with inductive data types. I assign "The Little Schemer" as the only textbook for my programming languages course, which introduces and uses functional programming. I have always felt as if I could write my own materials to teach functional programming and the languages content of the course, but "The Little Schemer" is a tour de force that I want my students to read. Of course, we also use Dr. Scheme and all of its tools for writing Scheme programs, though we barely scratch the surface of what it offers in our one-semester course.

We have never used Felleisen's book "How to Design Programs" in our introductory courses, but I consider its careful approach to teaching software design one of the most important intro CS innovations of the last twenty years. Back in the mid-1990s, when my department was making one of its frequent changes to the first-year curriculum, I called Matthias to ask his advice. Even after he learned that we were not likely to adopt his curriculum, he chatted with me for a while and offered me pedagogical advice and even strategic advice about making a case for a curriculum based on a principle outside any given language.

That's one of the ironic things about Felleisen's contribution: He is most closely associated with Scheme and tools built in and for Scheme, but his TeachScheme! project is explicitly not about Scheme. (The "!" is even pronounced "not", a programming pun using the standard C meaning of the symbol.) TeachScheme! uses Scheme as a tool for creating languages targeted at novices who progress through levels of understanding and complexity. Just today in class, I talked with my students about Scheme's mindset of bringing to users of a language the same power available to language creators. This makes it an ideal intellectual tool for implementing Felleisen's curriculum, even as its relative lack of popularity has almost certainly hindered adoption of the curriculum more widely.

As my department has begun to reach out to engage K-12 students and teachers, I have come to appreciate just how impressive the TeachScheme! outreach effort is. This sort of engagement requires not only a zeal for the content but also sustained labor. Felleisen has sustained both his zeal and his hard work, all the while building a really impressive group of graduate students and community supporters. The grad students all seem to earn their degrees, move on as faculty to other schools, and yet remain a part of the effort.

Closer to my own work, I continue to think about the design recipe, which is the backbone of the HtDP curriculum. I remain convinced that this idea is compatible with the notion of elementary patterns, and that the design recipe can be integrated with a pattern language of novice programs harmoniously to create an even more powerful model for teaching new programmers how to design programs.

As Krishnamurthi wrote to the PLT Scheme developer and user communities, Felleisen's energy and ideas have enriched my work. I'm happy to see the ACM honor him for his efforts.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 24, 2010 7:42 PM

SIGCSE Day 2 -- Al Aho on Teaching Compiler Construction

[A transcript of the SIGCSE 2010 conference: Table of Contents]

Early last year, I wrote a blog entry about using ideas from Al Aho's article, Teaching the Compilers Course, in the most recent offering of my course. When I saw that Aho was speaking at SIGCSE, I knew I had to go. As Rich Pattis told me in the hallway after the talk, when you get a chance to hear certain people speak, you do. Aho is one of those guys. (For me, so is Pattis.)

The talk was originally scheduled for Thursday, but persistent fog over southeast Wisconsin kept several people from arriving at the conference on time, including Aho. So the talk was rescheduled for Friday. I still had to see it, of course, so I skipped the attention-grabbing "If you ___, you might be a computational thinker".

Aho's talk covered much of the same ground as his inroads paper, which gave me the luxury of being able to listen more closely to his stories and elaborations than to the details. The talk did a nice job of putting the compiler course into its historical context and tried to explain why we might well teach a course very different -- yet in many ways similar -- to the course we taught forty, twenty-five, or even ten years ago.

He opened with lists of the top ten programming languages in 1970 and 2010. There was no overlap, which introduced Aho's first big point: the landscape of programming languages has changed in a big way since the beginning of our discipline, and there have been corresponding changes in the landscape of compilers. The dimensions of change are many: the number of languages, the diversity of languages, the number and kinds of applications we write. The growth in number and diversity applies not only to the programming languages we use, which are the source language to a compiler, but also to the target machines and the target languages produced by compilers.

From Aho's perspective, one of the most consequential changes in compiler construction has been the rise of massive compiler collections such as gcc and LLVM. In most environments, writing a compiler is no longer a matter of "writing a program" as much as a software engineering exercise: work with a large existing system, and add a new front end or back end.

So, what should we teach? Syntax and semantics are fairly well settled as a matter of theory. We can thus devote time to the less mathematical parts of the job, such as the art of writing grammars. Aho noted that in the 2000s, parsing natural languages is mostly a statistical process, not a grammatical one, thanks to massive databases of text and easy search. I wonder if parsing programming languages will ever move in this direction... What would that mean in terms of freer grammars, greater productivity, or confusion?

With the availability of modern tools, Aho advocates an agile "grow a language" approach. Using lex and yacc, students can quickly produce a compiler in approximately 20 lines of code. Due to the nature of syntax-directed translation, which is closely related to structural recursion, we can add new productions to a grammar with relative ease. This enables us to start small, to experiment with different ideas.
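To illustrate why "growing a language" is cheap under syntax-directed translation, here is a hedged sketch in Python rather than lex and yacc. This is not Aho's course code; the point is only that evaluation by structural recursion means a new production costs one new case.

```python
# Syntax-directed translation as structural recursion over a tree:
# each production in the grammar corresponds to one case here, so
# growing the language means adding a clause, not rewriting the system.

def evaluate(node):
    op = node[0]
    if op == "num":                      # ("num", 3)
        return node[1]
    if op == "add":                      # ("add", left, right)
        return evaluate(node[1]) + evaluate(node[2])
    if op == "mul":                      # ("mul", left, right)
        return evaluate(node[1]) * evaluate(node[2])
    # Growing the language: negation, added later with a single clause.
    if op == "neg":                      # ("neg", expr)
        return -evaluate(node[1])
    raise ValueError(f"unknown production: {op}")

# 2 + -(3 * 4)
print(evaluate(("add", ("num", 2), ("neg", ("mul", ("num", 3), ("num", 4))))))  # -10
```

With lex and yacc, the same property shows up as adding one grammar rule and its semantic action; the rest of the compiler is untouched. That is what makes it possible to start small and experiment.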

The Dragon book circa 2010 adds many new topics to its previous editions. It just keeps getting thicker! It covers much more material, both breadth and depth, than can be covered in the typical course, even with graduate students. This gives instructors lots of leeway in selecting a subset around which to build a course. The second edition already covers too much material for my undergrad course, and without enough of the examples that many students need these days. We end up selecting such a small subset of the material that the price of the book is too high for the number of pages we actually use.

The meat of the talk matched the meat of his paper: the compiler course he teaches these days. Here are a few tidbits.

On the Design of the Course

  • Aho claims that, through all the years, every team has delivered a working system. He attributes this to experience teaching the course and the support they provide students.
  • Each semester, he brings in at least one language designer as a guest speaker, someone like Stroustrup or Gosling. I'd love to do this but don't have quite the pull, connections, or geographical advantage of Aho. I'll have to be creative, as I was the last time I taught agile software development and arranged a phone conference with Ken Auer.
  • Students in the course become experts in one language: the one they create. They become much more knowledgeable in several others: the languages they use to write, build, and test their compiler.

On System Development

  • Aho sizes each student project at 3,000-6,000 LOC. He uses Boehm's model to derive a team size of 5, which fits nicely with his belief that 5 is the ideal team size.
  • Every team member must produce at least 500 lines of code on the project. I have never had an explicit rule about this in the past, but experience in my last two courses with team projects tells me that I should.
  • Aho lets teams choose their own technology, so that they can work in the way that makes them most comfortable. One serendipitous side effect of this choice is that it requires him to stay current with what's going on in the world.
  • He also allows teams to build interpreters for complex languages, rather than full-blown compilers. He feels that the details of assembly language get in the way of other important lessons. (I have not made that leap yet.)

On Language Design

  • One technique he uses to scope the project is to require students to identify an essential core of their language along with a list of extra features that they will implement if time permits. In 15 years, he says, no team has ever delivered an extra feature. That surprises me.
  • In order to get students past the utopian dream of a perfect language, he requires each team to write two or three programs in their language to solve representative problems in the language's domain. This makes me think of test-first design -- but of the language, not the program!
  • Aho believes that students come to appreciate our current languages more after designing a language and grappling with the friction between dreams and reality. I think this lesson generalizes to most forms of design and creation.

I am still thinking about how to allow students to design their own language and still have the time and energy to produce a working system in one semester. Perhaps I could become more involved early in the design process, something Aho and his suite of teaching assistants can do, or even lead the design conversation.

On Project Management

  • "A little bit of process goes a long way" toward successful delivery and robust software. The key is finding the proper balance between too much process, which stifles developers, and too little, which paralyzes them.
  • Aho has experimented with different mechanisms for organizing teams and selecting team leaders. Over time, he has found it best to let teams self-organize. This matches my experience as well, as long as I keep an eye out for obviously bad configurations.
  • Aho devotes one lecture to project management, which I need to do again myself. Covering more content is a siren that scuttles more student learning than it buoys.

~~~~

Aho peppered his talk with several reminiscences. He told a short story about lex and how it was extended with regular expressions from egrep by Eric Schmidt, Google CEO. Schmidt worked for Aho as a summer intern. "He was the best intern I ever had." Another interesting tale recounted one of his doctoral student's effort to build a compiler for a quantum computer. It was interesting, yes, but I need to learn more about quantum computing to really appreciate it!

My favorite story of the day was about awk, one of Unix's great little languages. Aho and his colleagues Weinberger and Kernighan wrote awk for their own simple data manipulation tasks. They figured they'd use it to write throwaway programs of 2-5 lines each. In that context, you can build a certain kind of language and be happy. But as Aho said, "A lot of the world is data processing." One day, a colleague came in to his office, quite perturbed at a bug he had found. This colleague had written a 10,000-line awk program to do computer-aided design. (If you have written any awk, you know just how fantabulous this feat is.) In a context where 10K-line programs are conceivable, you want a very different sort of language!

The awk team fixed the bug, but this time they "did it right". First, they built a regression test suite. (Agile Sighting 1: continuous testing.) Second, they created a new rule. To propose a new language feature for awk, you had to produce regression tests for it first. (Agile Sighting 2: test-first development.) Aho has built this lesson into his compiler course. Students must write their compiler test-first and instrument their build environments to ensure that the tests are run "all of the time". (Agile Sighting 3: continuous integration.)
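The awk team's rule translates directly into practice. Here is a small illustration (in Python, not awk, and entirely my own invention): the "language" is a toy calculator, and the `^` operator is the proposed feature whose regression tests were written before the feature itself.

```python
# Test-first language growth: the regression tests for '^' below were
# written before the '^' clause existed, in the spirit of the awk rule.

def calc(expr):
    """Evaluate a tiny expression language: integers, +, *, and ^."""
    # Split on the lowest-precedence operator present: + first, then *, then ^.
    if "+" in expr:
        left, right = expr.split("+", 1)
        return calc(left) + calc(right)
    if "*" in expr:
        left, right = expr.split("*", 1)
        return calc(left) * calc(right)
    if "^" in expr:                      # the feature the tests demanded
        left, right = expr.split("^", 1)
        return calc(left) ** calc(right)
    return int(expr)

# The regression suite, runnable "all of the time":
assert calc("2+3") == 5
assert calc("2*3+4") == 10
assert calc("2^3") == 8
assert calc("2^3+1") == 9
print("all regression tests pass")
```

Run the suite on every build, and a 10,000-line program written in your little language becomes a much less frightening prospect.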

An added feature of Aho's talk over his paper was three short presentations from members of a student team that produced PixelPower, a language which extends C to work with a particular graphics library. They shared some of the valuable insights from their project experience:

  • They designed their language to have big overlap with C. This way, they had an existing compiler that they understood well and could extend.
  • The team leader decided to focus the team, not try to make everyone happy. This is a huge lesson to learn as soon as you can, one the students in my last compiler course learned perhaps a bit too late. "Getting things done," Aho's students said, "is more important than getting along."
  • The team kept detailed notes of all their discussions and all their decisions. Documentation of process is in many ways much more important than documentation of code, which should be able to speak for itself. My latest team used a wiki for this purpose, which was a good idea they had early in the semester. If anything, they learned that they should have used it more frequently and more extensively.

One final note to close this long report. Aho had this to say about the success of his course:

If you make something a little better each semester, after a while it is pretty good. Through no fault of my own this course is very good now.

I think Aho's course is good precisely because he adopted this attitude about its design and implementation. This attitude serves us well when designing and implementing software, too: Many iterations. Lots of feedback. Collective ownership of the work product.

An hour well spent.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

March 22, 2010 4:35 PM

SIGCSE Day 2 -- Reimagining the First Course

[A transcript of the SIGCSE 2010 conference: Table of Contents]

The title of this panel looked awfully interesting, so I headed off to it despite not knowing just what it was about. It didn't occur to me until I saw the set of speakers at the table that it would be about an AP course! As I have written before, I don't deal much with AP, though most of my SIGCSE colleagues do. This session turned out to be a good way to spend ninety minutes, both as a window into some of the conference buzz and as a way to see what may be coming down the road in K-12 computer science in a few years.

What's up? A large and influential committee of folks from high schools, universities, and groups such as the ACM and NSF are designing a new course. It is intended as an alternative to the traditional CS1 course, not as a replacement. Rather than starting with programming or mathematics as the foundation of the course, the committee is first identifying a set of principles of computing and then designing a course to teach these principles. Panel leader Owen Astrachan said that they are engineering a course, given the national scale of the project and the complexity of creating something that works at lots of schools and for lots of students.

Later, I hope to discuss the seven big ideas and the seven essential practices of computational thinking that serve as the foundation for this course, but for now you should read them for yourself. At first blush, they seem like a reasonable effort to delineate what computing means in the world and thus what high school graduates these days should know about the technology that circumscribes their lives. They emphasize creativity, innovation, and connections across disciplines, all of which can be lost when we first teach students a programming language and whip out "Hello, World" and the Towers of Hanoi.

Universities have to be involved in the design and promotion of this new course because it is intended for advanced placement, and that means that it must earn college credit. Why does the AP angle matter? Right now, because it is the only high school CS course that counts at most universities. It turns out that advanced placement into a major matters less to many parents and HS students than the fact that the course carries university credit. Placement is the #1 reason that HS students take AP courses, but university credit is not too far behind.

For this reason, any new high school CS course that does not offer college credit will be hard to sell to any K-12 school district. (This is especially true in a context where even the existing AP CS is taught in only 7% of our high schools.) That's not too high a hurdle. At the university level, it is much easier to have an AP course approved for university or even major elective credit than it is to have a course approved for advanced placement in the major. So the panel encouraged university profs in the audience to do what they can at their institutions to prepare the way.

Someone on the panel may have mentioned the possibility of having a principles-based CS AP course count as a general education course. At my school we were successful a couple of years ago at having a CS course on simulation and modeling added as one of the courses that satisfies the "quantitative reasoning" requirement in our Liberal Arts Core. I wonder how successful we could be at having a course like the new course under development count for LAC credit. Given the current climate around our core, I doubt we could get a high school AP course to count, because it would not be a part of the shared experience our students have at the university.

The most surprising part of this panel was the vibe in the room. Proposals such as this one that tinker with the introductory course in CS usually draw a fair amount of skepticism and outright opposition. This one did not. The crowd seemed quite accepting, even when the panel turned its message into one of advocacy. They encouraged audience members to become advocates for this course and for AP CS more generally at their schools. They asked us not to tear down these efforts, but to join in and help make the course better. Finally, they asked us to join the College Board, the CS Teachers Association, and the ACM in presenting a united front to our universities, high schools, and state governments about the importance and role of computing in the K-12 curriculum. The audience seemed as if it was already on board.

In closing, there were two memorable quotes from the panel. First, Jan Cuny, currently a program officer for CISE at the National Science Foundation, addressed concern that all the talk these days about the "STEM disciplines" often leaves computing out of the explicit discussion:

There is a C in STEM. Nothing will happen in the S, the T, the E, or the M without the C.

I've been telling everyone at my university this for the last several years, and most are open to the broadening of the term when they are confronted with this truth.

Second, the front-runner for syllogism of the year is this gem from Owen Astrachan. Someone in the audience asked, "If this new course is not CS1, then is it CS0?" (CS0 is a common moniker for university courses taken by non-majors before they dive into the CS major-focused CS1 course.) Thus spake Owen:


     This course comes before CS1.
     0 is the only number less than 1.
Therefore, this course is CS0.

This was only half of Owen's answer, but it was the half that made me laugh.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 11, 2010 8:33 PM

SIGCSE Day One -- The Most Influential CS Ed Papers

[A transcript of the SIGCSE 2010 conference: Table of Contents]

This panel aimed to start the discussion of how we might identify which CS education papers have had the greatest influence on our practice of CS education. Each panelist produced a short list of candidates and also suggested criteria and principles that the community might use over time. Mike Clancy made explicit the idea that we should consider both papers that affect how we teach and papers that affect what we teach.

This is an interesting process that most areas of CS eventually approach. A few years ago, OOPSLA began selecting a paper from the OOPSLA conference ten years prior that had had the most influence on OO theory or practice. That turns out to be a nice starting criterion for selection: wait ten years so that we have some perspective on a body of work and an opportunity to gather data on the effects of the work. Most people seem to think that ten years is long enough to wait.

You can see the list of papers, books, and websites offered by the panelists on this page. The most impassioned proposal was Eric Roberts's tale of how much Rich Pattis's Karel the Robot affects Stanford's intro programming classes to this day, over thirty years after Rich first created Karel.

I was glad to see several papers by Eliot Soloway and his students on the list. Early in my career, Soloway had a big effect on how I thought about novice programmers, design, and programming patterns. My patterns work was also influenced strongly by Linn and Clancy's The Case for Case Studies of Programming Problems, though I do not think I have capitalized on that work as much as I could have.

Mark Guzdial based his presentation on just this idea: our discipline in general does not fully capitalize on great work that has come before. So he decided to nominate the most important papers, not the most influential. What papers should we be using to improve our theory and practice?

I know Anderson's work on cognitive tutors well, from the mid-1990s when I was preparing to move my AI research toward intelligent tutoring systems. The depth and breadth of that work is amazing.

Some of my favorite papers showed up as runners-up on various lists, including Gerald Weinberg's classic The Psychology of Programming. But I was especially thrilled when, in the post-panel discussion, Max Hailperin suggested Robert Floyd's Turing Award lecture, The Paradigms of Programming. I think this is one of the all-time great papers in CS history, with so many important ideas presented with such clarity. And, yes, I'm a fan.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 11, 2010 7:47 PM

SIGCSE Day One -- What Should Everyone Know about Computation?

[A transcript of the SIGCSE 2010 conference: Table of Contents]

This afternoon session was a nice follow up to the morning session, though it had a focus beyond the interaction of computation and the sciences: What should every liberally-educated college graduate know about computation? This almost surely is not the course we start CS majors with, or even the course we might teach scientists who will apply computing in a technical fashion. We hope that every student graduates with an understanding of certain ideas from history, literature, math, and science. What about computation?

Michael Goldweber made an even broader analogy in his introduction. In the 1800s, knowledge about farming was pervasive throughout society, even among non-farmers. This was important for people, even city folk, to understand the world in which they lived. Just as agriculture once dominated our culture, so does technology now. To understand the world in which they live, people these days need to understand computation.

Ultimately, I found this session disappointing. We heard a devil's advocate argument against teaching any sort of "computer literacy"; a proposal that we teach all students what amounts to an applied, hand-waving algorithms course; and a course that teaches abstraction in contexts that connects with students. There was nothing wrong with these talks -- they were all entertaining enough -- but they didn't shed much new light on what is a difficult question to answer.

Henry Walker did say a few things that resonated with me. One, he reminded us that there is a difference between learning about science and doing science. We need to be careful to design courses that do one of these well. Two, he tried to explain why computer science is the right discipline for teaching problem solving as a liberal art, such as how a computer program can illustrate the consequences of specific choices, the interaction of effects, and especially the precision with which we must use language to describe processes in a computer program. Walker was the most explicit of the panelists in treating programming as fundamental to what we offer the world.

In a way unlike many other disciplines, writing programs can affect how we think in other areas. A member of the audience pointed out that CS also fundamentally changes other disciplines by creating new methodologies that are unlike anything that had been practical before. His example was the way in which Google processes and translates language. Big data and parallel processing have turned the world of linguistics away from the Chomskyan approach and toward statistical models of understanding and generating language.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 11, 2010 7:04 PM

SIGCSE Day One -- Computation and The Sciences

[A transcript of the SIGCSE 2010 conference: Table of Contents]

I chose this session over a paper session on Compilers and Languages. I can always check those papers out in the proceedings to see if they offer any ideas I can borrow. This session connects back to my interests in the role of computing across all disciplines, and especially to my previous attendance at the SECANT workshops on CS and science in 2007 and 2008. I'm eager to hear about how other schools are integrating CS with other disciplines, doing data-intensive computing with their students, and helping teachers integrate CS into their work. Two of these talks fit this bill.

In the first, Ali Erkan talked about the need to prepare today's students to work on large projects that span several disciplines. This is more difficult than simply teaching programming languages, data structures, and algorithms. It requires students to have disciplinary expertise beyond CS, the ability to do "systems thinking", and the ability to translate problems and solutions across the cultural boundaries of the disciplines. A first step is to have students work on problems that are bigger than the students of any single discipline can solve. (Astrachan's Law marches on!)

Erkan then described an experiment at Ithaca College in which four courses run as parallel threads against a common data set of satellite imagery: ecology, CS, thermodynamics, and calculus. Students from any course can consult students in the other courses for explanations from those disciplines. Computer science students in a data structures course use the data not only to solve the problem but also to illustrate ideas, such as memory usage of depth-first and breadth-first searches of a grid of pixels. They can also apply more advanced ideas, such as data analysis techniques to smooth curves and generate 3D graphs.
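The depth-first versus breadth-first comparison makes a nice concrete exercise. Here is a rough sketch of how it might look, with the grid size and start cell chosen arbitrarily for illustration (this is my own sketch, not the Ithaca assignment): traverse a pixel grid with a queue or a stack and record how large the pending frontier grows.

```python
# Compare the peak memory footprint of the frontier when traversing a
# pixel grid breadth-first (queue) vs. depth-first (stack).

from collections import deque

def max_frontier(width, height, use_queue):
    """Traverse a width x height grid from (0, 0); return the peak frontier size."""
    frontier = deque([(0, 0)])
    seen = {(0, 0)}
    peak = 1
    while frontier:
        # popleft() gives FIFO order (BFS); pop() gives LIFO order (DFS).
        x, y = frontier.popleft() if use_queue else frontier.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in seen:
                seen.add((nx, ny))
                frontier.append((nx, ny))
                peak = max(peak, len(frontier))
    return peak

print("BFS peak frontier:", max_frontier(100, 100, use_queue=True))
print("DFS peak frontier:", max_frontier(100, 100, use_queue=False))
```

Running both on the same grid lets students see the shapes of the two traversals in the numbers, which is exactly the kind of idea the satellite-image data set can make vivid.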

I took away two neat points from this talk. The first was a link to The Cryosphere Today, a wonderful source of data on arctic and antarctic ice coverage for students to work with. The second was a reminder that writing programs to solve a problem or illustrate a data set helps students to understand the findings of other sciences. Raw data become real for them in writing and running their code.

In the second paper, Craig Struble described a three-day workshop for introducing computer science to high school science teachers. Struble and his colleagues at Marquette offered the workshop primarily for high school science teachers in southeast Wisconsin, building on the ideas described in A Novel Approach to K-12 CS Education: Linking Mathematics and Computer Science. The workshop had four kinds of sessions:

  • tools: science, simulation, probability, Python, and VPython
  • content: mathematics, physics, chemistry, and biology
  • outreach: computing careers, lesson planning
  • fun: CS unplugged activities, meals and other personal interaction with the HS teachers

This presentation echoed some of what we have been doing here. Wisconsin K-12 education presents the same challenge that we face in Iowa: there are very few CS courses in the middle- or high schools. The folks at Marquette decided to attack the challenge in the same way we have: introduce CS and the nebulous "computational thinking" through K-12 science classes. We are planning to offer a workshop for middle- and high school teachers. We are willing to reach an audience wider than science teachers and so will be showing teachers how to use Scratch to create simulations, to illustrate math concepts, and even to tell stories.

I am also wary of one of the things the Marquette group learned in follow-up with the teachers who attended their workshop. Most teachers liked it and learned a lot, but many were not able to incorporate what they learned into their classes. Some face time constraints from a prescribed curriculum, while others are limited by miscellaneous initiatives external to their curriculum that are foisted on them by their school. That is a serious concern for us as we try to help teachers do cool things with CS that change how they teach their usual material.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 10, 2010 9:15 PM

SIGCSE DAY 0 -- New Educators Roundtable

[A transcript of the SIGCSE 2010 conference: Table of Contents]

I spent most of my afternoon at the New Educators Roundtable, as one of the veteran faculty seeding and guiding conversation. The veteran team came from big schools like Carnegie-Mellon, Stanford, and Washington, medium-sized schools like Creighton and UNI, and a small liberal-arts college, Union. The new educators themselves came from this range of schools and more (one teaches at Milwaukee Area Technical College up the street) and were otherwise an even more mixed lot, ranging from undergrads to university instructors with several years' experience. The one thing they all have in common is a remarkable passion for teaching. They inspired this old-timer with their energy for 100-hour work weeks and their desire to do great things in the classroom.

The conversation ranged as far and as wide as the participants' backgrounds and experience. Workshop leaders Julie Zelenski and Dave Reed wisely planned not to impose much structure on the workshop, instead letting the participants drive the conversation with questions and observations. We elders interjected with stories and observations of our own -- and even a question or two of our own, which let the young ones know that they'll still be learning the craft of teaching when they get old like us.

I did end up with one new item for my list of books to read: Mindset, by Carol Dweck. It came up in a discussion about how to help students overcome an unhealthy focus on grades, which eventually turned into a discussion about student motivation and attitudes about what it takes to succeed in CS. As one elder summarized nicely, "It's not about talent. It's about work."


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

March 10, 2010 8:35 PM

SIGCSE DAY 0 -- Media Computation Workshop

[A transcript of the SIGCSE 2010 conference: Table of Contents]

I headed to SIGCSE a day early this year in order to participate in a couple of workshops. The first draw was Mark Guzdial's and Barbara Ericson's workshop using media computation to teach introductory computing to both CS majors and non-majors. I have long been a fan of this work but have never seen them describe it. This seemed like a great chance to learn a little from first principles and also to hear about recent developments in the media comp community.

Because I taught a CS1 course in Java using media comp four years ago, other media comp old-timers tagged me as one of their own. They decided, with Mark's blessing, to run a parallel morning session with the goal of pooling experience and producing a resource of value to the community.

When the moment came for the old-timers to break out on their own, I packed up my laptop, stood to leave -- and stopped. I felt like spending the morning as a beginner. This was not an entirely irrational decision. First, while I have done Java media comp, I have never worked with the original Python materials or the JES programming environment students use to do media comp in Python. Second, I wanted to see Mark present the material -- material he has spent years developing and for which he has great passion. I love to watch master teachers in action. Third, I wanted to play with code!

Throughout the morning, I diddled in JES with Python code to manipulate images, doing things I've done many times in Java. It was great fun. Along the way, I picked up a few nice quotes, ideas, and background facts:

  • Mark: "For liberal arts students and business students, the computer is not a tool of calculation but a tool of communication."

  • The media comp data structures book is built largely on explaining the technology needed to create the wildebeest stampede in The Lion King. (Check out this analysis, which contains a description of the scene in the section "Building the Perfect Wildebeests".)

  • We saw code that creates a grayscale version of an image attuned to human perception. The value used for each color in a pixel weights its original values as 0.299*red + 0.587*green + 0.114*blue. This formula reinforces the idea that there are an infinite number of weightings we can use to create grayscale. There are, of course, only a finite number of grayscale versions of an image, though that number is very large: 256 raised to a power equal to the number of pixels in the image.

  • After creating several Python methods that modify an image, non-majors eventually bump into the need to return a value, often a new image. Deciding when a function should return a value can be tough, especially for non-CS folks. Mark uses this rule of thumb to get them started: "If you make an image in the method, return it."

  • Mark and Barb use "The Making of The Matrix" to take the idea of chromakey beyond the example everyone seems to know: TV weather forecasters.

  • Using mirroring to "fix" a flawed picture leads to a really interesting liberal arts discussion: How do you know when a picture is fake? This is a concept that every person needs to understand these days, and understanding the computations that can modify an image enables us to understand the issues at a much deeper level.

  • Mark showed an idea proposed to him by students at one of his workshops for minority high school boys: when negating an image, change the usual 255 upper bound to something else, say, 180. This forces many of the resulting values to 0 and behaves like a strange posterize function!
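
The perceptual grayscale weighting mentioned above is easy to try outside JES. Here is a minimal sketch in plain Python (not the JES API), assuming the standard luminance weights (0.299, 0.587, 0.114 for red, green, blue) and representing pixels as simple (red, green, blue) tuples:

```python
def grayscale(pixels):
    """Convert (r, g, b) pixel tuples to perceptually weighted gray pixels.

    Green contributes the most to perceived brightness, blue the least,
    matching the weights used in the workshop example.
    """
    gray = []
    for r, g, b in pixels:
        luminance = int(0.299 * r + 0.587 * g + 0.114 * b)
        gray.append((luminance, luminance, luminance))
    return gray

# A pure red, pure green, and pure blue pixel map to different grays:
print(grayscale([(255, 0, 0), (0, 255, 0), (0, 0, 255)]))
# → [(76, 76, 76), (149, 149, 149), (29, 29, 29)]
```

The three pure colors come out as three visibly different grays, which is exactly the point: green dominates our perception of brightness.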

I also learned about Susan Menzel's work at Indiana University to port media computation to Scheme. This is the second such project I've heard of, after Sam Rebelsky's work at Grinnell College connecting Scheme to Gimp.

Late in the morning, we moved on to sound. Mark demonstrated some wonderful tools for playing with and looking at sound. He whistled, sang, hummed, and played various instruments into his laptop's microphone, and using their MediaTools (written in Squeak) we could see the different mixes of tones available in the different sounds. These simple viewers enable us to see that different instruments produce their own patterns of sounds. As a relative illiterate in music, I only today understood how it is that different musical instruments can produce sounds of such varied character.

The best quote of the audio portion of the morning was, "Your ears are all about logarithms." The halving and doubling of frequencies across octaves in our note systems is not an artifact of culture but an artifact of how the human ear works!
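
To see the logarithms at work, consider equal temperament, where each of the twelve semitones in an octave multiplies the frequency by the same constant ratio, 2^(1/12), so that twelve steps exactly double it. A quick sketch, using the common reference pitch A4 = 440 Hz:

```python
# Each of the 12 semitones in an octave multiplies frequency by the
# same constant ratio, so 12 steps double it: pitch is logarithmic.
SEMITONE = 2 ** (1 / 12)

def note_frequency(semitones_from_a4, a4=440.0):
    """Frequency of the pitch a given number of semitones above (or below) A4."""
    return a4 * SEMITONE ** semitones_from_a4

# The note A across four octaves -- each one a doubling:
for octave in range(-1, 3):
    print(f"A{4 + octave}: {note_frequency(12 * octave):.1f} Hz")
```

Equal steps in what we *hear* correspond to equal ratios, not equal differences, in frequency -- logarithms, just as Mark said.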

This was an all-day workshop, but I also had a role as a sage elder at the New Educators Roundtable in the afternoon, so I had to duck out for a few hours beginning with lunch. I missed out on several cool presentations, including advanced image processing ideas such as steganography and embossing, but did get back in time to hear how people are now using media computation to teach data structures ideas such as linked lists and graphs. Even with a gap in the day, this workshop was a lot of fun, and valuable as we consider expanding my department's efforts to teach computing to humanities students.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 10, 2010 7:40 PM

Notes on SIGCSE 2010: Table of Contents

The set of entries cataloged here records some of my thoughts and experiences at SIGCSE 2010, in Milwaukee, Wisconsin, March 10-13. I'll update it as I post new essays about the conference.

Primary entries:

Ancillary entries:


Posted by Eugene Wallingford | Permalink | Categories: Computing, Running, Software Development, Teaching and Learning

March 05, 2010 9:21 PM

Mastering Tools and Improving Process

Today, a student told me that he doesn't copy and paste code. If he wants to reuse code verbatim, he requires himself to type it from scratch, character by character. This way, he forces himself to confront the real cost of duplication right away. This may motivate him to refactor as soon as he can, or to reconsider copying the code at all and write something new. In any case, he has paid a price for copying and so has to take it seriously.

The human mind is wonderfully creative! I'm not sure I could make this my practice (I use duplication tactically), but it solves a very real problem and helps to make him an even better programmer. When our tools make it too easy to do something that can harm us -- such as copy and paste with wild abandon, no thought of the future pain it will cause us -- a different process can restore some balance to the world.

The interplay between tools and process came to mind as I read Clive Thompson's Garry Kasparov, cyborg this afternoon. Last month, I read the same New York Review of Books essay by chess grandmaster Garry Kasparov, The Chess Master and the Computer, that prompted Thompson's essay. When I read Kasparov, I was drawn in by his analysis of what it takes for a human to succeed, as contrasted with what makes computers good at chess:

The moment I became the youngest world chess champion in history at the age of twenty-two in 1985, I began receiving endless questions about the secret of my success and the nature of my talent. ... I soon realized that my answers were disappointing. I didn't eat anything special. I worked hard because my mother had taught me to. My memory was good, but hardly photographic. ...

Garry Kasparov

Kasparov understood that, talent or no talent, success was a function of working and learning:

There is little doubt that different people are blessed with different amounts of cognitive gifts such as long-term memory and the visuospatial skills chess players are said to employ. One of the reasons chess is an "unparalleled laboratory" and a "unique nexus" is that it demands high performance from so many of the brain's functions. Where so many of these investigations fail on a practical level is by not recognizing the importance of the process of learning and playing chess. The ability to work hard for days on end without losing focus is a talent. The ability to keep absorbing new information after many hours of study is a talent. Programming yourself by analyzing your decision-making outcomes and processes can improve results much the way that a smarter chess algorithm will play better than another running on the same computer. We might not be able to change our hardware, but we can definitely upgrade our software.

"Programming yourself" and "upgrading our software" -- what a great way to describe how it is that so many people succeed by working hard to change what they know and what they do.

While I focused on the individual element in Kasparov's story, Thompson focused on the social side: how can we "program" a system larger than a single player? He relates one of Kasparov's stories, about a chess competition in which humans were allowed to use computers to augment their analysis. Several groups of strong grandmasters entered the competition, some using several computers at the same time. Thompson then quotes this passage from Kasparov:

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and "coaching" their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

Thompson sees this "algorithm" as an insight into how to succeed in a world that consists increasingly of people and machines working together:

[S]erious rewards accrue to those who figure out the best way to use thought-enhancing software. ... The process matters as much as the software itself.

I see these two stories -- Kasparov the individual laboring long and hard to become great, and "weak human + machine + better process" conquering all -- as complements to one another, and related back to my student's decision not to copy and paste code. We succeed by mastering our tools and by caring about our work processes enough to make them better in whatever ways we can.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

March 02, 2010 7:05 PM

Working on Advice for New Faculty

SIGCSE 2010 logo

Julie Zelenski and Dave Reed have invited me to serve as a "sage elder" at the New Educators Roundtable, a pre-conference workshop at SIGCSE 2010. The roundtable is "designed to mentor college and university faculty who are new to teaching". This must surely be a mistake! I may well be an elder after all these years stalking the front of a classroom, but sage? Ha. In so many ways, I feel like a beginner every time I step to the front of a class.

Experience does teach us lessons, though, so even if I still have a lot to learn and incorporate into how I teach, I probably do have some lessons I can share with people who are just getting started. If nothing else, I can talk with new faculty about some of the mistakes I have made. Each person must live his or her own experiences, but sometimes we can plant a seed in people's minds that will germinate later, when the time is right for each person. (That sounds a lot like all teaching, actually.)

The website for the workshop will ultimately contain information from the workshop sessions themselves. To start, it will have short biographies of the elders. Actually, "biography" is a bit too formal a name for it. The header on the draft web page currently says "Bio / blurb / career highlights / anecdotes / historical fiction". Here is the retrospective falsification of my own career that I submitted:

In the fourth grade, my favorite teacher of all time told me that I would never be a teacher; I was too impatient with others. For that and many other reasons, I never expected to become a teacher when I grew up. As a graduate student doing AI at Michigan State, I was assigned to teach a few courses. I did fine, I think, but even then I planned to move into industry as a researcher and developer. Somehow, I ended up at UNI, a medium-sized public "teaching university". I've been teaching classes here since 1992. My largest section ever contained 53 students; the smallest, 4.

That is pretty much true, at least as I remember it. I included the size of my largest and smallest sections ever because many new faculty teach at big universities and will face massive CS1 and CS2 sections of several hundred students at a time. The advice I have to offer may not be as helpful in that context, so I want the people who attend the workshop to know my context. Several of my co-panelists will be able to speak more directly to those attendees. On the flip side, I have taught a long list of different courses over the years, so I can connect my experiences with many different content areas and types of course.

We elders were also asked to submit a list of "things you wish you had known" when we started teaching, as a way to jump start our thinking, as well as that of prospective attendees. Here is a list that I brainstormed:

  • Be honest. Students value honesty.

  • Even the best students will fall down occasionally. When they do, it doesn't mean there is something wrong with them, or with you.

  • Give feedback on assignments promptly.

  • The previous advice is a specific example of something more general: most students don't handle uncertainty well. Even students who "get it" might think they do not.

  • Setting draconian standards and policies does more harm to your learning environment than it buys you.

  • Instead, set reasonable, firm, and challenging standards. Students appreciate that they are expected to accomplish something meaningful.

  • A lecture or classroom activity is only as good as what it helps your students do.

Each of these could use some elaboration ("Um, you thought being dishonest with students was a good thing?"), but as a start they reflect some of what I've learned.

As I prepare further for the workshop, I plan to read through the archives of the Teaching and Learning category of this blog. There are plenty of things I have learned and forgotten over the years. I hope that I wrote a few of those things down...

I think being on the New Educators' Roundtable might be as valuable for this elder as it is for the new educators. (I repeat: That sounds a lot like all teaching, actually.)


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 23, 2010 6:34 PM

Strength Later, Weakness Now

One of the things I've always liked about Scheme is that the procedures I write are indistinguishable from the primitive procedures of the language. All procedures are applied prefix and named using the same binding mechanism. This similarity extends beyond simple procedure definitions to other features such as variable arity and special forms, which aren't procedures at all.

All this makes Scheme an especially handy tool for creating domain-specific languages. I can create a whole new language for the domain I'm working in, say, language processing or financial accounting, simply by defining the procedures and forms that embody the domain's concepts. When I write programs using these procedures, they mix seamlessly with Scheme's primitive procedures and forms to create a nearly friction-less writing -- and reading -- experience.

I've always touted this feature of the language to my students, who usually learn Scheme as a tool for writing interpreters and other simple language processors in our programming languages course. However, over the last couple of offerings of the course, I am beginning to realize that this strength is a weakness for many beginners.

At the outset of their journey into functional programming, students have so many things to think about (language syntax, new programming principles, idioms to accomplish tasks they understand well, and so on) that a lack of friction may actually hurt them. They have trouble finding the boundary between the language Scheme and the language we build on top of it. For example, when we first began to implement recursive procedures, I gave students a procedure sequence:

    (define sequence
      (lambda (start finish)
        (if (> start finish)
            '()
            (cons start (sequence (+ start 1) finish)))))

as a simple example and to use for testing code that processes lists of numbers. For weeks afterwards, I had students e-mailing me because they wrote code that referred to sequence: "Why am I getting errors?" Well, because it's not a primitive and you don't define it. "But we use it in class all the time." Well, I define it each time I need it. "Oh, I guess I forgot." This sequence has re-played itself many times already this semester, with several other pieces of code.

I suppose you could say the students ought to be more disciplined in their definition and use of code, or that I ought to do a better job of teaching them how to write and reuse code, or simply that the students need better memories. One or all of these may be true, but I think there is more happening here. A language with no friction between primitives and user-defined code places one more burden on students who are already juggling a lot of new ideas in their minds.

As students become more experienced with the language and the new style of programming, they have a better chance to appreciate the value of seamless layers of code as they grow a program. They begin to notice that the lack of friction helps them, as they don't have to slow down to handle special cases, or to change how a piece of code works when they decide to add an abstraction between the code and its clients. Whereas before the lack of friction slowed them down while they pondered boundaries and looked up primitives, now it helps them move faster.

This phenomenon is not peculiar to functional programming or Scheme. I think it is also true of OOP and Java. Back when Java was first becoming part of CS 1 at many schools, many of my colleagues objected to the use of home-grown packages and libraries. The primary argument was that students would not be learning to write "real Java" (which is so wrong!) and that their code would not be as portable (which is true). In retrospect, I think a more compelling case can be made that the use of homegrown packages might interfere with students cognitively as they learn the language and the boundaries around it. There are elements of this in both of those objections, but I now think of it as the crux of the issue.

This phenomenon is also not a necessary condition to learning functional programming or Scheme. A number of schools use Scheme in their first-year courses and do just fine. Perhaps instructors at these schools have figured out ways to avoid this problem entirely, or perhaps they rely on some disciplines to help students work around it. I may need to learn something from them.

I have noticed my students having this difficulty the last two times we've offered this course and not nearly as much before, so perhaps our students are changing. On the other hand, maybe this reflects an evolution in my own recognition and understanding of the issue. In either case, my job is pretty much the same: find ways to help students succeed. All suggestions are welcome.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

February 22, 2010 6:56 PM

I'll Do Homework, But Only for a Grade

In the locker room one morning last week, I overheard two students talking about their course work. One of the guys eventually got himself pretty worked up while talking about one professor, who apparently gives tough exams, and exclaimed, "We worked two and a half hours on that homework, and he didn't even grade it!"

Yesterday, I was sitting with my daughters while they did some school work. One of them casually commented, "We all stopped putting too much effort into Teacher Smith's homework when we figured out s/he never grades it."

I know my daughter's situation up close and so know what she means. She tends to go well beyond the call of duty on her assignments, in large part because she is in search of a perfect grade. With time an exceedingly scarce and valuable resource, she faces an optimization problem. It turns out she can put in less effort on her homework than she ordinarily does and still do fine on her test. With no prospect of a higher grade from putting more time into the assignment to pull her along, she is willing to economize a bit and spend her time elsewhere.

Maybe that's just what the college student meant when I overheard him that morning. Perhaps he is actually overinvesting in his homework relative to its value for learning, because he seeks a higher grade on the homework component of the course. That's not the impression I got from my unintentional eavesdropping, though. I left the locker room thinking that he sees value in doing the homework only if it is graded, only if it contributes to his course grade.

This is the impression too many college students give their instructors. If it doesn't "count", why do it?

Maybe I was like that in college, too. I know that grades were important to me, and as a double major trying to graduate in four years after spending much of my freshman year majoring in something else, I was taking a heavy class load. Time was at a premium. Who has time or energy to do things that don't count?

Even if I did not understand then, I know now that the practice itself is an invaluable part of how I learned. Without lots of practice writing code, we don't even learn the surface details of our language, such as syntax and idiom, let alone reach a deep understanding of solving problems. In the more practical terms expressed by the student in the locker room, without lots of practice, most every exam will seem too long, too difficult, and too harshly graded. That prof of his has found a way to get the student to invest time in learning. What a gift!

We cannot let the professor off the hook, though. If s/he tells the class that the assignment will be graded, or even simply gives students the impression that it "counts for something", then not to grade the assignment is a deception. Such a tactic is justified only in exceptional circumstances, and not only on moral grounds. As Teacher Smith has surely learned by now, students are smart enough not to fall for a lie too many times before they direct their energies elsewhere.

In general, though, homework is a gift: a chance to learn under controlled conditions. I'm pretty sure that students don't see it this way. This reminds me of a conversation I had with my colleague Mark Jacobson a couple of weeks ago. We were discussing the relative abundance and paucity of a grateful attitude among faculty in general. He recalled that, in his study of the martial arts, he had encountered two words for "thank you". One, suki, from the Japanese martial arts, means to see events in our lives as opportunity or gift. Another, sugohasameeda, comes from Korean Tae Kwon Do and is used to say, "Thank you for the workout".

Suki and sugohasameeda are related. One expresses suki when things do not go the way we wish, such as when we have a flat tire or when a work assignment doesn't match our desires. One expresses sugohasameeda in gratitude to one's teacher for the challenging and painful work that makes us grow, such as workouts that demand our all. I see elements of both in the homework we are assigned. Sugohasameeda seems to be spot-on with homework, yet suki comes into play, too, in cases such as the instructor going counter to our expectations and not grading an assignment.

I do not find myself in the role of student as much these days, but I can see so many ways that I can improve my own sense of gratefulness. I seem to live sugohasameeda more naturally these days, though incompletely. I am far too often lacking in suki. My daily life would be more peaceful and whole if I could recognize the opportunity to grow through undesired events with gratitude.

One final recollection. Soon after taking my current job, I met an older gentleman who had worked in a factory for 30+ years. He asked where I worked, and when I said, "I teach at the university", he said, "That beats workin' for a livin'". My first reaction was akin to amused indignation. He obviously didn't know anything about what my job was like.

Later I realized that there was a yin to that yang. I am grateful to have a career in which I can do so many cool things, explore ideas whenever they call to me, and work with students who learn and help me to learn -- to do things I love every day. So, yeah, I guess my job does beat "workin' for a livin'".

I just wish more students would take their homework seriously.

~~~~

My colleague Mark also managed to connect his ideas about gratitude from the martial arts to the 23rd Psalm of the Christian Bible. The green pastures to which it famously refers are not about having everything exactly as I want it, but seeing all things as they are -- as gift, as opportunity, as suki. I continue to learn from him.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

February 21, 2010 7:32 PM

Typos and Uncertainty

Last week, a student asked me why one of my examples in the programming assignment said this:

     > (insertion-sort > '(1 4 2 5 3 6 4 7))
     (1 2 3 4 4 5 6 7)

Shouldn't the answer be (7 6 5 4 4 3 2 1)? Or was there something that he didn't understand?

At first, I thought his question was a polite way of pointing out my typo, but as we talked it became clear that he felt some real uncertainty about the answer.

How? Surely it was obvious that sorting the list in descending order should produce the second list. This seemed all the more obvious because the previous example on the same page sorted the same input with < and had the correct output! What could he be thinking?

Sometimes, I ask myself such things rhetorically, out of wonder or frustration. Over the years, though, I have learned to take these questions seriously, because they are the key to understanding what's going on with my students.

In the Case of the Backward Bracket, I recognize a lesson I have learned before: Even the smallest error or inconsistency can create major doubt in the mind of a novice. Originally, I wrote "fragile novice", and it's true that some novices are more fragile than others. But to be a beginner is by its nature to be fragile. Our minds are still learning to see, so when we see something that is wrong, we are willing to believe that something is wrong with us.

Learning functional programming and Scheme puts my students in the position of facing problems they feel confident solving -- if only they could use their favorite programming style and language. Right now, though, they struggle with a new way of seeing, and this creates uncertainty for them. It makes them tentative, maybe even scared. They see my typo and wonder what it is they don't get.

This lesson means at least two things to me as a teacher. First, I need to be extra careful to weed out mistakes in what I tell, show, and ask them. I want them to be able to focus as much as possible on the necessary complexity in the problems, not on distractions that result from my fingerfehlers. Second, I need to keep my eyes open for moments when this kind of uncertainty and fear begin to dominate my students' minds, whether in class or working on their own. By recognizing the situation early enough and intervening, carefully, I may be able to help them stay on a productive path toward understanding.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 19, 2010 4:33 PM

Thoughts on How to Design

Smalltalk Best Practice Patterns is one of my favorite programming books. In it, Kent Beck records some of the design and implementation patterns that he has observed in Smalltalk systems over the years. What makes the book most valuable, though, is that most of its patterns apply beyond Smalltalk, to other object-oriented programming languages and even to non-OO languages. That's because it is really about how we think about and design our programs.

Kent Beck

Kent's latest design endeavor is what he calls the Responsive Design Project, and he reports some of this thinking so far in a recent blog entry. The entry includes a number of short patterns of design. These are not patterns that show up in designs, but patterns of thinking that help give rise to designs. Being hip deep in teaching functional design style to students whose experience is imperative programming, many of Kent's lessons hit home for me.

Inside or Outside. Change the interface or the implementation but not both at the same time.

Winnie the Pooh, a bear of very little brain

This is a classic that bears repeating. It's tempting to start making big changes to even a small piece of code, but whenever we conflate changes to interface and implementation, we risk creating more complexity than our small brains can manage.

Isolate Changes. Before making a change, isolate the area to be changed from the rest of the system so you can change an entire element at a time. For example, before changing a part of a procedure, extract the area to be changed into its own procedure. Make the change, then inline the changed sub-procedure if appropriate.

This one is beyond the ken of most intermediate-level students, so it doesn't show up in my courses often. When we confine a change to a small box, we control the complexity of the change and the range of its effect. This technique can even be used to control software evolution at a higher level.
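A minimal Python sketch of the three steps, with an invoice example and names that are mine, purely illustrative:

```python
# Step 1: extract the area to be changed into its own procedure,
# isolating it from the rest of the system.
def invoice_total(prices):
    subtotal = sum(prices)
    return subtotal - discount(subtotal)

# Step 2: change the isolated element as a whole, in one place.
def discount(subtotal):
    # Old rule: flat 10% off.  New rule: 10% off only above $100.
    return subtotal * 0.10 if subtotal > 100 else 0

# Step 3: if the helper earns no permanent keep, inline it again.
assert invoice_total([40, 50]) == 90        # under $100: no discount
assert invoice_total([80, 60]) == 126.0     # 10% off $140
```

The point is not the discount rule but the discipline: the change happens inside a small box whose boundary was drawn before the change began.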

Exploit Symmetries. Divide similar elements into identical parts and different parts.

Many beginning programmers find it counter-intuitive that the best way to eliminate duplicated code is to increase the level of duplication, maximizing the repetition to the point that it can be factored out in the cleanest and sharpest way. I have come to know this pattern well but sense that its value runs much deeper than the uses to which I've put it thus far.
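A small Python illustration of what "maximizing the repetition" can look like; the report example is mine, not Kent's. First rewrite two similar functions so their similar parts are literally identical and their differences are isolated as data; then the shared part factors out cleanly:

```python
# Two nearly-duplicate functions: similar structure, small differences.
def plain_report(items):
    lines = [f"{name}: {qty}" for name, qty in items]
    return "\n".join(lines)

def html_report(items):
    lines = [f"<li>{name}: {qty}</li>" for name, qty in items]
    return "<ul>" + "".join(lines) + "</ul>"

# Exploit the symmetry: the identical part (format each item, join,
# wrap) becomes one function; the different parts become arguments.
def report(items, item_fmt, sep, prefix="", suffix=""):
    lines = [item_fmt.format(name=n, qty=q) for n, q in items]
    return prefix + sep.join(lines) + suffix

def plain_report2(items):
    return report(items, "{name}: {qty}", "\n")

def html_report2(items):
    return report(items, "<li>{name}: {qty}</li>", "", "<ul>", "</ul>")
```

Only after the duplication was made as regular as possible did the factoring become obvious.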

Then there is the seemingly contradictory pair

Cultivate Confidence. Master your tools. Your feeling of mastery will improve your cognition.

and

Cultivate Humility. Try tools or techniques you aren't comfortable with. Being aware of your limitations will improve your effectiveness.

Of course, the practices themselves aren't contradictory at all, though the notion that one can be confident and humble at the same time might seem to be. But even that holds no contradiction, because it's all about the edge between mastery and learning. I often talk about these patterns in my programming classes, if only in the hope that a student who is already starting to sense the tension between hubris and humility will know that it's okay to walk the line.

Finally, my nominee for best new pattern name:

Both. Faced with design alternatives without a clear winner, do it every way. Just coding each alternative for an hour is more productive than arguing for days about which is better in theory, and a lot more satisfying.

You may have heard the adage, "Listen to your code", which is often attributed to Kent or to Ward Cunningham. This pattern goes one step beyond. Create the code that can tell you what you need to hear. Time spent talking about what code might do is often much less productive than simply writing the code and finding out directly.

Early in his essay, Kent expresses the lesson that summarizes much of what follows as

Our illusion of control over software design is a dangerous conceit best abandoned.

He says this lesson "disturbs and excites me". I guess I'm not too disturbed by this notion, because feeling out of control when working in a new domain or style or language has become routine for me. I often feel as if I'm stumbling around in the dark while a program grows into just what it needs to be, and then I see it. Then I feel like I'm in control. I knew where I was going all the time.

In my role as a teacher, this pattern holds great danger. It is easy after the fact to walk into a classroom and expound at length about a design or a program as if I understood it all along. Students sometimes think that they should feel in control all the time, too, and when they don't they become discouraged or scared. But the controlled re-telling of the story is a sham; I was no more in control while writing my program than they will be when they write theirs.

What excites me most about this pattern is that it lifts a burden from our backs that we usually didn't know we were carrying. Once we get it, we are able to move on to the real business of writing and learning, confident in the knowledge that we'll feel out of control much of the time.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 15, 2010 10:13 PM

Luck, Embracing Failure, and State of Mind

This morning, Kevlin Henney tweeted:

Being lucky is not generally a matter of luck RT @gregyoung: http://is.gd/8qdIk

That shortened URL points to an article called, Be Lucky: It's an Easy Skill to Learn. The author, psychologist Richard Wiseman, reports some of his findings after a decade studying people who consider themselves lucky or unlucky. Not surprisingly, one's state of mind has as much to do with perception of luck as any events in the observable world. He has identified three common threads that anyone can use to become luckier:

  • Trust your intuition.
  • Use variety and pseudorandom behavior to create opportunities for unexpected benefits.
  • See the positive in each event.

One of the things that struck me about this article was the connection of unlucky people to tension.

... unlucky people are generally much more tense than lucky people, and research has shown that anxiety disrupts people's ability to notice the unexpected.

Tension relates directly to all three of the above bullets. Tense people tend to overthink situations, looking to optimize some metric, and thus quash their gut instincts. They tend to seek routine as a way to minimize distraction and uncertainty, which cause them to miss opportunities. And their tension tends to cause them to see the negative in any event that does not match their desired optimum. Perhaps the key to luck is nothing more than relaxation!

When I think of times I feel unlucky -- and I must sheepishly admit that this happens all too often -- I can see the tension that underlies Wiseman's results. But for me this usually manifests itself as frustration. This thought, in turn, reminded me of a blog entry I wrote a year ago on embracing failure. In it, I considered Rich Pattis's observation about how hard computer science must feel to beginners, because it is a discipline learned almost wholly by failure. Not just occasional failure, but a steady stream of failures ranging from syntax errors to misunderstanding complex abstractions. Succeeding in CS requires a certain mindset, one that embraces, fights through, or otherwise copes with failure in a constructive way. Some of us embrace it with gusto, seeing failure as a challenge to surmount, not a comment on our value or skills.

I wonder now if there might be a connection between seeing oneself as lucky and embracing failure. Lucky people find positive in the negative events; successful programmers see valuable information in error messages and are empowered to succeed. Lucky people seek out variety and the opportunities it offers; successful programmers try out new techniques, patterns, and languages, not because they seek out failure but because they seek opportunities to learn. Lucky people respect their hunches; successful programmers have the hubris to believe they can see their way to a working program.

If relaxation is the key to removing tension, and removing tension is the key to being lucky, and being lucky is a lot like being a successful programmer, then perhaps the key to succeeding as a programmer is nothing more than relaxation! Yes, that's a stretch, but there is something there.

One last connection. There have been a couple of articles in the popular press recently about an increase in the prevalence of cheating, especially in CS courses. This has led to discussions of cheating in a number of places CS faculty hang out. I imagine there is a close connection between feeling frustrated and tense and feeling like one needs to cheat to succeed. If we can lower the level of tension in our classrooms by lowering the level of frustration, there may be a way for us to stem the growing tide of students cheating. The broader the audience we have in any given classroom, the harder this is to achieve. But we do have tools available to us, including having our students working in domains that give more feedback more visibly, sooner, and more frequently.

One of my favorite comments in all the on-line discussion of cheating in CS is Comment 1 to Mark Guzdial's blog entry, by Steve Tate:

About a decade ago I was chatting with some high school teachers when my university hosted a programming contest for high school kids. One teacher pointed out that her best CS students were those that also played either music or golf -- her theory was that they were used to tasks where you are really bad at first, but you persevere and overcome that. But you have to be able to accept that you'll really stink at it for a good long while.

This struck me as a neat way to make a connection between learning to program and learning music or sports. I had forgotten about the discussion of "meaningful failure" in my own entry... Tate explains the connection succinctly.

Whatever the connections among tension, fear of failure, cheating, and luck, we need to find ways to help students and novice developers learn how to take control of their own destiny -- even if it is only in helping them cultivate their own sense of good luck.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

January 30, 2010 6:02 PM

The Evolution of the Textbook

Over the last week, there has been a long thread on SIGCSE listserv about writing textbooks. Most interesting to me was Kim Bruce's note, "Thinking about electronic books". Kim noted Apple's announcement of the iPad, which comes with software for reading electronic books. Having written dead-tree books before, he wondered how the evolution of technology might help us to enhance our students' learning experience.

If we can provide books on a general-purpose computer, we have so many options available. Kim mentions one: replacing graphics with animations. Rather than seeing a static picture of the state of some computation, they could watch the computation unfold, with a direct connection to the code that produces it. This offers a huge improvement in the way that students can experience the ideas we want them to learn. You can see this difference in examples Kim posted of his printed textbook and his on-line lecture notes.

Right now, authors face a challenging practical obstacle: the lack of a standard platform. If a book requires features specific, say, to an iPad or to Windows, then its audience is limited. Even if it doesn't, a particular computer may not support some near-standard technology, such as Flash on the Apple products, leaving users of those products unable to access the book on their devices. It would be nice to have an authoring system that runs across platforms, transparently, so that writers can focus on what they want to write, not on compatibility issues.

As Kim points out, we can accomplish some of this already on the web, writing for a browser. This isn't good enough, though. Reading long-ish documents at a desktop computer through a browser changes the reading experience in important ways. Our eyes -- and the rest of our bodies -- need something more.

With the evolution of handheld devices toward providing the full computational power we see on the desktop, our ability to write cross-platform books grows. The folks working on Squeak, Croquet, Sophie, and other spin-off technologies have this in mind. They are creating authoring systems that run across platforms and that rely less and less on underlying OS and application software for support.

As we think about how to expand the book-reading experience using new technologies, we can also see a devolution from the other side. Fifteen years ago, I spent a few years thinking about intelligent tutoring systems (ITS). My work on knowledge-based systems in domains such as engineering and business had begun to drift toward instruction. I hoped that we could use what we'd learned about knowledge representation and generic problem-solving patterns to build programs that could help people learn. These systems would encode knowledge from expert teachers in much the way that our earlier systems encoded knowledge from expert tax accountants, lawyers, and engineers.

Intelligent tutoring systems come at learning from the AI side of things, but the goal is the same as that of textbooks: to help people learn. AI promised something more dynamic than what we could accomplish on the printed page. I have not continued in that line of work, but I keep tabs on the ITS community to see what sort of progress they have been making. As with much of AI, the loftiest goals we had when we started are now grounded better in pragmatics, but the goal remains. I think Mark Guzdial has hit upon the key idea in his article Beat the book, not the teacher. The goal of AI systems should not be (at least immediately) to improve upon the performance of the best human teachers, or even to match it; the goal should be to improve upon the performance of the books we ask our students to read. This idea is the same one that Kim Bruce encourages us to consider.

As our technology evolves toward reasonably compact mobile devices with full computational power and high-fidelity displays, we have the ability to evolve how and what we write toward the dream of a dynabook. We should keep in mind that, with computation and computer programming, we are creating a new medium. Ultimately, how and what we write may not look all that much like a traditional book! It may be something new, something we haven't thought of yet. There is no reason to limit ourselves to producing the page-turning books that have served us so well for the last few centuries. That said, a great way to move forward is to try to evolve our books to see where our new technology can lead us, and to find out where we come up short.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 29, 2010 7:01 PM

Diverse Thinking, Narrative, Journalism, and Software

A friend sent me a link to a New York Times book review, Odysseus Engages in Spin, Heroically, by Michiko Kakutani. My friend and I both enjoy the intersection of different disciplines and people who cross boundaries. The article reviews "The Lost Books of the Odyssey", a recent novel Kakutani calls "a series of jazzy, post-modernist variations on 'The Odyssey'" and "an ingeniously Borgesian novel that's witty, playful, moving and tirelessly inventive". Were the book written by a classicist, we might simply add the book to our to-read list and move on, but it's not. Its author, Zachary Mason, is a computer scientist specializing in artificial intelligence.

I'm always glad to see examples of fellow computer scientists with interests and accomplishments in the humanities. Just as humanists bring a fresh perspective when they come to computer science, so do computer scientists bring something different when they work in the humanities. Mason's background in AI could well contribute to how he approaches Odysseus's narrative. Writing programs that make it possible for computers to understand or tell stories causes the programmer to think differently about understanding and telling stories more generally. Perhaps this experience is what enabled Mason to "[pose] new questions to the reader about art and originality and the nature of storytelling".

Writing a program to do any task has the potential to teach us about that task at a deeper level. This is true of mundane tasks, for which we often find our algorithmic description is unintentionally ambiguous. (Over the last couple of weeks, I have experienced this while working with a colleague in California who is writing a program to implement a tie-breaking procedure for our university's basketball conference.) It is all the more true for natural human behaviors like telling stories.

In one of those unusual confluences of ideas, the Times book review came to me the same week that I read Peter Merholz's Why Design Thinking Won't Save You, which is about the value, even necessity, of bringing different kinds of people and thinking to bear on the tough problems we face. Merholz is reacting to a trend in the business world to turn to "design thinking" as an alternative to the spreadsheet-driven analytical thinking that has dominated the world for the last few decades. He argues that "the supposed dichotomy between 'business thinking' and 'design thinking' is foolish", that understanding real problems in the world requires a diversity of perspectives. I agree.

For me, Kakutani's and Merholz's articles intersected in a second way as I applied what they might say about how we build software. Kakutani explicitly connects author Mason's CS background to his consideration of narrative:

["Lost Books" is] a novel that makes us rethink the oral tradition of entertainment that thrived in Homer's day (and which, with its reliance upon familiar formulas, combined with elaboration and improvisation, could be said to resemble software development) ...

When I read Merholz's argument, I was drawn to an analogy with a different kind of writing, journalism:

Two of Adaptive Path's founders, Jesse James Garrett and Jeffrey Veen, were trained in journalism. And much of our company's success has been in utilizing journalistic approaches to gathering information, winnowing it down, finding the core narrative, and telling it concisely. So business can definitely benefit from such "journalism thinking."

So can software development. This passage reminded me of a panel I sat on at OOPSLA several years ago, about the engineering metaphor in software development. The moderator of the panel asked folks in the audience to offer alternative metaphors for software, and Ward Cunningham suggested journalism. I don't recall all the connections he made, but they included working on tight deadlines, having work product reviewed by an editor, and highly stylized forms of writing. That metaphor struck me as interesting then, and I have since written about the relationship between software development and writing, for example here. I have also expressed reservations about engineering as a metaphor for building software, such as here and here.

I have long been coming to believe that we can learn a lot about how to build software better by studying intensely almost every other discipline, especially disciplines in which people make things -- even, say, maps! When students and their parents ask me to recommend minors and double majors that go well with computer science, I often mention the usual suspects but always make a pitch for broadening how we think, for studying something new, or studying intensely an area that really interests the students. Good will come from almost any discipline.

These days, I think that making software is like so many things and unlike them all. It's something new, and we are left to find our own way home. That is indeed part of the fun.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

January 22, 2010 9:23 PM

Calling C. S. Peirce

William Caputo channels the pragmatists:

These days, I believe the key difference between practice, value and principle (something much debated at one time in the XP community and elsewhere) is simply how likely we are to adjust them if things are going wrong for us (i.e., practices change a lot, principles rarely). But none should be immune from our consideration when our actions result in negative outcomes.

To the list of practice, value, and principle, pragmatists like Peirce, James, Dewey, and Mead would add knowledge. When we focus on their instrumental view of knowledge, it is easy to forget one of the critical implications of the view: that knowledge is contingent on experience and context. What we call "knowledge" is not unchanging truth about the universe; it is only less likely to change in the face of new experience than other elements of our belief system.

Caputo reminds us to be humble when we work to help others to become better software developers. The old pragmatists would concur, whether in asking us to focus on behavior over belief or to be open to continual adaptation to our environment. This guidance applies to teaching more than just software development.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

January 11, 2010 8:41 PM

An Uncommon Language Gap

This semester, I am teaching my programming languages course, in which students and I program using Scheme.

Due to an unexpected but welcome uptick in enrollment across the department, I will also be team-teaching a 10-week course on Cobol. We are one of the few CS programs that still try to offer Cobol, and when our attempt this time resulted in a class large enough to run, we didn't want to cancel it.

01  SWITCHES.
    05  EOF-SWITCH              PIC  X          VALUE "F".
        88  AT-END-OF-FILE                      VALUE "T", "t".

It's been a long time since I have spanned two such different languages in the same semester. I'll have to resist the urge to implement curried paragraphs in Cobol, though trying to replicate Cobol's Data Division magic in Scheme might be fun. It will certainly underscore just how different a couple of Cobol's features are from what students encounter in modern languages. It will be interesting to see how my time thinking about Cobol will affect what I say and do in Programming Languages.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 22, 2009 2:57 PM

Agile Themes: Things Students Say

And now even the grading is done. I enjoyed reading my students' answers to exam questions about software engineering, especially agile approaches. Their views were shaped in part by things I said in class, in part by things I asked them to read, and in part by their own experiences writing code. The last of these included a small team project in class, for which two teams adopted many XP practices.

Many people in the software industry have come to think of agile development as implying an incomplete specification. A couple of students inferred this as well, and so came to view one of the weaknesses of agile approaches as a high risk that the team will go in circles or, worse yet, produce an incomplete or otherwise unacceptable system because they did not spend enough time analyzing the problem. Perhaps I can be more careful in how I introduce requirements in the context of agile development.

One exam question asked students to describe a key relationship between refactoring and testing. Several students responded with a variation of "Clean code is easier to test." I am not sure whether this was simply a guess, or this is what they think. It's certainly true that clean code is easier to test, and for teams practicing more traditional software engineering techniques this may be an important reason to refactor. For teams that are writing tests first or even using tests to drive development, though, this is not quite as important as the answer I was hoping for: After you refactor, you need to be able to run the test suite to ensure that you have not broken any features.

Another person wrote an answer that was similar to the one in the preceding paragraph, but I read it as potentially more interesting: "Sometimes you need to refactor in order to test a feature well." Perhaps this answer was meant in the same way as "clean code is easier to test". It could mean something else, though, related to an idea I mentioned last week, design for testability. In XP, refactoring and test-first programming work together to generate the system's design. The tests drive additions to the design, and refactoring ensures that the additions become part of a coherent whole. Sometimes, you need to refactor in order to test well a feature that you want to add in the next iteration. If this is what the student meant, then I think he or she picked up on something subtle that we didn't discuss explicitly in class.
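One way that can play out, sketched in Python; the greeting example and names are hypothetical, mine rather than the student's. A function wired directly to the system clock is hard to test at all, and refactoring the clock into a parameter is exactly the change that lets the next feature be tested well:

```python
import datetime

# Before refactoring: the branch taken depends on the real clock,
# so a test cannot pin down which greeting it should expect.
def greeting_untestable():
    hour = datetime.datetime.now().hour
    return "Good morning" if hour < 12 else "Good afternoon"

# After refactoring: the dependency is explicit.  Now the next
# feature (an evening greeting, say) can be specified by tests.
def greeting(hour):
    if hour < 12:
        return "Good morning"
    if hour < 18:
        return "Good afternoon"
    return "Good evening"

assert greeting(9) == "Good morning"
assert greeting(14) == "Good afternoon"
assert greeting(20) == "Good evening"
```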

When asked what the hardest part of their project had been and what the team had done in response to the challenge, one student said, "We had difficulty writing code, so we looked for ways to break the story into parts." Hurray! I think this team got the idea.

A question near the end of the exam asked students about Fred Brooks's No Silver Bullet paper. They read this paper early in the semester to get some perspective on software engineering and wrote a short essay about their thoughts on it. After having worked on a team project for ten weeks, I asked them to revisit the paper's themes in light of their experience. One student wrote, "A good programmer is the best solution to engineering software."

A lot of the teams seem to have come to a common understanding that their design and programming skills and those of their teammates were often the biggest impediment to writing the software they envisioned. If they take nothing from this course other than the desire and willingness to work hard to become better designers and programmers, then we will have achieved an outcome more important than anything we try to measure with a test.

I agree with Brooks and these students. Good programmers are the best solution to engineering software. The trick for us in computer science is how to grow, build, or find great designers and programmers.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

December 21, 2009 7:37 AM

All Is Done But the Grading

The semester is over. All that remains for us professors is to grade final exams and turn in course grades. All that remains for some students is waiting anxiously for those grades to hit the record -- or asking for e-mail notice as soon as the grade is computed.

Writing the software engineering final exam reminded me how much my ideas about exam-giving have changed over my many years as a teacher. Consider this passage from Lewis Carroll's Alice:

'Give your evidence,' said the King; 'and don't be nervous, or I'll have you executed on the spot.'

Students often feel like the person hearing the king's command. I think back in the old days I secretly felt like the king, and thought that was good. I don't feel that way any more. I still demand a lot, especially in my technical courses, but my mindset is less the king's and more Charles Colton's:

Examinations are formidable even to the best prepared, for the greatest fool may ask more than the wisest man can answer.

I am much more careful about the questions I ask on exams now. If I have any doubts about a question -- what it means, what students might take it to mean, how it relates to what we have done in class -- I try to re-write it in some way. Students taking the exam are working under the constraints of time and nerves, so it's important that questions be as clear and as straightforward as possible.

Surely I fail in this at times, but at least now I am aware of the problem and try to solve it. In my early years as a professor, I was probably a bit too cavalier. I figured that the grades would all work themselves out in the end. They always did, but I was forgetting about something else: the way students experienced the exams. Those feelings color how students feel about the course, and even the course's topic, along the way.

I've also changed a bit in how I think about grades. I have never thought of myself as "giving" grades to students; I merely assigned the grades that students earned. But I was pretty well fixed in how I approached the earning of grades. Do the homework, do the assignments, take the tests -- earn the grade. I've always been willing to make course-level adjustments in a due date or in how I would grade an assignment, in response to what is happening with me and the students. I feel more flexible these days in making individual adjustments, too, though I can't think of many specific examples to serve as evidence that my feeling is warranted.

I do still have some quirks that set my grading apart from many of my colleagues'. Assignments are due when they are due. (I've written about that before.) I do not prepare study guides for the class. (That seems like the students' job.) And I don't create gratuitous extra-credit work at the end of the term for students who simply didn't do the regularly-assigned work earlier in the semester. (That hardly seems fair to the students who did the work.) But my mentality is different. I have always tried to encourage and reassure students. Now I try to pay as much attention to the signals I send implicitly as to my explicit behavior. Again, I know I don't always succeed in this, but my students are probably better off when I'm trying than when I'm oblivious.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 16, 2009 9:51 PM

A Typical Day Thinking Backward

Today I was thinking retrospectively about themes as I wrote the final exam for my software engineering course. It occurred to me that one essential theme of the course needs to be design for testability. We talked about issues such as coupling and cohesion, design heuristics and patterns for making loosely connected, flexible code, and Model-View-Controller as an architectural style. Yet much of the code they wrote was coupled too tightly to test conveniently and thoroughly. I need to help them connect the dots from the sometimes abstract notions of design to both maintenance and testing. This will help me bring some coherence to the design unit of the course.

I am beginning to think that much of the value in this course comes from helping students to see the relationships among the so-called stages of the software life cycle: design and testing, specification and testing, design and maintenance, and so on. Each stage is straightforward enough on its own. We can use our time to consider how they interact in practice, how each helps and hinders the others. Talking about relationships also provides a natural way to discuss feedback in the life cycle and to explore how the agile approaches capitalize on the relationships. (Test-driven development is the ultimate in design for testability, of course. Every bit of code is provoked by a test!)
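To make that parenthetical concrete, here is a minimal test-first sketch in Python; the grading example and names are mine, not from the course. The test is written before the function exists, so the test literally provokes the code and shapes its interface as a pure, easily tested function:

```python
# The test comes first: it names the behavior we want and fixes the
# interface before any implementation exists.
def test_letter_grade():
    assert letter_grade(92) == "A"
    assert letter_grade(75) == "C"
    assert letter_grade(40) == "F"

# Then just enough code to make the test pass.
def letter_grade(score):
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return letter
    return "F"

test_letter_grade()   # passes silently
```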

I realize that these aren't heady revelations. Most of you probably already know this stuff. It's amazing that I can teach a course on writing software for an entire semester, after so many years of writing software myself, and only come to see such a basic idea clearly after having made a first pass. I guess I'm slow. Fortunately, I do seem eventually to learn.

Last night I read a few words that I needed to see. They come from Elizabeth Gilbert, on writing:

Quit your complaining. It's not the world's fault that you wanted to be an artist. It's not the world's job to enjoy the films you make, and it's certainly not the world's obligation to pay for your dreams. Nobody wants to hear it. Steal a camera if you have to, but stop whining and get back to work.

Plug 'programmer' or 'teacher' in for 'artist', and 'laptop' for 'camera', and this advice can help me out on most days. Not because I feel unappreciated, but because I feel pulled in so many directions away from what I really want to do: preparing better courses and -- on far, far too many days -- writing code. Like Gilbert, I need to repeat those words to myself whenever I start to feel resentful of my other duties. No one cares. I need to find ways to get back to work.

Gilbert closes her essay with more uplifting advice that also feels right:

My suggestion is that you start with the love and then work very hard and try to let go of the results.

If you love to program, or to teach, or to run, you do it. Remember the love that got you into the game, and let it keep you there. The rest will follow.

Oh, and if you haven't seen Gilbert's TED talk, walk back to her home page and watch. It's a good one.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

December 14, 2009 8:45 PM

Initial Thoughts on Teaching Software Engineering

We have entered finals week, which for me means grading team projects and writing and grading a final exam. As I think back over the term, a few things stand out.

Analysis. This element of software engineering was a challenge for me. The previous instructor was an expert in gathering requirements and writing specs, but I did not bring that expertise to the course. I need to gather better material for these topics and think about better ways to help students experience them.

Design and implementation. The middle part of my course disappointed me. These are my favorite parts of making software and the areas about which I know the most, both theoretically and practically. Unfortunately, I never found a coherent approach for introducing the key ideas or giving students deep experiences with them. In the end, my coverage felt too clinical: software architectures, design patterns, refactoring... just ideas. I need to design more convincing exercises to give students a feel for doing these; one big team project isn't enough. Too much of the standard software engineering material here boils down to "how to make UML diagrams". Blech.

Testing. Somewhat to my surprise, I enjoyed this material as much as anything in the course. I think now that I should invert the usual order of the course and teach testing first. This isn't all that crazy, given the relationship between specs and tests, and it would set us up to talk about test-driven design and refactoring in much different ways. The funny thing is that my recently-retired software engineering colleague, who has taught this course for years, said this idea out loud first, with no prompting from me!

More generally, I can think of two ways in which I could improve the course. First, I sublimated my desire to teach an agile-driven course far too much. This being my first time to teach the course, I didn't want to fall victim to my own biases too quickly. The result was a course that felt too artificial at times. With a semester under my belt, I'll be more comfortable next time weaving agile threads throughout the course more naturally.

Second, I really disappointed myself on the tool front. One of my personal big goals for the course was to be sure that students gained valuable experience with build tools, version control, automated testing tools, and a few other genres. Integrating tool usage into a course like this takes either a fair amount of preparation time up front, or a lot more time during the semester. I don't have as much in-semester time as I'd like, and in retrospect I don't think I banked enough up-front time to make up for that. I will do better next time.

One thing I think would make the course work better is to use an open-source software project or two as a running example in class throughout the semester. An existing project would provide a concrete way to introduce both tools and metrics, and a new project would provide a concrete way to talk about most of the abstract concepts and the creative phases of making software.

All this said, I do think that the current version of the course gave students a chance to see what software engineering is and what doing it entails. I hope we did a good enough job to have made their time well-spent.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

November 20, 2009 3:35 PM

Learning Through Crisis

... an author never does more damage to his readers
than when he hides a difficulty.
-- Évariste Galois

Like many of the aphorisms we quote for guidance, this one is true, but not quite true if taken with the wrong sense of its words or at the wrong scale.

First, there are different senses of the word "difficulty". Some difficulties are incidental, and some are essential. An author should indeed hide incidental difficulties; they only get in the way. However, the author must not hide essential difficulty. Part of the author's job is to help the readers overcome the difficulty.

Second, we need to consider the scale of revelation and hiding. Authors who expose difficulties too soon only confuse their readers. Part of the author's job is to prepare the reader, to explain, inspire, and lead readers from their initial state into a state where they are ready to face the difficulty. At that moment, the author is ready to bring the difficulty out into the open. The readers are ready.

What if the reader has already uncovered the difficulty before meeting the author? In that case, the author must not try to hide it, to fool his readers. He must attack it head on -- perhaps with the same deliberation in explaining, inspiring, and leading, but without artifice. It is this sense in which Galois has nailed a universal truth.

If we replace "author" with "teacher" in this discussion we still have truths. The teacher's job is to eliminate incidental difficulties while exposing essential ones. Yet the teacher must be deliberate, too, and prepare the reader, the student, to overcome the difficulty. Indeed, a large part of the teacher's craft is the judicious use of simplification and unfolding, leading students to a deeper understanding.

Sometimes, we teachers can use difficulty to our advantage. As I discussed recently, the brain often learns best when it encounters its own limitations. Some say that is the only way we learn, but I don't think I believe the notion when taken to this extreme. But I think that difficulty is often the teacher's best source of leverage. Confront students with difficulty, and then help them to find resolution.

Ben Blum-Smith expresses a similar viewpoint in his recent nugget on teaching students to do proofs in mathematics. He launches his essay with remarks by Paul Lockhart, whose essay I discussed last summer. Blum-Smith's teaching nugget is this:

The impulse toward rigorous proof comes about when your intuition fails you. If your intuition is never given a chance to fail you, it's hard to see the point of proof.

This is just as true for us as we learn to create programs as it is when we learn to create proofs. If our intuition and our current toolbox never fail us, it's hard to see the point of learning a new tool -- especially one that is difficult to learn.

Blum-Smith then quotes Lockhart:

Rigorous formal proof only becomes important when there is a crisis -- when you discover that your imaginary objects behave in a counterintuitive way; when there is a paradox of some kind.

This quote doesn't inspire cool thoughts in me the way so many other passages in Lockhart's paper do, but one word stands way out on this reading: crisis. It inspires Blum-Smith as well:

... what happens is that when kids reach a point in their mathematical education where they are asked to prove things, they find
  • that they have no idea how to accomplish what is being asked of them, and
  • that they don't really get why they're being asked to do it in the first place.

The way out of this is to give them a crisis. We need to give them problems where the obvious pattern is not the real pattern. What you see is not the whole story! Then, there is a reason to prove something.

We need to give our programming students problems in which the obvious solution, the solution that flows naturally from their fingers onto the keyboards, doesn't feel right, or maybe even doesn't work at all. There is more to the story; there is reason to learn something new.
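A classic example of such a crisis problem (my illustration, not one from the post) is Moser's circle problem: place n points on a circle, join every pair with a chord, and count the regions. The first five counts -- 1, 2, 4, 8, 16 -- scream "powers of two", but the pattern breaks at n = 6. The true count is C(n,4) + C(n,2) + 1, a quick sketch:

```python
from math import comb

def circle_regions(n):
    """Maximum number of regions formed by chords joining n points on a circle."""
    return comb(n, 4) + comb(n, 2) + 1

print([circle_regions(n) for n in range(1, 7)])
# → [1, 2, 4, 8, 16, 31] -- the apparent doubling pattern fails at n = 6
```

A student who conjectures 2^(n-1) from the first five cases now has a genuine reason to want a proof.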

Teachers who know a lot and can present useful knowledge to students can be quite successful, and every teacher really needs to be able to play this role sometime. But that is not enough, especially in a world where increasingly knowledge is a plentiful commodity. Great teachers have to know how to create in the minds of their students a crisis: a circumstance in which they doubt what they know just enough to spur the hard work needed to learn.

A good writer can do this in print, but I think that this is a competitive advantage available to classroom teachers: they operate in a more visceral environment, in which one can create safe and reliably effective crises in their students' minds. If face-to-face university courses with domain experts are to thrive in the new, connected world, it will be because they are able to exploit this advantage.

~~~~

Postscript: Galois, the mathematician quoted at the top of this article, was born on October 25. That was the date of one of my latest confrontations with difficulty. Let me assure you: You can run, but you cannot hide!


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

November 15, 2009 8:02 PM

Knowledge Arbitrage

A couple of weeks back, Brian Foote tweeted:

Ward Cunningham: Pure Knowledge arbitrageurs will no longer gain by hoarding as knowledge increasingly becomes a plentiful commodity #oopsla

This reminds me of a "quought" of the day that I read a couple of years ago. Paraphrased, it asked marketers: What will you do when all of your competitors know all of the same things you do? Ward's message broadens the implication from marketers to any playing field on which knowledge drives success. If everyone has access to the same knowledge, how do you distinguish yourself? Your product? The future looks a bit more imposing when no one starts with any particular advantage in knowledge.

Ward's own contributions to the world -- the wiki and extreme programming among them -- give us a hint as to what this new future might look like. Hoarding is not the answer. Sharing and building together might be.

The history of the internet and the web tells us that the result of collaboration and open knowledge may well be a net win for all of us over a world in which knowledge is hoarded and exploited for gain in controlled bursts.

Part of the ideal of the academy has always been the creation and sharing of knowledge. But increasingly its business model has been exposed as depending on the sort of knowledge arbitrage that Ward warns against. Universities now compete in a world of knowledge more plentiful and open than ever before. What can they do when all of their customers have access to much of the same knowledge that they hope to disseminate? Taking a cue from Ward, universities probably need to be thinking hard about how they share knowledge, how they help students, professors, and industry build knowledge together, and how they add value in their unique way through academic inquiry.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

November 13, 2009 2:18 PM

Learning via Solutions to our Limitations

Yesterday I introduced refactoring in my software engineering course. Near the beginning of my code demo, I got sidetracked a bit when I mentioned that I would be using JUnit to run some automated tests. We have not talked about testing yet, automated or otherwise, and I thought that refactoring might be a good way to show its value.

One student wondered why he should go to the trouble; why not just write a few lines of code to do his own testing? My initial response turned too quickly to the idea of automation, which seemed natural given the context of refactoring. Automating tests is essential when we are working in a tight cycle of code-test-refactor-test. This wasn't all that persuasive to the student, who had not seen us refactor yet. Fortunately, another student, who has used testing frameworks at work, jumped in to point out the real flaw in what the first student had proposed: interspersing test code and production code. I think that was more persuasive to the class, and we moved on.
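The class demo used JUnit in Java; here is an analogous sketch in Python's standard-library unittest (my example, not the course code) of the point the second student made: the production code lives in one place, and the framework keeps the checks separate and reruns them all with one command.

```python
import unittest

# Production code: lives apart from the tests that exercise it.
def word_count(text):
    """Count occurrences of each whitespace-separated word in a string."""
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# Test code: no ad hoc checks interspersed with the production code above.
class TestWordCount(unittest.TestCase):
    def test_counts_repeats(self):
        self.assertEqual(word_count("a b a"), {"a": 2, "b": 1})

    def test_empty_string(self):
        self.assertEqual(word_count(""), {})

if __name__ == "__main__":
    unittest.main()
```

After a refactoring, rerunning the whole suite is a single command, which is what makes the tight code-test-refactor-test cycle practical.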

That got me to thinking about a different way to introduce both testing frameworks and refactoring next time. The key pedagogical idea is to focus on students' current experience and why they need something new. Necessity gives birth not only to invention but also to the desire to learn.

Some days, I think the web is magic. This popped into my newsfeed when I refreshed this morning:

whenever possible, introduce new skills and new knowledge as the solution to the limitations of old skills and old knowledge

Meyer, who teaches HS math, has a couple of images contrasting the typical approach to lesson planning (introduce concept, pay "brief homage to workers who use it", work sample problems) to an approach based on the limitations of old skills:

  1. summarize briefly relevant prior skills
  2. show a "sample problem that renders those skills pretty well useless"
  3. describe the new skill

I like to teach design patterns using a more active version of this approach:

  1. give the students a problem to solve, preferably one that looks like a good fit for their current skill set
  2. as a group, explore the weaknesses in their solutions or the difficulties they had creating them
  3. introduce a pattern that balances the forces in this problem, and then discuss the more general context in which it applies

I need to remember to use this strategy with more of the new skills and techniques. It's hard to do this in the small for all techniques, but when I can tie the new idea to an error students make or a difficulty they have, I usually have better success. (My favorite success story with this approach was helping students to learn selection patterns -- ways to use if statements -- in CS1 back in the mid-1990s.)


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

October 30, 2009 4:31 PM

Writing to Learn, Book-Style

I know all about the idea of "writing to learn". It is one of the most valuable aspects of this blog for me. When I first got into academia, though, I was surprised to find how many books in the software world are written by people who are far from experts on the topic. Over the years, I have met several serial authors who pick a topic in conjunction with their publishers and go. Some of these folks write books that are successful and useful to people. Still the idea has always seemed odd.

In the last few months, I've seen several articles in which authors talk about how they set out to write a book on a topic they didn't know well or even much at all. Last summer, Alex Payne wrote this about writing the tapir book:

I took on the book in part to develop a mastery of Scala, and I've looked forward to learning something new every time I sit down to write, week after week. Though I understand more of the language than I did when I started, I still don't feel that I'm on the level of folks like David Pollak, Jorge Ortiz, Daniel Spiewak, and the rest of the Scala gurus who dove into the language well before Dean or I. Still, it's been an incredible learning experience ...

Then today I ran across Noel Rappin's essay about PragProWriMo:

I'm also completely confident in this statement -- if you are willing to learn new things, and learn them quickly, you don't need to be the lead maintainer and overlord to write a good technical book on a topic. (Though it does help tremendously to have a trusted super-expert as a technical reference.)

Pick something that you are genuinely curious about and that you want to understand really, really well. It's painful to write even a chapter about something that doesn't interest you.

This kind of writing to learn is still not a part of my mentality. I've certainly chosen to teach courses in order to learn -- to have to learn -- something I want to know, or know better. For example, I didn't know any PHP to speak of, so I gladly took on a 5-week course introducing PHP as a scripting language. But I have a respect for books, perhaps even a reverence, that makes the idea of publishing one on a subject I am not expert in unpalatable. I have too much respect for the people who might read it to waste their time.

I'm coming to learn that this probably places an unnecessary limit on myself. Articles like Payne's and Rappin's remind me that I can study something and become expert enough to write a book that is useful to others. Maybe it's time to set out on that path.

Getting people to take this step is one good reason to heed the call of Pragmatic Programmers Writing Month (PragProWriMo), which is patterned after the more generic NaNoWriMo. Writing is like anything else: we can develop a habit that helps us to produce material regularly, which is a first and necessary step to ever producing good material regularly. And if research results on forming habits are right, we probably need a couple of months of daily repetitions to form a habit we can rely on.

So, whether it's a book or blog you have in mind, get to writing.

(Oh, and you really should click through the link in Rappin's essay to Merlin Mann's Making the Clackity Noise for a provocative -- if salty -- essay on why you should write. From there, follow the link to Buffering, where you will find a video of drummer Sonny Payne playing an extended solo for Count Basie's orchestra. It is simply remarkable.)


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

October 20, 2009 8:21 PM

AP Computer Science, Back on Peoples' Minds

A while back -- a year? two? -- the folks at the College Board announced some changes to the way they would offer the AP exams in computer science. I think the plan was to eliminate the B exam (advanced) and redesign the A exam (basic). At the time, there was much discussion among CS educators, at conferences, on the SIGCSE mailing list, and in a few blogs. In 2008 sometime, I read a CACM article by a CS educator on the issue. Her comments were interesting enough that I made some notes in the margins and set the article aside. I also collected a few of my thoughts about the discussions I had read and heard into a text file. I would write a blog article!

But I never did.

I went looking for that text file today. I found it in a folder named recent/, but it is not recent. The last time I touched the file was Tuesday, December 9, 2008.

I guess it wasn't all that urgent after all.

Actually, this isn't all that uncommon for blog article ideas. Many come to mind, but few make it to the screen. Yet this seems different. When the original news was announced, the topic seemed so urgent to many of my close friends and colleagues, and that made it seem urgent to me. The Future of Computer Science in the High Schools was at stake. Yet I could never bring myself to write about the article.

To be honest, it is hard for me to care much about AP. I have been teaching at my university for over seventeen years, and I cannot recall a single student who asked us about AP CS credit. We simply never see it.

Computer programming courses long ago disappeared from most high schools in my state. I am willing to wager that no Iowa schools ever taught computer science qua computer science; if any did, the number was nearly zero. Even back in the early to mid-1990s when dedicated CS courses existed, they were always about learning to program, usually in Basic or Pascal. That made sense, because the best way to help high school students get ready for the first college CS course is to introduce them to programming. Whatever you think about programming as the first course, that is the way most universities work, as well as nearly every college in Iowa. Those programming courses could have been AP courses, but most were not.

Unfortunately, falling budgets, increasing demands in core high school subjects, and a lack of certified CS teachers led many schools to cut their programming courses. If students in my state see a "computer course" in high school these days, it is almost always a course on applications, usually productivity tools or web design.

Maybe I am being self-centered in finding it hard to care about the AP CS exams. We do not see students with AP CS credit or receive inquiries about its availability here. AP CS matters a lot to other people, and they are better equipped to deal with the College Board and the nature and content of the exams.

Then again, maybe I am being short-sighted. Many argue that AP CS is the face of computer science in the high schools, and for better or worse it defines what most people in the K-12 world think CS is. I am less bothered with programming as the focus of that course than many of my friends and colleagues. I'm even sympathetic to Stuart Reges's ardent defense of the current exam structure, posted on his University of Washington site. But I do think that the course and exam could do a better job of teaching and testing programming than it has over the last decade or so.

Should the course be more than programming, or different altogether? I am open to that, too; CS certainly is more than "just programming". Alas, I am not sure that the academic CS world can design a non-programming high school CS course that satisfies enough of the university CS departments to garner widespread adoption.

But for someone at a university like mine, and in a state like mine, all of the money and mindshare spent on AP Computer Science seems to go for naught. It may benefit the so-called top CS programs, the wealthier school districts, and the students in states where computing already has more of a presence in the high school classroom. In my context? It's a no-op.

Why did I dig a ten-month old text file out for blogging now? There is much ado again about AP CS in light of the Georgia Department of Education announcing that AP Computer Science would no longer count towards high school graduation requirements. This has led to a wide-ranging discussion about whether CS should count as science or math (the TeachScheme! folks have a suggestion for this), the content of the course, and state curriculum standards. Ultimately, the issue comes down to two things: politics, both educational and governmental, and the finite number of hours available in the school day.

So, I will likely return to matters of greater urgency to my university and my state. Perhaps I am being short-sighted, but the simple fact is this. The AP CS curriculum has been around for a long time, and its existence has been of no help in getting my state to require or endorse high school CS courses, certify high school CS teachers, or even acknowledge the existence of computer science as a subject or discipline essential to the high school curriculum. We will continue to work on ways to introduce K-12 students to computer science and to help willing and interested schools to do more and better CS-related material. The AP CS curriculum is likely to have little or no effect on our success or failure.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 13, 2009 9:31 AM

Living with Yesterday

After my long run yesterday, I was both sorer and more tired ('tireder'?) than after last Sunday's big week and fast long run. Why? I cut my mileage last week from 48 miles to 38, and my long run from 22 miles to 14. I pushed hard only during Wednesday's track workout. Shouldn't last week have felt easy, and shouldn't I be feeling relatively rested after an easy long run yesterday?

No, I shouldn't. The expectation that I should is a mental illusion that running long ago taught me was an impostor. It's hard to predict how I will feel on any day, especially during training, but the best predictor isn't what I did this week, but last; not today, but yesterday.

Intellectually, this should not surprise us. The whole reason we train today is to be better -- faster, stronger, more durable -- tomorrow. My reading of the running literature says that it takes seven to ten days for the body to integrate the effects of a specific workout. It makes sense that the workout can be affecting our body in all sorts of ways during that period.

This is good example of how running teaches us a lesson that is true in all parts of life:

We are what and who we are today because of what we did yesterday.

This is true of athletic training. It is true of learning and practice more generally. What we practice is what we become.

More remarkable than the fact that this is true in my running is that I can know and write about this habit of mind as an intellectual idea without making an immediate connection to my running. I often find in writing this blog that I come back around on the same ideas, sometimes in a slightly different form and sometimes in much the same form as before. My mind seems to need that repetition before it can internalize these truths as universal.

When I say that I am living with yesterday, I am not saying that I can live anywhere but in this moment. That is all I have, really. But it is wise to be mindful that tomorrow will find me a product of what I do today.


Posted by Eugene Wallingford | Permalink | Categories: General, Running, Teaching and Learning

October 08, 2009 5:33 PM

Whom Should We Bore?

Alfred Thompson says that we should beware boring the smart kids in our classes, as part of a wider discussion about losing potential CS majors because of what they experience in CS1. I agree with many people that we need to work hard to bore neither our brightest students nor our average students. And it is hard work, indeed. It's hard enough sometimes even when we have only one of these groups of students in class, say, when we teach an honors section.

However, what struck me most as I read through several blog entries and comments in this conversation was this:

It is sad that we have created an educational system in which it's possible to think that this is a legitimate choice: risk losing great students OR risk losing average students.

We have created a system of schools and universities that are organized more for administrative convenience and high throughput than for learning. Actually, our system dates to a time when schools and classes were smaller, and it just doesn't scale very well as either grows.

Our regimented curricula and university systems are the real problem, not the teachers or the students.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 27, 2009 11:19 AM

History Mournful and Glorious

While prepping for my software engineering course last summer, I was re-reading some old articles by Philip Greenspun on teaching, especially an SE course focused on building on-line communities. One of the talks he gives is called Online Communities. This talk builds on the notion that "online communities are at the heart of most successful applications of the Internet". Writing in 2006, he cites amazon.com, AOL, and eBay as examples, and the three years since have only strengthened his case. MySpace seems to have passed its peak yet remains an active community. I sit here connected with friends from grade school who have been flocking to Facebook in droves, and Twitter is now one of my primary sources for links to valuable professional articles and commentary.

As a university professor, the next two bullets in his outline evoke both sadness and hope:

  • the mournful history of applying technology to education: amplifying existing teachers
  • the beauty of online communities: expanding the number of teachers

Perhaps we existing faculty are limited by our background, education, or circumstances. Perhaps we simply choose the more comfortable path of doing what has been done in the past. Even those of us invested in doing things differently sometimes feel like strangers in a strange land.

The great hope of the internet and the web is that it lets many people teach who otherwise wouldn't have a convenient way to reach a mass audience except by textbooks. This is a threat to existing institutions but also perhaps an open door on a better world for all of us.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

September 24, 2009 8:07 PM

Always Start With A Test

... AKA Test-Driven X: Teaching

Writing a test before writing code provides a wonderful level of accountability to self and world. The test helps me know what code to write and when I am done. I am often weak and like being able to keep myself honest. Tests also give me a way to show my colleagues and my boss where the work stands.

These days, I usually think in test-first terms whenever I am creating something. More and more, I find myself wondering whether a test-driven approach might work even better. In a recent blog entry, Ron Jeffries asked for analogues of test-driven development outside the software world. My first thought was, what can't be done test-first or even test-driven? Jeffries is, in many ways, better grounded than I am, so rather than talk about accountability, he writes about clarity and concreteness as virtues of TDD. Clarity, concreteness, and accountability seem like good features to build into most processes that create useful artifacts.

I once wrote about student outcomes assessment as a source of accountability and continuous feedback in the university. I quoted Ward Cunningham at the top of that entry, "It's all talk until the tests run", to suggest to myself a connection to test-first and test-driven development.

Tests are often used to measure student outcomes from courses that we teach at all levels of education. Many people worry about placing too much emphasis on a specific test as a way to evaluate student learning. Among other things, they worry about "teaching to the test". The implication is that we will focus all of our instruction and learning efforts on that test and miss out on genuine learning. Done poorly, teaching to the test limits learning in the way people worry it will. But we can make a similar mistake when using tests to drive our programming, by never generalizing our code beyond a specific set of input values. We don't want to do that in TDD, and we don't want to do that when teaching. The point of the test is to hold us accountable: Can our students actually do what we claim to teach them?
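A toy illustration (mine, not from the post) of the TDD analogue of teaching to the test: a single hard-coded check is satisfied by an implementation that has "learned" nothing, and only new cases force the generalization.

```python
# An implementation "taught to the test": it passes the one check by memorizing it.
def square_overfit(n):
    return 9  # hard-coded to satisfy square(3) == 9

# The generalized implementation that the tests are meant to drive out.
def square(n):
    return n * n

assert square_overfit(3) == 9   # the lone test passes either way
assert square(3) == 9
assert square(4) == 16          # only the general version survives new cases
```

In TDD the fix is to keep adding tests until faking it is harder than solving it; in teaching, the analogue is assessing students on problems they have not memorized.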

Before the student learning outcomes craze, the common syllabus was the closest thing most departments had to a set of tests for a course. The department could enumerate a set of topics and maybe even a set of skills expected of the course. Faculty new to the course could learn a lot about what to teach by studying the syllabus. Many departments create common final exams for courses with many students spread across many sections and many instructors. The common final isn't exactly like our software tests, though. An instructor may well have done a great job teaching the course, but students have to invest time and energy to pass the test. Conversely, students may well work hard to make sense of what they are taught in class, but the instructor may have done a poor or incomplete job of covering the assigned topics.

I thought a lot about TDD as I was designing what is for me a new course this semester, Software Engineering. My department does not have common syllabi for courses (yet), so I worked from a big binder of material given to me by the person who has taught the course for the last few years. The material was quite useful, but it stopped short of enumerating the specific outcomes of the course as it has been taught. Besides, I wanted to put my stamp on the course, too... I thought about what the outcomes should be and how I might help students reach them. I didn't get much farther than identifying a set of skills for students to begin learning and a set of tools with which they should be familiar, if not facile.

Greg Wilson has done a very nice job of designing his Software Carpentry course in the open and using user stories and other target outcomes to drive his course design. In modern parlance, my efforts in this regard can be tagged #fail. I'm not too surprised, though. For me, teaching a course the first time is more akin to an architectural spike than a first iteration. I have to scope out the neighborhood before I know how to build.

Ideally, perhaps I should have done the spike prior to this semester, but neither the world nor I are ideal. Doing the course this way doesn't work all that badly for the students, and usually no worse than taking a course that has been designed up front by someone who hasn't benefitted from the feedback of teaching the course. In the latter case, the expected outcomes and tests to know they have been met will be imperfect. I tend to be like other faculty in expecting too much from a course the first few times I teach it. If I design the new course all the way to the bottom, either the course is painful for students in expecting too much too fast, or it is painful for me as I un-design and re-design huge portions of the course.

Ultimately, writing and running tests come back to accountability. Accountability is in short supply in many circles, university curricula included, and tests help us to have it. We owe it to our users, and we owe it to ourselves.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

September 19, 2009 9:09 PM

Quick Hits with an Undercurrent of Change

Yesterday evening, in between volleyball games, I had a chance to do some reading. I marked several one-liners to blog on. I planned a disconnected list of short notes, but after I started writing I realized that they revolve around a common theme: change.

Over the last few months, Kent Beck has been blogging about his experiences creating a new product and trying to promote a new way to think about his design. In his most recent piece, Turning Skills into Money, he talks about how difficult it can be to create change in software service companies, because the economic model under which they operate actually encourages them to have a large cohort of relatively inexperienced and undertrained workers.

The best line on that page, though, is a much-tweeted line from a comment by Niklas Bjørnerstedt:

A good team can learn a new domain much faster than a bad one can learn good practices.

I can't help thinking about the change we would like to create in our students through our software engineering course. Skills and good practices matter. We cannot overemphasize the importance of proficiency, driven by curiosity and a desire to get better.

Then I ran across Jason Fried's The Next Generation Bends Over, a salty and angry lament about the sale of Mint to Intuit. My favorite line, with one symbolic editorial substitution:

Is that the best the next generation can do? Become part of the old generation? How about kicking the $%^& out of the old guys? What ever happened to that?

I experimented with Mint and liked it, though I never convinced myself to go all the way with it. I have tried Quicken, too. It seemed at the same time too little and too much for me, so I've been rolling my own. But I love the idea of Mint and hope to see the idea survive. As the industry leader, Intuit has the leverage to accelerate the change in how people manage their finances, compared to the smaller upstart it purchased.

For those of us who use these products and services, the nature of the risk has just changed. The risk with the small guy is that it might fold up before it spreads the change widely enough to take root. The risk with the big power is that it doesn't really get it and wastes an opportunity to create change (and wealth). I suspect that Intuit gets it and so hold out hope.

Still... I love the feistiness that Fried shows. People with big ideas need not settle. I've been trying to encourage the young people with whom I work, students and recent alumni, to shoot for the moon, whether in business or in grad school.

This story meshed nicely with Paul Graham's Post-Medium Publishing, in which Graham joins in the discussion of what it will be like for creators no longer constrained by the printed page and the firms that have controlled publication in the past. The money line was:

... the really interesting question is not what will happen to existing forms, but what new forms will appear.

Change will happen. It is natural that we all want to think about our esteemed institutions and what the change means for them. But the real excitement lies in what will grow up to replace them. That's where the wealth lies, too. That's true for every discipline that traffics in knowledge and ideas, including our universities.

Finally, Mark Guzdial ruminates on what changes CS education. He concludes:

My first pass analysis suggests that, to make change in CS, invent a language or tool at a well-known institution. Textbooks or curricula rarely make change, and it's really hard to get attention when you're not at a "name" institution.

I think I'll have more to say about this article later, but I certainly know what Mark must be feeling. In addition to his analysis of tools and textbooks and pedagogies, he has his own experience creating a new way to teach computing to non-majors and majors alike. He and his team have developed a promising idea, built the infrastructure to support it, and run experiments to show how well it works. Yet... The CS ed world looks much like it always has, as people keep doing what they've always been doing, for as many reasons as you can imagine. And inertia works against even those with the advantages Mark enumerates. Education is a remarkably conservative place, even our universities.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Software Development, Teaching and Learning

September 10, 2009 9:48 PM

Starting to Think

Today one of my students tweeted that he had started doing something new: setting aside time each day to sit and think. This admirable discipline led to a Twitter exchange among students that ended with the sentiment that schools don't teach us how to think.

I'm not sure what I think about this. At one level, I know what they mean. Universities offer a lot of courses in which we expect students to be able to think already, but it's rare when courses aim specifically to teach students how to think. Similarly, we don't often teach students how to read, and in software engineering, we don't often teach students how to do analysis. We demonstrate it and hope they catch on. (Welcome to calculus!)

Some courses take aim at how to think, or at least do so on paper, but they tend to be gen ed courses or courses in philosophy that most students don't take, or don't take seriously. Majors courses are the best hope for many, because there is some hope that students will care enough about their majors to want to learn to think like experts in the discipline. In CS, our freshman-level discrete structures course is a place where we purport to help students reason the way computer scientists do. In Software Engineering, I've decided the only way we can possibly learn how to build software is to do it and then analyze what happens. Again, I am not so much teaching students how to think as hoping to put them in a position where they can learn.

This is one part of my attempt to start where students are. But where do we go, or try to go, from there?

That said, lots of people in education and academia spend a lot of their time thinking about how to help students learn to think. Last month, I saved a link to a post at Dangerously Irrelevant on Education and Learning to Think. That post assumes that one goal of education is "higher-order thinking" -- thinking about thinking so that we can learn to do it better. It lists a number of features of higher-order thinking, goals for our students and thus for our courses. The list looks awfully attractive to me right now, because my current teaching assignment aims to prepare prospective software developers to work as professionals, and these are precisely the sorts of skills we would like for them to have before they graduate. Systems analysis and requirements gathering are all about imposing meaning, finding structure in apparent disorder. Building software involves uncertainty. Not everything that bears on the task at hand is known. Judgment and self-regulation are essential skills of the professional, but much of our students' previous fourteen or fifteen years of education has taught them not to self-regulate, if only by giving them so few opportunities to do so. When faced with it for the first time, they tend to balk. Should we be surprised?

There are other possible thinking outcomes we might aim for. A while back, I wrote about a particular HS teacher's experience as a summer CS student in Teaching Is Hard. Mark Guzdial also wrote about that teacher's blog entries, and Alan Kay left a comment on Mark's blog. Alan suggested that one of the key traits that we must help students develop is skepticism. This is one of the defining traits of the scientist, who must question what she sees and hears, run experiments to test ideas, and gather evidence to support claims. One of the great lessons of the Enlightenment is that we all can and should think and act like scientists, even when we aren't "doing science". The methods of science are the most reliable way for us to understand the world in which we live. Skepticism and experiment are the best ways to improve how we think and act.

There is more. People such as Richard Gabriel and Paul Graham tell us that education should help us develop taste. This is one of the defining traits of the maker. Just as all people should be able to think and act like scientists, so should all people be able to think and act like creators. This is how we shape the world in which we live. Alan Kay talks about taste, too, in other writings and would surely find much to agree with in Gabriel's and Graham's work.

All this adds up to a pretty tall order for our schools and universities, for our apprenticeships and our workplaces. I don't think it's too much of a cop-out to say that one of the best ways to learn all of these higher-order skills is to do -- act like scientists, act like creators -- and then reflect on what happens.

If this were not enough, the post at Dangerously Irrelevant linked to above closes with a quote that sets up one last hurdle for students and teachers alike:

It is a new challenge to develop educational programs that assume that all individuals, not just an elite, can become competent thinkers.

100% may be beyond our reach, but in general I start from this assumption. Like I said, teaching is hard. So is learning to think.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

August 31, 2009 8:20 PM

Programming Behind the Curtain

Greg Wilson wrote a nice piece recently on the big picture for his Software Carpentry course. His first paragraph captures an idea that is key to the notion of "programming for all" that I and others have been talking about lately:

One of the lessons we learned ... is that most scientists don't actually want to learn how to program--they want to solve scientific problems. To many, programming is a tax they have to pay in order to do their research. To the rest, it's something they really would find interesting, but they have a grant deadline coming up and a paper to finish.

Most people don't care about programming. If they do care, they don't like it, or think they don't. They want to do something. Programming has to be a compelling tool, something that makes their lives better as they do the thing they like to do. If it changes how they think about problems and solutions, all the better. When we teach programming to people who are non-CS types, we should present it as such: a compelling tool that makes their lives better. If we teach programming qua programming, we will lose them before we have a chance to make a difference for them.

It turns out that all this is true of CS students, too. Most of them want to do something. And several of the lessons Wilson describes for working with scientists learning to program apply to CS students, such as:

  • When students have never experienced the pain of working with big or long-lived programs that benefit from good abstractions, effective interfaces, and tools such as version control systems, "no amount of handwaving is going to get the idea across".
  • The best way to reach students is to give them programming skills that will pay off for them in the short term. This motivates better than deferred benefit and encourages them to dig in deeper on their own. That's where the best learning happens anyway.
  • Students are often surprised when a course is able to "convey the fundamental ideas needed to make sensible decisions about software without explicitly appearing to do so".

This issue is close to the surface for me as I work on my software engineering course. Undergraduate courses don't usually expose students to the kind of software development projects that are likely to give the right context for learning many of the "big picture" ideas. An in-course project can help, but contemporaneous experience often runs out of sync with the software engineering content of the course. Next semester, they will take a project course in which they can apply what they learn in this course, but little good that does us now!

(Idea for later: Why not teach the project course first and follow with course that teaches techniques that depend on the experience?)

Fortunately, most students trust their professors and give them some leeway when learning skills that are beyond their experience to really grok -- especially when the skills and ideas are a known part of the milieu in which they will work.

Software Engineering this semester offers another complication. There is a wide spread in the level of software and programming experience between the most and least experienced students in the course. What appeals to students at one end of the spectrum often won't appeal to students on the other. More fun for the professor...

In closing, I note that many of the specific ideas from Wilson's course and Udell's conception of computational thinking apply to a good Software Engineering course for majors. (Several of Wilson's extensions to the body of CT apply less well, in part because they are part of other courses in the CS curriculum and in part because they go beyond the CT that every person probably needs.) I should not be surprised that these basic ideas apply. Wilson is teaching a form of software engineering aimed at a specific audience of developers, and computational thinking is really just a distillation of what all CS students should be learning throughout their undergraduate careers!


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 28, 2009 7:28 PM

Agile Course Design

A few days ago, I tweeted:

Everything looks worse in black and white.
Including course designs.

It is easy to have fantasies about a new course in the bloom of summer. There are no constraints. I am perfect. My students are perfect. Campus life is perfect. The course is... perfect.

It's important to stop dreaming. But I have never been good at mapping out a new course in its entirety. Beforehand, I'm still learning what I want to say and how to say it. I still need to learn if how I want to say what I think I need to say will work. The best way for me to go is to start teaching.

Before the course starts, I run my own version of the Planning Game. This helps me to develop a high-level picture of the topics I think are essential to cover and the activities I think are essential for students to do. The result is something akin to a System Metaphor and a set of user stories. I prioritize the topics and select the topics that should come at the beginning of the course. This is akin to Release Planning and prepares me for my first release, which is a one- or two-session unit.

Then I implement. Teaching the first week grounds all that thinking I've been doing in reality. The act of meeting the first session, seeing the students, and interacting with them helps me to commit to certain courses of action and gives me a basis for making other decisions as we go along. Initial feedback to a simple survey helps me to prioritize topics.

With a new prep, especially a course outside my usual teaching range, agile course development is even more important for me. Feedback and adaptation are about the only way I can feel confident that the course has a chance to succeed.

While at Agile 2009, I think, Brian Marick tweeted:

1/2: Agile often requires greybeards to admit how much snotty-nosed 20-year-olds have to teach them.

This is true of the agile community, and so it is true for me to the extent that I engage in agile. It is also true of teaching at a university, so it is true for me on another dimension, too. Finding the boundary between where the professor needs to lead and where the students need to lead is the key.

One bit of good news from my first week in class. I have reason to believe that this group of students is willing to participate in the daily business of class and, I suspect, in leading when they can and should. That almost always leads to a better course for them, and a better course -- and experience -- for me.

Thinking about teaching software engineering is going to be interesting. I love to make software, and I love to think about it. Now that is more a part of my day job. There is an extra trade-off to make: spending time explicitly preparing for class versus spending time blogging about what I'm thinking along the way!


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

August 21, 2009 3:12 PM

Meaning, Motivation, and Learning

In my previous entry, I talked about how hard teaching is. When you consider everything -- from human psychology to human communication, from differences in student background to differences in student temperament, from home life to campus life, oh yeah, and course content -- students and teachers face a big challenge. That's why I liked the pair of Lost in Syntax articles so much. They remind teachers about all of the other things students face. Even people who teach face these challenges when they become students.

Mark Guzdial also blogged after reading "Lost in Syntax". He focused on the source of a particular disconnect that students and teachers may experience in the classroom: what the course content means. Some students want to build things, and for them meaning comes down to a set of practical skills and a set of engineering practices. Some students want to explore, and for them meaning comes down to a different set of practical skills and a way of thinking about questions and experiments. When the instructor of a course adopts one of these stances and uses its language, she connects well with one group of students and often leaves the other group confused and disoriented.

Some people, even a certain kind of university prof, think all this talk of meaning is falderol. You take a course, you learn the content, and you move on. That attitude ignores reality. Even when all the things Wicked Teacher talks about go right, people learn best when they are motivated. And people are most motivated when they know why they are learning what they are learning, and when that "why?" fits the meaning they seek.

The best teachers start (or try to) with what a course means for students and build the learning experience from there. They also start with what the students already know, or think they know, about the course material. By using the students as the baseline for the course, instructors are more able to motivate students and more likely to engage them in a way that creates real learning. When we use the abstractions of our discipline or our own interests as the baseline, we may well teach a course that excites us, but it will often fail to teach students much of anything.

Good teachers start with what a course means for students, but they don't stop there. The teacher's job is to lead students forward, to help students see bigger ideas than they can initially conceive. Even at the level of practical skills and tools, we need to draw students forward to skills and tools they don't yet appreciate when they first enter the class. Starting with what students know and care about is the way that we build the foundation -- and the trust -- they need to learn beyond their imagination. That's what education is.

This is a tall order for any teacher and why I called my previous entry "Teaching is Hard". It is a whole lot easier to build a course atop the discipline's abstractions or one's own interests than to try to understand the students' minds. Even when we try to start with the students, it is not an easy task.

Some people say that starting anywhere but the intellectual foundation for the discipline is pandering to the students. But starting with student interests and motivations is not pandering at all -- unless we stop there.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

August 19, 2009 3:51 PM

Teaching is Hard

Earlier this summer, Wicked Teacher of the West attended a week-long professional development workshop and wrote a two-part reflection on her experience, called "Lost in syntax" [ part 1 | part 2 ]. I have written occasionally here about how useful it is for me as a teacher to be in the learner's shoes every so often in such areas as piano, running, and starting over. Those experiences are analogs, but they require a mapping onto learning computer science. I found Wicked Teacher's reflections especially helpful because she was in the classroom learning CS. I recognize a lot of the symptoms she describes from my students' behaviors in the past. She even captures her bad feelings in a series of object lessons for us teachers.

Great. All I need to do is design my course so that it does the right things (such as explaining the big picture) and avoids the obvious pitfalls (such as giving compliments that can be interpreted as indicators of inability), and then watch for signs of problems that are outside my control (such as trouble at home or an unwillingness to ask questions). Simple enough, right?

Right. Much of this is easy in the abstract, but when you get into the rush of the semester, with other courses and other duties tugging at you, a room full of students all in different places, and lots of material to cover in too little time -- well, it suddenly feels a lot harder. Last year, I found myself in the middle of a tough semester and didn't recognize quickly enough that students were not asking questions when they didn't understand. When I am slow to recognize a situation, I am slow to respond. When I am slow to respond, I sometimes miss opportunities to address the issue. Sometimes, I run out of time.

It's a wonder that most teachers don't have the same persistent sense of dread that is expressed in these articles' subtitles: "OMG I'm going to cry in front of all these people".

Still, reflecting in this way -- and reading other peoples' reflections from similar experiences -- is immensely valuable. Simply keeping these lessons in mind over the course of a semester, especially when particular troubles arise, is a good first step toward addressing them in a meaningful way. A little empathy and a little conscious course design can go a long way. The rest is largely a matter of staying alert. I cannot fix every problem or even recognize them all, but paying attention and getting feedback frequently can help me do as well as I can.

I think it is valuable for students to read essays such as Wicked Teacher's. Ultimately, learning is in their hands, and if they can recognize the things that they do which interfere with their own learning, they will be better off. If I can give only one piece of advice from these two reflections, it would be: Ask questions. Ask plenty of questions. Ask them now. Your instructor almost surely wants you to ask. Both of you will be better off if you do.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

August 10, 2009 2:57 PM

Scratching Itches and the Peril of Chapter 1

In my entry about the advice I received on my advice to students interested in web development, I quoted an alum who dared suggest that undergrads study one language in depth for four years. This seems extreme to me, but it is often useful to examine the axioms on which we base our curricula. Wade's idea fits well with the idea that we would like to teach students enough that they can scratch an itch. I think that this approach overvalues what we can learn from a single language, and any language that takes four years to master is probably the wrong language for undergrads. Even if we accept that our goal is to help students scratch their own itches in the service of motivation, can't we do better? I think so.

My goal here is not to add to the seemingly endless cacophony about which first language(s) we should use in CS1. But I was impressed by a comment Terry Chay made on the entry of his to which I linked in my "itch to scratch" post, about textbooks. Although he is a proponent of PHP for web development, he argues that it isn't suitable for people learning their first programming language. One of the reasons is that the best books about PHP assume too much background.

Consider Welling and Thomson's PHP and MySQL Web Development. It is about web programming and so assumes that students are familiar with HTML, starting and restarting Apache, activating PHP, installing and setting up MySQL, and all the editor- and OS-specific details of writing, storing, and running scripts. That's a lot of hidden prerequisites, and it is one of the challenges we face anytime we try to embed CS1 in a context. Context is context, and students have to have it before they move on.

However, Chay claims that, after its first chapter, PHP and MySQL Web Development "offers more 'immersion' gratification (at the least cost) than any other language's textbook." But it's too late. The first chapter is what beginners see first and what they must make sense of before moving on. Unfortunately,

It's that first chapter that does the first timer in.

Many people who don't teach CS1 and who have never tried writing for that audience underestimate just how important this is, and just how difficult an obstacle it is to overcome. I occasionally see really good books about programming written to solve problems in a specific domain or context that might work well for beginners -- but only as a second course, after someone has taught a course that gets them through the introduction.

Right now I have examination copies of three books sitting on my desk or desktop that are relevant to this discussion.

  • A Web-Based Introduction to Programming, by Mike O'Kane. I requested this when it was the first textbook I'd seen that aimed to use PHP in context to teach a CS1 course. Often, books written specifically for CS1 lose the appeal and flavor of books written for motivated practitioners with an itch to scratch. Can this book be as good as the book Chay recommends?
  • Using Google App Engine: Building Web Applications, by Charles Severance. I've seen this book criticized for covering too many low-level details, but it aims to be a self-contained introduction to programming. The only way to do that is to cover all the knowledge usually assumed by Chapter 1. The combination of web applications and Google seems like a potential winner.
  • Practical Programming: An Introduction to Computer Science Using Python, by Campbell, Gries, Montojo, and Wilson. This book was motivated at least in part by Greg Wilson's efforts to teach programming to scientists. Unlike the previous two, Practical Programming uses several themes to introduce the ideas of CS and the programming tools needed to play with them. Will the same ideas work as well when brought to the CS1 level, outside of a single unifying context?

I'm disappointed that I haven't taken the time to study these in detail. I am familiar with drafts of Practical Programming, after having reviewed them in the book's early stages, and know it to be a well-written book. But that's not enough to say whether it works as well as I hope. Severance's book also promises big things, but I need to dig deeper to see how well it works. O'Kane's looks like the most traditional CS1 book of the bunch, with a twist: if-statements don't arrive until Chapter 7, and loops until Chapter 9.

Gotta make time! But then there is my own decidedly non-freshman course to think about. Fifteen days and counting...


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

August 07, 2009 2:18 PM

A Loosely-Connected Friday Miscellany

An Addition to My News Aggregator

Thanks to John Cook, I came across the blog of Dan Meyer, a high school math teacher. Cook pointed to an entry with a video of Meyer speaking pecha kucha-style at OSCON. One of the important messages for teachers conveyed in this five minutes is Be less helpful. Learning happens more often when people think and do than when they follow orders in a well-defined script.

While browsing his archive I came across this personal revelation about the value of the time he was spending on his class outside of the business day:

I realize now that the return on that investment of thirty minutes of my personal time isn't the promise of more personal time later. ... Rather it's the promise of easier and more satisfying work time now.

Time saved later is a bonus. If you depend on that return, you will often be disappointed, and that feeds the emotional grind that is teaching. Kinda like running in the middle. I think it also applies more than we first realize to reuse and development speed in software.

Learning and Doing

One of the underlying themes in Meyer's writing seems to be the same idea in this line from Gerd Binnig, which I found at Physics Quote of Day:

Doing physics is much more enjoyable than just learning it. Maybe 'doing it' is the right way of learning ....

Programming can be a lot more fun than learning to program, at least the way we often try to teach it. I'm glad that so many people are working on ways to teach it better. In one sense, the path to better seems clear.

Knowing and Doing

One of the reasons I named my blog "Knowing and Doing" was that I wanted to explore the connection between learning, knowing, and doing. Having committed to that name so many years ago, I decided to stake its claim at Posterous, which I learned about via Jake Good. Given some technical issues with using NanoBlogger, at least an old version of it, I've again been giving some thought to upgrading or changing platforms. Like Jake, I'm always tempted to roll my own, but...

I don't know if I'll do much or anything more with Knowing and Doing at Posterous, but it's there if I decide that it looks promising.

A Poignant Convergence

Finally, a little levity laced with truth. Several people have written to say they liked the name of my recent entry, Sometimes, Students Have an Itch to Scratch. On a whim, I typed it into Translation Party, which alternately translates a phrase from English into Japanese and back until it reaches equilibrium. In only six steps, my catchphrase settles onto:

Sometimes I fear for the students.

Knowing how few students will try to scratch their own itches with their new-found power as a programmer, and how few of them will be given a chance to do so in their courses on the way to learning something valuable, I chuckled. Then I took a few moments to mourn.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Software Development, Teaching and Learning

August 05, 2009 5:17 AM

More Advice on my Advice: Confidence and Commitment

In my previous entry, I reported on some of the many helpful suggestions I received for advice to prospective students interested in web development. The original entry started with a teaser:

His indecision about careers and school is worthy of its own post; I've seen it in so many other 20- and 30-somethings who are unhappy with their first careers and not sure of where to go next.

Many advisees, traditional and non-traditional students alike, seem to want me -- or anyone, for that matter -- to tell them what to do. Which major should I choose? Which classes should I take? This can be a healthy sort of questioning. One of the roles played by university professors is that of academic advisor, in which we help students explore and develop their interests, refine their goals, and choose a path that will help them. Most of us really like this part of our jobs, and it tends to be underutilized by students.

These same questions can also indicate an unhealthy need for answers. Some students seem to want to surrender the power of choice, and the responsibility that comes with wrong choices. We do the world a huge disservice if our schools or culture create a significant number of young people so unable to control their own destiny, unwilling even to try.

In retrospect, I probably should not write a whole post on this topic. I have no expertise or special insight. Maybe what I report here is not a significant problem, merely the result of a sampling bias or a memory bias. But it feels like a pattern, and sometimes it concerns me.

The real reason I decided not to write much more on this was a response sent to me by a computer science alumnus of ours, David Schmüdde. David is not a professional computer scientist; he is a filmmaker, composer, and teacher, not to mention a blogger worth following. (I mentioned his senior project in a long-ago blog entry.) His e-mail message did not deal with the technical details of my request for advice. It was a personal essay on his experiences learning to program at my university, on studying with me, and on now teaching college students. He reminded me that interest and aptitude are not enough. Students also need confidence that they can succeed and the will to commit to the present moment. All the aptitude in the world can be diluted to nothingness by a lack of confidence or a lack of commitment.

What role does the professor play in this? They can inspire trust. As David wrote, it is really hard for people to hand over two or four years of their lives to a university, even to a program of study. They need to be able to trust that what they are doing is worthwhile and has a reasonable chance of leading to happiness or some other form of success.

As much as we like to build up our departments and universities in the eyes of the world, we must remember that people do not trust schools. Not really. They trust people. Sometimes the people they trust are parents or friends or teachers from their high schools. But when committing to a course of study in college, often the people they need to trust are the faculty. The advisors they see at summer orientation. The professors they meet when they seek out more advice. The instructors they see in the classroom.

David hoped that his message would not come across as if he were lecturing me, because surely much of what he was saying must have occurred to me before. Sure, but as I write here on occasion, I often need to be reminded. In this case, I needed to be reminded that what we professors do is not just technical. Perhaps it's not even mostly technical. It's about helping people to grow. Every once in a while, it is good for me to have someone tell me to step back and look at what matters. Thanks to David for taking the time to write me a letter doing just that.

So perhaps the best thing I can do for students who seek so much direction is to recognize their lack of confidence as natural to the human condition, work to build their trust, and then try to assure them that their efforts will bear fruit -- maybe not the fruit they expect when they start, but fruit worthy of the effort. The key is to commit to a course of action and invest one's mind in it.

What a good time to be having this conversation, with a new academic year on its way!


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

August 04, 2009 1:45 PM

Advice on my Advice to a Prospective Web Developer

Thanks to everyone who responded to my call for advice on what advice I should give to a student interested in studying at the university with a goal of becoming a web developer. People interpreted the article in lots of different ways and so offered suggestions from many different angles.

In general, most confirmed the gist of my advice. Learn a broad set of technical skills from the spectrum of computer science, because that prepares one to contribute to the biggest part of the web space, or study design, because that's the part the techies tend to do poorly. A couple of readers filled me in on the many different kinds of web development programs being offered by community colleges and technical institutes. We at the university could never compete in this market, at least not as a university.

Mike Holmes wrote a bit about the confusion people have about computer science, with a tip of the hat to Douglas Adams. This confusion does play a role in prospective students' indecision about pursuing CS in college. People go through phases where they think of the computer as replacing an existing technology or medium: calculator, typewriter and printing press, sketchpad, stereo, television. Most of us in computer science seem to do one of two things: latch onto the current craze, or stand aloof from the latest trend and stick with theory. The former underestimates what computing can be and do, while the latter is so general that we appear not to care about what people want or need. It is tough to balance these forces.

In some twittering around my request, Wade Arnold tweeted about the technical side of the issue:

@wallingf Learn Java for 4 years to really know one language well. Then they will pick up php, ruby, or python for domain specific speed

The claim is that by learning a single language really well, a person really learns how to program. After that, she can learn other languages and fill in the gaps, both language-wise and domain-wise. This advice runs counter to what many, many people say, myself included: students should learn lots of different languages and programming styles in order to really learn how to program. I think Wade agrees with that over the long term of a programmer's career. What's unusual in his advice is the idea that a student could or should spend all four years of undergrad study mastering one language before branching out.

A lot of CS profs will dismiss this idea out of hand; indeed, one of the constant complaints one hears in certain CS ed circles is that too many schools have "gone Java" to the exclusion of all other languages and to the lasting detriment of their students. My department's curriculum has, since before I arrived, required students to study a single language for their entire first year, in an effort to help students learn one language well enough that they learn how to program before moving on to new languages and styles. When that language was, say, Pascal, students could pretty well learn the whole language and get a lot of practice using it. C is simple enough for that purpose, I suppose, but C++, Ada, and Java aren't. If we want students to master those languages at a comparable level, we might well need four years. I think that says more about the languages we use than about students learning enough programming in a year to be ready to generalize and fill in gaps with new languages and styles.

This entry has gotten longer than I expected, yet I have more to say. I'll write more in the days to come.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 03, 2009 8:51 PM

Sometimes, Students Have an Itch to Scratch

Mark Guzdial recently wrote:

What's interesting is how the contextualized approach [to teaching intro CS] impacted future student choice. Did we convince students that computing is interesting? I don't think so. Instead, we convinced students that computing was relevant to the things that they already had value for, that it had utility for them. Thus, we had students who were willing to take "More Media Computation" classes (as long as we kept explaining the utility), but not "More Computer Science" classes.

This came back to mind while I was reading Terry Chay's 5 million, which was recommended to me by alumnus Tony Bibbs in response to my recent request for assistance. While discussing how to recommend what language programmers should learn first, Chay wrote something in the same vein. I have cleaned up what I assume to be a typographical glitch in what he posted:

You know you can learn it in a classroom, but immersion is a much faster way to learn.

The best way to learn to program is to have an itch that needs scratching.

Together, these passages brought to mind advice that Alistair Cockburn once gave for introducing agile software development ideas into an organization. I recall his advice as this: Don't go into an organization talking about your solutions. Ask them about the problems they are having producing software. What causes them pain? Then look for a change they could make that would reduce or eliminate the pain. Oftentimes, an agile practice will be the ticket, and this will serve as an opportunity to help them do something that helps them, not something that merely pulls a play from the playbook you are trying to sell.

I once ran across a quote on a blog at JavaRanch that seems not to exist anymore, which talked about the change in mindset that should accompany adopting Alistair's advice:

Changing other people in ways that I deem appropriate, that's hard. Asking people how they want to change, and how I can help them change, that's easy. Why don't I do more of the latter?

Those of us who teach students to program and who hope to attract creative and interested minds to CS cannot rely just on scratching the itches that students have, but that does seem like a useful prong in a multi-pronged effort. As Mark points out, many students interested in programming within a context are really interested in that context, not in programming or CS more generally. That's okay. Teaching a broad set of people how to do media computation is valuable on its own. But there are students like the ones Terry Chay describes who will immerse themselves in programming to scratch their own itches and then find they want to go deeper or broader than the one context.

Even with all the thinking out loud I do here, I am not sure yet which students will be the ones who go all the way with CS or how we can begin to identify them. Perhaps the best thing we can do is to draw them in with their own interests and see what happens. Teaching a generic, CS-centric intro sequence is not the best way to reach all students, even the ones who come in thinking they want to do CS. Empowering students to solve problems that matter to them seems like a promising way for us to approach the issue.

One reader commented on my CS/basketball fantasy that a CS1 course built around an obsession with sports would be a frivolous waste of time. That is probably true, but I have seen a fair number of students over the years in our CS1 courses and in Basic programming courses who invested significant numbers of hours into writing programs related to football, baseball, and basketball. I'm glad that those students engaged themselves with a programming language and set out to solve problems they cared about. If I could engage such students with my assignments, that would be an excellent use of our time in class, not a frivolous waste. I may not want to build an entire course around a particular student's interest in ranking NFL teams, but I am always looking for ways to incorporate student interests into what we need to do anyway.

Among other things, teachers need to keep in mind that students have itches, too. It never hurts to ask them every once in a while what those itches are.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

July 28, 2009 12:40 PM

CS in Everything: On the Hardwood

The economics blog Marginal Revolution has an occasional series of posts called "Markets in Everything", in which the writers report examples of markets at work in various aspects of everyday life. I've considered doing something similar here with computing, as a way to document some concrete examples of computational thinking and -- gasp! -- computer programs playing a role in how we live, work, and play. Perhaps this will be a start.

Courtesy of Wicked Teacher of the West, I came across this story about NBA player Shane Battier, who stands out in an unusual way: by not standing out with his stats. A parallel theme of the story is how the NBA's Houston Rockets are using data and computer analysis in an effort to maximize their chances of victory. The connection to Battier is that the traditional statistics we associate with basketball -- points, rebounds, assists, blocked shots, and the like -- do not reflect his value. The Rockets think that Battier contributes far more to their chance of winning than his stat line shows.

The Rockets collect more detailed data about players and game situations, and Battier is able to use it to maximize his value. He has developed great instincts for the game, but he is an empiricist at heart:

The numbers either refute my thinking or support my thinking, and when there's any question, I trust the numbers. The numbers don't lie.

For an Indiana boy like myself, nothing could be more exciting than knowing that the Houston Rockets employ a head of basketball analytics. This sort of data analysis has long been popular among geeks who follow baseball, a game of discrete events in which the work of Bill James and like-minded statistician-fans of the American Pastime finds a natural home. I grew up a huge baseball fan and, like all boys my age, lived and died on the stats of my favorite players. But Indiana is basketball country, and basketball is my first and truest love. Combining hoops with computer science -- could there be a better job? There is at least one guy living the dream, in Houston.

I have written about the importance of solving real problems in CS courses, and many people are working to redefine introductory CS to put the concepts and skills we teach into context. Common themes include bioinformatics, economics, and media computation. Basketball may not be as important as sequencing the human genome, but it is real and it matters to enough people to support a major entertainment industry. If I were willing to satisfy my own guilty pleasures, I would design a CS 1 course around Hoosier hysteria. Even if I don't, it's comforting to know that some people are beginning to use computer science to understand the game better.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Teaching and Learning

July 20, 2009 6:29 PM

Talking and Doing

I often find myself at meetings with administrator types, where I am the only person who has spent much time in a classroom teaching university students, at least recently. When talk turns to teaching, I sometimes feel like a pregnant woman at an ob-gyn convention. They may be part of a university and may even have taught a workshop or specialty course, but they don't usually know what it's like to design and teach several courses at a time, over many semesters in a row. That doesn't always stop them from having deeply-held and strong opinions about how to teach. Having been a student once isn't enough to know what it's like to teach.

I have had similar experiences as a university professor among professional software developers. All have been students and have learned something about teaching and learning from their time in school. But their lessons are usually biased by their own experiences. I was a student once, too, but that prepared me only a bit for being a teacher... There are so many different kinds of people in a classroom, so many different kinds of student. Not very many are just like me. (Thank goodness!)

Some software developers have taught. Many give half- or full-day tutorials at conferences. Others teach week-long courses on specific topics. A few have even taught a university course. Even still, I often don't feel like we are talking about the same animal when we talk about teaching. Teaching a training course for professional developers is not the same as teaching a university course to freshmen or even seniors. Teaching an upper-division or graduate seminar bears little resemblance to an elective course for juniors, let alone a class of non-majors. Even teaching such a course as an adjunct can't deliver quite the same experience as teaching a full load of courses across the spectrum of our discipline, day in and day out for a few years in a row. The principle of sustainable pace pops up in a new context.

As with administrators, lack of direct experience doesn't always stop developers from having deeply-held and strong opinions about what we instructors should be doing in the classroom. It creates an interesting dialectic.

That said, I try to learn whatever I can from the developers with whom I'm able to discuss teaching, courses, content, and curricula. One of the reasons I so enjoy PLoP, ChiliPLoP, and OOPSLA is having an opportunity to meet reflective individuals who have thought deeply about their own experiences as students and who are willing to offer advice on how I can do better. But I do try to step back from their advice and put it into the context of what it's like to teach in a real university, not one we've invented. Some ideas sound marvelous in the abstract but die a grisly death on the banks of daily university life. Revolution is easy when the battlefield is in our heads.

When it comes to working with software developers, I am more concerned that they will feel like the pregnant woman when they discuss their area of expertise with me and my university colleagues. One of my goals is not to be "that guy" when talking about software development with developers. I hope and prefer to speak out of personal and professional experience, rather than a theory I read in a book or a blog, or something another professor told me.

What we teach needs to have some connection to what developers do and what our students will need to do when they graduate. There is a lot more to a CS degree than just cutting code, but when we do talk about building software, we should be as accurate and as useful as we can be. This makes teaching a course like software engineering a bigger challenge for most CS profs than the more theoretical or foundational material such as algorithms or even programming languages.

One prescription is the same as above: I listen and try to learn whatever I can from developers when we talk about building software. Conferences like PLoP, ChiliPLoP, and OOPSLA give me opportunities I would not have otherwise, and I listen to alumni tell me about what they do -- and how -- whenever I can. I still have to sift what I learn into the context of the university, but it makes for great raw material.

Another prescription is to write code and use the tools and practices the people in industry use. Staying on top of that fast-moving game gets harder all the time. The world of software is alive and changing. We professors have to keep at it. Mostly, it's a matter of us staying alive, too.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

July 14, 2009 1:06 PM

Is He Talking About News, or Classroom Content?

Seth Godin says:

People will not pay for by-the-book rewrites of news that belongs to all of us. People will not pay for yesterday's news, driven to our house, delivered a day late, static, without connection or comments or relevance. Why should we?

Universities may not be subject to the same threats as newspapers, due in some measure to

  • their ability to aggregate intellectual capital and research capacity,
  • their privileged status in so many disciplines as the granters of required credentials, and
  • frankly, the lack of maturity, initiative, and discipline of their primary clientele.

But Godin's quote ought to cause a few university professors considerable uneasiness. In the many years since I began attending college as an undergrad, I have seen courses at every level and at every stop that fall under the terms of this rebuke.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

July 13, 2009 3:23 PM

Patterns as Compression Technology

In trying to understand the role patterns and pattern languages play both in developing software and in learning to develop software, I often look for different angles from which to view patterns. I've written about the idea of patterns as descriptive grammar and the idea of patterns as a source of freedom in design. Both still seem useful to me as perspectives on patterns, and the latter is among the most-read articles on my blog. The notion of patterns-as-grammar also relates closely to one of the most commonly-cited roles that patterns play for the developer or learner, that of vocabulary for describing the meaningful components of a program.

This weekend, I read Brian Hayes's instructive article on compressive sensing, The Best Bits. Hayes talks about how it is becoming possible to imagine that digital cameras and audio recorders could record compressed streams -- say, a 10-megapixel camera storing a 3MB photo directly rather than recording 30MB and then compressing it after the fact. The technique he calls compressive sensing is a beautiful application of some straightforward mathematics and a touch of algorithmic thinking. I highly recommend it.

While reading this article, though, the idea of patterns as vocabulary came to mind in a new way, triggered initially by this passage:

... every kind of signal that people find meaningful has a sparse representation in some domain. This is really just another way of saying that a meaningful signal must have some structure or regularity; it's not a mere jumble of random bits.

[image: an optical illusion -- can you see it?]

Programs are meaningful signals and have structure and regularity beyond the jumble of seemingly random characters at the level of the programming language. The chasm between random language stuff and high-level structure is most obvious when working with beginners. They have to learn that structure can exist and that there are tools for creating it. But I think developers face this chasm all the time, too, whenever they dive into a new language, a new library, or a new framework. Where is the structure? Knowing it is there and seeing it are two different matters.

The idea of a sparse representation is fundamental to compression. We have to find the domain in which a signal, whether image or sound, can be represented in as few bits as possible while losing little or even none of the signal's information. A pattern language of programs does the same thing for a family of programs. It operates at a level (in Hayes' terms, in a domain) at which the signal of the program can be represented sparsely. By describing Java's I/O stream library as a set of decorators on a set of concrete streams, we convey a huge amount of information in very few words. That's compression. If we say nothing else, we have a lossy compression, in that we won't be able to reconstruct the library accurately from the sparse representation. But if we use more patterns to describe the library (such as Abstract Class and "Throw, Don't Catch"), we get a representation that pretty accurately captures the structure of the library, if not the bit-by-bit code that implements it.
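The "decorators on concrete streams" description can be made concrete with a tiny sketch of the Decorator pattern -- in Python rather than Java, with invented class names, but composing the same way Java's stream wrappers do:

```python
class StringStream:
    """A concrete stream over an in-memory string."""
    def __init__(self, text):
        self.text = text

    def read(self):
        return self.text


class UpperCaseStream:
    """A decorator: wraps any stream and transforms what it reads."""
    def __init__(self, stream):
        self.stream = stream

    def read(self):
        return self.stream.read().upper()


class StarredStream:
    """Another decorator; decorators stack in any order."""
    def __init__(self, stream):
        self.stream = stream

    def read(self):
        return "*" + self.stream.read() + "*"


# "A set of decorators on a set of concrete streams":
stream = StarredStream(UpperCaseStream(StringStream("patterns")))
print(stream.read())  # *PATTERNS*
```

Those few words of pattern vocabulary -- "decorators on concrete streams" -- are the sparse representation; the classes above are one reconstruction of it.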

This struck me as a useful way to think about what patterns do for us. If you've seen other descriptions of patterns as a means for compression, I'd love to hear from you.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

July 09, 2009 7:59 AM

Agile Moments: TDD and the Affordances of Programming

I recently ran across a link to a Dan Bricklin article from a few years ago, Why Johnny can't program. (I love the web!) Bricklin discusses some of the practical reasons why more people don't program. As he points out, it's not so much that people can't program as that they won't or choose not to program. Why? Because the task of writing code in a textual language isn't fun for everyone. What Bricklin calls "typed statement" programming fails all of Don Norman's principles of good design: visibility, good conceptual model, good mappings, and full and continuous feedback. Other programming models do better on these counts -- spreadsheets, rule-based expert system shells, WYSIWYG editors in which users generate HTML through direct manipulation -- and reach a wider audience. Martin Fowler recently talked about this style, calling it illustrative programming.

I had an agile moment as I read this paragraph from Bricklin about why debugging is hard:

One of the problems with "typed-statement" systems is that even though each statement has an effect, you only see the final result. It is often unclear which statement (or interaction of statements) caused a particular problem. With a "Forms" or direct manipulation system, the granularity is often such that each input change has a corresponding result change.

When we write unit tests for our code at about the same time we write the code, we improve our programming experience by creating intermediate results that help us to debug. But there's more. Writing tests helps us to construct a conceptual model of the program we are writing. They make visible the intended state of the program, and help us to map objects and functions in the code onto the behavior of the program at run-time. When we take small steps and run our tests frequently, they give us full and continuous feedback about the state of our program. Best of all, this understanding is recorded in the tests, which are themselves code!

In some ways, test-driven programming may improve on styles where we don't type statements. By writing tests, we participate actively in creating the model of our program. We are not simply passive users of someone else's constraint engine or inference engine. We construct our understanding as we construct our program.

Then again, some people don't need or want to write the reasoning component, so we need to provide access to tools they can use to be productive. Spreadsheets did that for regular folks. Scripting languages do it for programmers. Some people complain about scripting languages because they lack type safety, hide details, and are too slow. But the fact is that programmers are people, too, and they want tools that put them into a flow. They want languages that hit them in their sweet spot.

Bricklin concludes:

From all this you can see that the way a system requires an author to enter instructions into the computer affects the likelihood of acceptance by regular people. The more constrained the instructions the better. The more the instructions are clearly tied to various results the better. The more obvious and quickly the results may be examined the better.

TDD does all this, and more. It makes professional programmers more productive by providing better cognitive support for mental tasks we have to perform anyway. If we use TDD properly as we teach people to program, perhaps it can help us hit the sweet spot for more people, even in a "typed statement" environment.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

July 08, 2009 4:38 PM

Miscellaneous Notes on Using Computers

Good Question

Last week, I gave a talk on careers in computing to thirty or so high school kids in a math and science program on campus this summer. Because it's hard to make sense out of computing careers if one doesn't even know what computer science is, I started off with half an hour or so talking about CS. Part of that was distinguishing between discovering things, creating things, and studying things.

At the end, we had time for the usual question-and-answer session. The first question came from a young man who had looked quite uninterested throughout the talk: What is the most important thing you have discovered or invented?

Who says kids don't pay attention?

The Age of Fire

Yesterday, I took my laptop with me to do advising at freshmen orientation. It allows me to grab course enrollment data off the university web site (processed, but raw enough), rather than look at the print-outs the advising folks provide every morning. With that data and little more than grep and sorting on columns, I can find courses for my students much more easily than thumbing back and forth in the print-outs. And the results are certainly of a higher quality than my thumbing would give.
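The grep-and-sort workflow translates directly into a few lines of code. A sketch in Python, with an invented line format for the enrollment data:

```python
# Filter enrollment lines by department prefix (like grep), then
# sort by open seats. Invented format: "DEPT NUM SECTION CAP ENROLLED"
def open_sections(lines, prefix):
    rows = [line.split() for line in lines if line.startswith(prefix)]
    # open seats = capacity - enrolled; most open first
    rows.sort(key=lambda r: int(r[3]) - int(r[4]), reverse=True)
    return rows


data = [
    "CS 1510 01 30 30",
    "CS 1510 02 30 12",
    "MATH 1420 01 35 20",
]
for row in open_sections(data, "CS"):
    print(" ".join(row))
```

The equivalent at a shell prompt is a one-liner with grep and sort, which is presumably all the laptop was doing.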

The looks on the other advisors' faces at our table made me think of how a group of prehistoric men must have looked when one of their compatriots struck two rocks together to make fire.

Computer Science's Dirty Little Secret

An alumnus sent me a link to an MSNBC article about Kodu, a framework for building Xbox-like games aimed at nine-year-olds.

I like how Matthew MacLaurin, lead developer, thinks:

MacLaurin ... says he hopes it doesn't just teach programming, but teaches us to appreciate programming as a modern art form.

(Emphasis added.)

The piece talks about "the growing importance of user-generated content in gaming" and how most people assume "that all of the creativity in video games takes place in the graphics and art side of the gaming studios, while the programming gets done by a bunch of math guys toiling over dry code." Author Winda Benedetti writes (emphasis added):

I had asked [MacLaurin] if [Kodu] was like putting chocolate on broccoli -- a means of tricking kids into thinking the complex world of programming was actually fun.

But he insists that's not the case at all.

"It's teaching them that it was chocolate the whole time, it just looked like a piece of broccoli," he explains. "We're really saying that programming is the most fun part of creating games because of the way it surprises you. You do something really simple, and you get something really complex and cool coming back at you."

Programming isn't our dirty little secret. It is a shining achievement.

Afterthoughts

I am still amazed when lay people respond to me using a computer to solve daily problems, as if I have brought a computation machine from the future. Shocking! Yes, I actually use it to compute. The fact that people are surprised even when a computer scientist uses it that way should help us keep in mind just how little people understand what computer science is and what we can do with it.

Have an answer to the question, "What is the most important thing you have made?" ready at hand, and suitable for different audiences. When someone asks, that is the moment when you might be able to change a mind.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

July 07, 2009 4:39 PM

What Remains Is What Will Matter

Quoted by Harry Lewis in Excellence Without a Soul:

A liberal education is what remains after you have forgotten the facts that were first learned while becoming educated.
-- Jorge Dominguez

I think this applies not only to a liberal education broadly construed but also to specialized areas of study -- and even to a "narrow" technical field such as computer science. What is left five or ten years from now will be the education our students have received. Students may not remember the intricacies of writing an equals method in Java. I won't mind one bit. What will they remember? This is the true test of the courses we create and of the curricula we design. Let's set our sights high enough to hit the target we seek.

Lately I've been trying to swear off scare quotes and other writing affectations. I use them above with sincere intention. Computer science is not as narrow as most people think. Students usually think it is, and so do many of their parents. I hope that what we teach and do alleviates this misconception. Sadly, too often those of us who study computer science -- and teach it -- think of the discipline too narrowly. We may not preach it that way, but we often practice it so.

With good courses, a good curriculum, and a little luck, students may even remember some of their CS education. I enjoyed reading how people like Tim O'Reilly have been formed by elements of their classical education. How are we forming our students in the spirit of a classical CS education? If any discipline needs to teach enduring truths, it is ours! The details disappear with every new chip, every new OS, every new software trend.

What is most likely to remain from our stints in school are habits. Sure, CS students must take with them some facts and truths: trade-offs matter; in some situations, the constant dominates the polynomial; all useful programming languages have primitives, means for combining them, and means for abstracting away detail. Yes, facts matter, but our nature is tied to our habits. I said last time that publishing the data I collect and use would be a good habit because habits direct how we think. I am a pragmatist in the strong sense that knowledge is habit of thought. Habit of action creates habit of thought. Knowledge is not the only value born in habit. As Aristotle taught us,

Excellence is an art won by training and habituation. We do not act rightly because we have virtue or excellence, but rather we have those because we have acted rightly.

Even an old CS student can remember some of his liberal arts education...

Finally, we will do well to remember that students learn as much or more from the example we set as from what we say in the classroom, or even in our one-on-one mentoring. All the more reason to create habits of action we don't mind having our students imitate.

~~~~

Note. Someone might read Excellence Without a Soul and think that Harry Lewis is a classicist or a humanities scholar. He is a computer scientist, who just happened to spend eight years as Dean of Harvard College. Dominguez, whom Lewis quotes, is a political science professor at Harvard, but he claims to be paraphrasing Alfred North Whitehead -- a logician and mathematician -- in the snippet above. Those narrow technical guys...

My favorite Lewis book is, in fact, a computer science book, Elements of the Theory of Computation, which I mentioned here a while back. I learned theory of computation from that book -- as well as a lot of basic discrete math, because my undergrad CS program didn't require a discrete course. Often, we learn well enough what we need to learn when we need it. Elements remains one of my favorite CS books ever.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

July 03, 2009 8:31 AM

Thinking About Testing and Software Engineering

I've been buried in a big project on campus for the last few months. Yesterday, we delivered our report to the president. Ah, time to breathe, heading into a holiday weekend! Of course, next week I'll get back to my regular work. Department stuff. Cleaning my desk. And thinking about teaching software engineering this fall.

A bit of side reading found via my Twitter friends has me thinking about testing, and the role it will play in the course. In the old-style software engineering course, testing is a "stage" in the "process", which betrays a waterfall view of the world even when the instructor and textbook say that they encourage iterative development. But testing usually doesn't get much attention in such courses, maybe one chapter that describes the theory of testing and a few of the kinds of testing we need to do.

It seems to me that testing can take a bigger place in the course, if only because it exemplifies the sort of empiricism that we should all engage in as software developers. When we test, we run experiments to gather evidence that our program works as specified. We should adopt a similar mindset about how we build our programs. How do we know that our design is a good one? Or that our team is functioning well? Or that we are investing enough time and energy in writing tests and refactoring our code?

That's one reason I like Joakim Karlsson's post about the principle of locality in code changes. There may be ways that he can improve his analysis, but the most important thing about this post is that he analyzed code at all. He had a question about how code edits work, so he wrote a program to ask subversion repositories for the answer. That's so much better than assuming that his own hypothesis was correct, or that conventional wisdom was.

In regard to the testing process itself, Michael Feathers wrote a piece on "canalizing" design that points out a flaw in how we usually test our code. We write tests that are independent of one another in principle but that our test engines always run in the same order. This accidental sequencing creates an opportunity for programmers to write code that takes advantage of the implicit relationship between tests. But it's not really an advantage at all, because we then have dependencies in our code that we may not be aware of and which should not exist at all. Feathers suggests putting the tests in a set data structure and executing them from there. At least then the code makes explicit that there is no implied order to the tests, which reminds the programmers who modify the code later that they should not depend on the order of test execution.

(I also like this idea for its suggestion that programs can, and perhaps should, be dynamic structures, not dead sequences of text. Using a set of tests also moves us a step closer to making our code work well in a parallel environment. Explicit and implicit sequencing in programs makes it hard to employ the full power of multicore systems, and we need to re-think how we structure our programs if we want to break away from purely sequential machines. The languages guy in me sees some interesting applications of this idea in how we write our compilers.)

Finally, I enjoyed reading Gojko Adzic's description of Keith Braithwaite's "TDD as if you mean it" exercise. Like the programming challenges I have described, it asks developers to take an idea to its extreme to break out of habits and to learn just how the idea feels and what it can give. Using tests to drive the writing of code is more different from what most of us do than we usually realize. This exercise can help you to see just how different -- if you have an exercise leader like Keith to keep you honest.

However, I disagree with something Keith said in response to a comment about the relationship between TDD and functional programming:

I'm firmly convinced that sincere TDD leads one towards a functional style.

TDD will drive you toward the style of whatever language you think in.

There will be functional components to your solution to support the tests, and some good OOP has a functional feel. But in my experience you can end up with very nice objects in an object-oriented program as a result of faithfully-executed TDD.

Another of Braithwaite's comments saved the day, though. He credits Alan Watts for this line that captures his intent in designing exercises like this:

I came not as a salesman but as an entertainer. I want you to enjoy these ideas because I enjoy them.

Love this! He has a scholar's heart.

There is a lot more to testing than unit tests or regression testing. Finding ways to introduce students to the full set of ideas while also giving them a visceral sense of testing in the trenches is a challenge. I have to teach enough to prepare a general audience and also prepare students who will go on to take our follow-up course, Software Testing. That's a course that undergraduates at most schools don't have the opportunity to take, a strong point of our program. But that course can't be an excuse not to do testing well in the software engineering course. It's not a backstop; it's a new ballgame.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

June 26, 2009 4:01 PM

The Why of X

Where did the title of my previous entry come from? Two more quick hits tell a story.

Factoid of the Day

On a walk the other night, my daughter asked why we called variables x. She is reviewing some math this summer in preparation to study algebra this fall. All I could say was, "I don't know."

Before I had a chance to look into the reason, one explanation fell into my lap. I was reading an article called The Shakespeare of Iran, which I ran across in a tweet somewhere. And there was an answer: the great Omar Khayyam.

Omar was the first Persian mathematician to call the unknown factor of an equation (i.e., the x) shiy (meaning thing or something in Arabic). This word was transliterated to Spanish during the Middle Ages as xay, and, from there, it became popular among European mathematicians to call the unknown factor either xay, or more usually by its abbreviated form, x, which is the reason that unknown factors are usually represented by an x.

However, I can't confirm that Khayyam was first. Both Wikipedia and another source also report the Arabic language connection, and the latter mentions Khayyam, but not specifically as the source. That author also notes that "xenos" is the Greek word for "unknown" and so could be the root. However, I haven't found a reference for this use of x that predates Khayyam either. So maybe.

My daughter and I ended up with as much of a history lesson as a mathematical terminology lesson. I like that.

Quote of the Day

Yesterday afternoon, the same daughter was listening in on a conversation between me and a colleague about doing math and science, teaching math and science, and how poorly we do it. After we mentioned K-12 education and how students learn to think of science and math as "hard" and "for the brains", she joined the conversation with:

Don't ask teachers, 'Why?' They don't know, and they act like it's not important.

I was floored.

She is right, of course. Even our elementary school children notice this phenomenon, drawing on their own experiences with teachers who diminish or dismiss the very questions we want our children to ask. Why? is the question that makes science and math what they are.

Maybe the teacher knows the answer and doesn't want to take the time to answer it. Maybe she knows the answer but doesn't know how to answer it in a way that a 4th- or 6th- or 8th-grader can understand. Maybe he really doesn't know the answer -- a condition I fear happens all too often. No matter; the damage is done when the teacher doesn't answer, and the child figures the teacher doesn't know. Science and math are so hard that the teacher doesn't get it either! Better move on to something else. Sigh.

This problem doesn't occur only in elementary school or high school. How often do college professors send the same signal? And how often do college professors not know why?

Sometimes, truth hits me in the face when I least expect it. My daughters keep on teaching me.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

June 25, 2009 9:48 PM

X of the Day

Quick hits, for different values of x, of course, but also different values of "the day" I encountered them. I'm slow, and busier than I'd like.

Tweet of the Day

Courtesy of Glenn Vanderburg:

Poor programmers will move heaven and earth to do the wrong thing. Weak tools can't limit the damage they'll do.

Vanderburg is likely talking about professional programmers. I have experienced this truth when working with students. At first, it surprised me when students learning OOP would contort their code into the strangest configurations not to use the OO techniques they were learning. Why use a class? A fifty- or hundred-line method will do nicely.

Then, students learning functional programming would seek out arcane language features and workarounds found on the Internet to avoid trying out the functional patterns they had used in class. What could have been ten lines of transparent Scheme code in two mutually recursive functions became fifteen or more lines of the most painfully tortured C-style code wrapped in a thin veil of Scheme.
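The mutually recursive shape being avoided is genuinely simple. A minimal sketch in Python rather than Scheme; the even/odd pair is a stock classroom illustration, not the actual assignment:

```python
# Two functions, each defined in terms of the other: is_even defers to
# is_odd on n-1, and vice versa, until n reaches the base case 0.
def is_even(n):
    return True if n == 0 else is_odd(n - 1)

def is_odd(n):
    return False if n == 0 else is_even(n - 1)

print(is_even(10), is_odd(10))   # → True False
```

The whole pattern is the base case plus the handoff between the two functions; the tortured alternative usually reinvents exactly this control flow with loops and mutable flags.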

I've seen this phenomenon in other contexts, too, like when students take an elective course called Agile Software Development and go out of their way to do "the wrong thing". Why bother with those unit tests? We don't really need to try pair programming, do we? Refactor -- what's that?

This feature of programmers and learners has made me think harder about how to help them see the value in just trying the techniques they are supposed to learn. I don't succeed as often as I'd like.

Comic of the Day

Hammock dwellers, unite!

2009-06-23 Wizard of Id on professors

If only. If only. When does summer break start?


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

June 24, 2009 8:13 AM

Brains, Patterns, and Persistence

I like to solve the Celebrity Cipher in my daily paper. Each puzzle is a mixed alphabet substitution cipher on a quote by someone -- a "celebrity", loosely considered -- followed by the speaker's name, sometimes prefixed with a title or short description. Lately I've been challenging myself to solve the puzzle in my head, without writing any letters down, even once I'm sure of them. Crazy, I know, but this makes the easier puzzles more challenging now that I have gotten pretty good at solving them with pen in hand.

(Spoiler alert... If you like to do this puzzle, too, and have not yet solved the June 22 cipher, turn away now. I am about to give the answer away!)

Yesterday I was working on a puzzle, and this was the speaker phrase:

IWHNN TOXFZRXNYHO NXKJHSSA YXOYXEBUHO

I had looked at the quote itself for a couple of minutes and so was operating on an initial hypothesis that YWH was the word the. I stared at the speaker for a while... IWHNN would be IheNN. Double letters ended the first word, and N also began the third word, which was probably the first name. N could be s, or maybe l. s... That would be the first letter of the first name.

And then I saw it, in whole cloth:

Chess grandmaster Savielly Tartakower

Please don't think less of me. I'm not a freak. Really.

a picture of Savielly Tartakower

How very strange. I have no special mental powers. I do have some experience solving these puzzles, of course, but this phrase is unusual both in the prefix phrase and in the obscurity of the speaker. Yes, I once played a lot of chess and did know of Tartakower, a French-Polish player of the early 20th century. But how did I see this answer?

The human brain amazes me almost every day with its ability to find, recognize, and impose patterns on the world. Practice and exposure to lots and lots of data is one of the ways it learns these patterns. That is part of how I am able to solve these ciphers most days -- experience makes patterns appear to me, unbidden by conscious thought. There may be other paths to mastery, but I know of no other reliable substitute for practice.
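Mechanically, of course, there is no magic in the decoding itself: once a letter mapping has been hypothesized, applying it is simple substitution. A short Python sketch, using the mapping that solves this puzzle's speaker phrase:

```python
# The letter mapping recovered for the June 22 speaker phrase:
# each ciphertext letter maps to one plaintext letter.
key = {
    'I': 'C', 'W': 'H', 'H': 'E', 'N': 'S', 'T': 'G', 'O': 'R',
    'X': 'A', 'F': 'N', 'Z': 'D', 'R': 'M', 'Y': 'T', 'K': 'V',
    'J': 'I', 'S': 'L', 'A': 'Y', 'E': 'K', 'B': 'O', 'U': 'W',
}

def decode(ciphertext, key):
    # Characters outside the map (spaces, punctuation) pass through.
    return ''.join(key.get(c, c) for c in ciphertext)

print(decode("IWHNN TOXFZRXNYHO NXKJHSSA YXOYXEBUHO", key))
# → CHESS GRANDMASTER SAVIELLY TARTAKOWER
```

The program makes plain where the real work lives: not in applying the key, which is trivial, but in finding it -- which is what all that pattern-soaked practice trains the brain to do.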

What about the rest of the puzzle? From the letter pairs in the speaker phrase, I was able to reconstruct the quote itself with little effort:

Victory goes to the player who makes the next-to-last mistake.

Ah, an old familiar line. If we follow this quote to its logical conclusion, it offers good advice for much of life. You never know which mistake will be the next-to-last, or the last. Keep playing to win. If you learn from your mistakes, you'll start to make fewer, which increases the probability that your opponent will make the last mistake of the game.

Even in non-adversarial situations, or situations in which there is no obvious single adversary, this is a good mindset to have. People who embrace failure persist. They get better, but perhaps more importantly they simply survive. You have to be in the game when your opportunity comes -- or when your opponent makes the ultimate mistake.

Like so many great lines, Tartakower's is not 100% accurate in all cases. As an accomplished chessplayer, he certainly knew that the best players can lose without ever making an obvious mistake. Some of my favorite games of all time are analyzed in My Sixty Memorable Games, by Bobby Fischer himself. It includes games in which the conquered player never made the move that lost. Instead, the loser accreted small disadvantages, or drifted off theme, and suddenly the position was unfavorable. But looking back, Fischer could find no obvious improvement. Growing up, this fascinated me -- the loser had to make a mistake, right? The winner had to have made a killer move... Perhaps not.

Even still, the spirit of Tartakower's advice holds. Play in this moment. You never know which mistake will be the next-to-last, or the last. Keep playing.

At this time of year, when I look back over the past twelve months of performing tasks that do not come naturally to me, and looking ahead to next year's vision and duties, this advice gives me comfort.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

June 11, 2009 8:24 PM

Revolution Out There -- and Maybe In Here

(Warning: This is longer than my usual entry.)

In recent weeks I have found myself reading with a perverse fascination some of the abundant articles about the future of newspapers and journalism. Clay Shirky's Newspapers and Thinking the Unthinkable has deservedly received many mentions. His essay reminds us, among other things, that revolutions change the rules that define our world. This means that living through a revolution is uncomfortable for most people -- and dangerous to the people most invested in the old order. The ultimate source of the peril is lack of imagination; we are so defined by the rules that we forget they are not universal laws but human constructs.

I'm not usually the sort of person attracted to train wrecks, but that's how I feel about the quandary facing the newspaper industry. Many people in and out of the industry like to blame the internet and web for the problem, but it is more complicated than that. Yes, the explosion of information technology has played a role in creating difficulties for traditional media, but as much as it causes the problems, I think it exposes problems that were already there. Newspapers battle forces from all sides, not the least of which is the decline -- or death? -- of advertising, which may soon be known as a phenomenon most peculiar to the 20th century. The web has helped expose this problem, with metrics that show just how little web ads affect reader behavior. It has also simply given people alternatives to media that were already fading. Newspapers aren't alone.

This afternoon, I read Xark's The Newspaper Suicide Pact and was finally struck by another perverse thought, a fear because it hits closer to my home. What if universities are next? Are we already in a decline that will become apparent only later to those of us who are on the inside?

Indications of the danger are all around. As in the newspaper industry, money is at the root of many problems. The cost of tuition has been rising much faster than inflation for a quarter of a century. At my university, it has more than doubled in the 2000s. Our costs, many self-imposed, rise at the same time that state funding for its universities falls. For many years, students offset the gap by borrowing the difference. This solution is bumping into a new reality now, with the pool of money available for student loans shrinking and the precipitous decline in housing equity for many eroding borrowing ability. Some may see this as a good thing, as our students have seen a rapid growth in indebtedness at graduation, outpacing salaries in even the best-paying fields. Last week, many people around here were agog at a report that my state's university grads incur more student loan debt than any other state's. (We're #1!)

Like newspapers, universities now operate in a world where plentiful information is available on-line. Sometimes it is free, and other times it is much less expensive than the cost of taking a course on the subject. Literate, disciplined people can create a decent education for themselves on-line. Perhaps universities serve primarily the middle and lower tier of students, who haven't the initiative or discipline to do it on their own?

I have no numbers to support these rash thoughts, though journalists and others in the newspaper industry do have ample evidence for fear. University enrollments depend mostly on the demographics of their main audience: population growth, economics, and culture. Students also come for a social purpose. But I think the main driver for many students to matriculate is industry's de facto use of the college degree as the entry credential to the workplace. In times of alternatives and tight money, universities benefit from industry's having outsourced the credentialing function to them.

The university's situation resembles the newspaper's in other ways, too. We offer a similar defense of why the world needs us: in addition to creating knowledge, we sort it, we package it for presentation, and we validate its authenticity and authority. If students start educating themselves using resources freely or cheaply available outside the university, how will we know that they are learning the right stuff? Don't get most academics started on the topic of for-profits like Kaplan University and the University of Phoenix; they are the university's whipping boy. The news industry has one, too: bloggers.

Newspaper publishers talk a lot these days about requiring readers to pay for content. In a certain sense, that is what students do: pay universities for content. Now, though, the web gives everyone access to on-line lectures, open-source lecture notes, the full text of books, technical articles, and ... the list goes on. Why should they pay?

Too many publishers argue that their content is better, more professional, and so stand behind "the reasonable idea that people should have to pay for the professionally produced content they consume". Shirky calls this a "post-rational demand", one that asks readers to behave in a way "intended to restore media companies to the profitability ordained to them by God Almighty" -- despite living in a world where such behaviors are as foreign as living in log cabins and riding horses for transportation. Is the university's self-justification as irrational? Is it becoming more irrational every year?

Some newspapers decide to charge for content as a way to prop up their traditional revenue stream, print subscriptions. Evidence suggests that this not only doesn't work (people inclined to drop their print subscriptions won't be deterred by pay walls) but that it is counter-productive: the loss of on-line visitors causes a decline in web advertising revenue that is much greater than the on-line reader revenue earned. Again, this is pure speculation, but I suspect that if universities try to charge for their on-line content they will see similar results.

The right reason to charge for on-line content is to create a new revenue stream, one that couldn't exist in the realm of print. This is where creative thinking will help to build an economically viable "new media". This is likely the right path for universities, too. My oldest but often most creative-thinking colleague has been suggesting this as a path for my school to consider for a few years. My department is working on one niche offering now: on-line courses aimed at a specific audience that might well take them elsewhere if we don't offer them, and who then have a smoother transition into full university admission later. We have other possibilities in mind, in particular as part of a graduate program that already attracts a large number of people who work full time in other cities.

But then again, there are schools like Harvard, MIT, and Stanford with open course initiatives, placing material on-line for free. How can a mid-sized, non-research public university compete with that content, in that market? How will such schools even maintain their traditional revenue streams if costs continue to rise and high quality on-line material is readily available?

In the middle of a revolution, no one knows the right answers, and there is great value in trying different ideas. Most any school can start with the obvious: lectures on-line, increased use of collaboration tools such as wikis and chats and blogs -- and Twitter and Facebook, and whatever comes next. These tools help us to connect with students, to make knowledge real, to participate in the learning. Some of the obvious paths may be part of the solution. Perhaps all of them are wrong. But as Shirky and others tell us, we need to try all sorts of experiments until we find the right solution. We are not likely to find it by looking at what we have always done. The rules are changing. The reactions of many in the academy tell a sad story. They are dismissive, or simply uninterested. That sounds a lot like the newspapers, too. Maybe people are simply scared and so hole up in a bunker constructed out of comfortable experience.

Like newspapers, some institutions of higher education are positioned to survive a revolution. Small, focused liberal arts colleges and technical universities cater to specific audiences with specific curricula. Of course, the "unique nationals" (schools such as Harvard, MIT, and Stanford) and public research universities with national brands (schools such as Cal-Berkeley and Michigan) sit well. Other research schools do, too, because their mission goes beyond the teaching of undergraduates. Then again, many of those schools are built on an economic model that some academics think is untenable in the long run. (I wrote about that article last month, in another context.)

The schools most in danger are the middle tier of so-called teaching universities and low-grade research schools. How will they compete with the surviving traditional powers or the wealth of information and knowledge available on-line? This is one reason I embrace our president's goal of going from good to great -- focusing our major efforts on a few things that we do really well, perhaps better than anyone, nurturing those areas with resources and attention, and then building our institution's mission and strategy around this powerful core. There is no guarantee that this approach will succeed, but it is perhaps the only path that offers a reasonable chance to schools like ours. We do have one competitive advantage over many of our competitors: enough research and size to offer students a rich learning environment and a wide range of courses of study, but small enough to offer a personal touch otherwise available only at much smaller schools. This is the same major asset that schools like us have always had. When we find ourselves competing in a new arena and under different conditions, this asset must manifest itself in new forms -- but it must remain the core around which we build.

One of the collateral industries built around universities, textbook publishing, has been facing this problem in much the same way as newspapers for a while now. The web created a marketplace with less friction, which has made it harder for them to make the return on investment to which they had grown accustomed. As textbook prices rise, students look for alternatives. Of course, students always have: using old editions, using library copies, sharing. Those are the old strategies -- I used them in school. But today's students have more options. They can buy from overseas dealers. They can make low-cost copies much more readily. Many of my students have begun to bypass the assigned texts altogether and rely on free sources available on-line. Compassionate faculty look for ways to help students, too. They support old editions. They post lecture notes and course materials on-line. They even write their own textbooks and post them on-line. Here the textbook publishers cross paths with the newspapers. The web reduces entry costs to the point that almost anyone can enter and compete. And publishers shouldn't kid themselves; some of these on-line texts are really good books.

When I think about the case of computer science in particular, I really wonder. I see the wealth of wonderful information available on line. Free textbooks. Whole courses taught or recorded. Yes, blogs. Open-source software communities. User communities built around specific technologies. Academics and practitioners writing marvelous material and giving it away. I wonder, as many do about journalists, whether academics will be able to continue in this way if the university structure on which they build their careers changes or disappears. What experiments will find the successful models of tomorrow's schools?

Were I graduating from high school today, would I need a university education to prepare for a career in the software industry? Sure, most self-educated students would have gaps in their learning, but don't today's university graduates? And are the gaps in the self-educated's preparation as costly as 4+ years paying tuition and taking out loans? What if I worked the same 12, 14, or 16 hours a day (or more) reading, studying, writing, contributing to an open-source project, interacting on-line? Would I be able to marshal the initiative or discipline necessary to do this?

In my time teaching, I have encountered a few students capable of doing this, if they had wanted or needed to. A couple have gone to school and mostly gotten by that way anyway, working on the side, developing careers or their own start-up companies. Their real focus was on their own education, not on the details of any course we set before them.

Don't get me wrong. I believe in the mission of my school and of universities more generally. I believe that there is value in an on-campus experience, an immersion in a community constructed for the primary purpose of exploring ideas, learning and doing together. When else will students have an opportunity to focus full-time on learning across the spectrum of human knowledge, growing as a person and as a future professional? This is probably the best of what we offer: a learning community, focused on ideas broad and deep. We have research labs, teams competing in cyberdefense and programming contests. The whole is greater than the sum of its parts, both in the major and in liberal education.

But for how many students is this the college experience now, even when they live on campus? For many the focus is not on learning but on drinking, social life, video games... That's long been the case to some extent, but the economic model is changing. Is it cost-effective for today's students, who sometimes find themselves working 30 or more hours a week to pay for tuition and lifestyle, trying to take a full load of classes at the same time? How do we make the great value of a university education attractive in a new world? How do we make it a value?

And how long will universities be uniquely positioned to offer this value? Newspapers used to be uniquely positioned to offer a value no one else could. That has changed, and most in the industry didn't see it coming (or did, and averted their eyes rather than face the brutal facts).

I'd like also to say that expertise distinguishes the university from its on-line competition. That has been true in the past and remains true today, for the most part. But in a discipline like computer science, whose large professional component attracts most of its students, where grads will enter software development or networking... there is an awesome amount of expertise out in the world. More and more of those talented people are now sharing what they know on-line.

There is good news. Some people still believe in the value of a university education. Many students, and especially their parents, still believe. During the summer we do freshman orientation twice a week, with an occasional transfer student orientation thrown into the mix. People come to us eagerly, willing to spend what they have or to take on massive debts to buy what we sell. Some come for jobs, but most still have at least a little of the idealism of education. When I think about their act in light of all that is going on in the world, I am humbled. We owe them something as valuable as what they surrender. We owe them an experience befitting the ideal. This humbles me, but it also invigorates and scares me.

This article is probably more dark fantasy than reality. Still, I wonder how much of what I believe I really should believe, because it's right, and how much is merely a product of my lack of imagination. I am certain that I'm living in the middle of a revolution. I don't know how well I see or understand it. I am also certain of this: I don't want someone to be writing this speech about universities in a few years with me in its clueless intended audience.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

June 05, 2009 3:25 PM

Paying for Value or Paying for Time

Brian Marick tweeted about his mini-blog post Pay me until you're done, which got me to thinking. The idea is something like this: Many agile consultants work in an agile way, attacking the highest-value issue they can in a given situation. If the value of the issues to work on decreases with time, there will come a point at which the consultant's weekly stipend exceeds the value of the work he is doing. Maybe the client should stop buying services at that point.

My first thought was, "Yes, but." (I am far too prone to that!)

First, the "yes": In the general case of consulting, as opposed to contract work, the consultant's run will end as his marginal effect on the company approaches 0. Marick is being honest about his value. At some point, the value of his marginal contribution will fall below the price he is charging that week. Why not have the client end the arrangement at that point, or at least have the option to? This is a nice twist on our usual thinking.

Now for the "but". As I tweeted back, this feels a bit like Zeno's Paradox. Marick the consultant covers not half the distance from start to finish each week, but the most valuable piece of ground remaining. With each week, he covers increasingly less valuable distance. So our consultant, cast in the role of Achilles, concedes the race and says, okay, so stop paying me.

This sounds noble, but remember: Achilles would win the race. We unwind Zeno's Paradox when we realize that the sum of an infinite series can be a finite number -- and that number may be just small enough for Achilles to catch the tortoise. This works only for infinite series that behave in a particular way.
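The arithmetic behind that unwinding is worth a quick look. Here is a small sketch in Ruby (the series is the textbook "half the remaining distance" one, not anything from Marick's scenario): the partial sums of 1/2 + 1/4 + 1/8 + ... approach 1, so the infinitely many steps add up to a finite distance.

```ruby
# Partial sums of the geometric series 1/2 + 1/4 + 1/8 + ...
# Each term is "half the remaining distance", as in Zeno's setup.
def partial_sum(n)
  (1..n).sum { |k| 1.0 / 2**k }
end

[1, 2, 10, 50].each do |n|
  puts format("after %2d terms: %.12f", n, partial_sum(n))
end
# The sums approach 1.0: an infinite series with a finite total.
```

After 50 terms the sum is indistinguishable from 1.0 in floating point, which is the whole trick: Achilles covers the entire course in a finite total, even though we can describe it as infinitely many pieces.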

Crazy, I know, but this is how the qualification of the "yes" arose in my mind. Maybe the consultant helps to create a change in his client that changes the nature of the series of tasks he is working on. New ideas might create new or qualitatively different tasks to do. The change created may change the value of an existing task, or reorder the priorities of the remaining tasks. If the nature of the series changes, it may cause the value of the series to change, too. If so, then the client may well want to keep the consultant around, but doing something different than the original set of issues would have called for.

Another thought: Assume that the conditions that Marick described do hold. Should the compensation model be revised? He seems to be assuming that the consultant charges the same amount for each week of work, with the value of the tasks performed early being greater than that amount and the value of the tasks performed later being less than that amount. If that is true, then early on the consultant is bringing in substantially more value than he costs. If the client pulls the plug as soon as the value proposition turns in its favor, then the consultant ends up receiving less than the original contract called for yet providing more than average value for the time period. If the consultant thinks that is fair, great. What if not? Perhaps the consultant should charge more in the early weeks, when he is providing more value, than in later weeks? Or maybe the client could pay a fee to "buy out" the rest of the contract? (I'm not a professional consultant, so take that into account when evaluating my ideas about consultant compensation...)

And another thought: Does this apply to what happens when a professor teaches a class? In a way, I think it does. When I introduce a new area to students, it may well be the case that the biggest return on the time we spend (and the biggest bang for the students' tuition dollars) happens in the first weeks. If the course is successful, then most students will become increasingly self-sufficient in the area as the semester goes on. This is more likely the case for upper-division courses than for freshmen. What would it be like for a student to decide to opt out of the course at the point where she feels like she has stopped receiving fair value for the time being spent? Learning isn't the same as a business transaction, but this does have an appealing feel to it.

The university model for courses doesn't support Marick's opt-out well. The best students in a course often reach a point where they are self-sufficient or nearly so, and they are "stuck". The "but" in our teaching model is that we teach an audience larger than one, and the students can be at quite different levels in background and understanding. Only the best students reach a point where opting out would make sense; the rest need more (and a few need a lot more -- more than one semester can offer!).

The good news is that the unevenness imposed by our course model doesn't hurt most of those best students. They are usually the ones who are able to make value out of their time in the class and with the professor regardless of what is happening in the classroom. They not only survive the latency, but thrive by veering off in their own direction, asking good questions and doing their own programming, reading, thinking outside of class.

This way of thinking about the learning "transaction" of a course may help to explain another class of students. We all know students who are quite bright but end up struggling through academic courses and programs. Perhaps these students, despite their intelligence and aptitude for the discipline, don't have the skills or aptitude to make value out of the latency between the point they stop receiving net value and the end of the course. This inability creates a problem for them (among them, boredom and low grades). Some instructors are better able to recognize this situation and address it through one-on-one engagement. Some would like to help but are in a context that limits them. It's hard to find time for a lot of one-on-one instruction when you teach three large sections and are trying to do research and are expected to meet all of the other expectations of a university prof.

Sorry for the digression from Marick's thought experiment, which is intriguing in its own setting. But I have learned a lot from applying agile development ideas to my running. I have found places where the new twist helps me and others where the analogy fails. I can't shake the urge to do the same on occasion with how we teach.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading, Software Development, Teaching and Learning

May 28, 2009 10:02 PM

Developing Instinct

One of the challenges every beginner faces is learning the subtle judgments they must make. How much time will it take for us to implement this story? Should I create a new kind of object here? Estimation and other judgments are a challenge because the beginner lacks the "instinct" for making them, and the environment often does not provide enough data to make a clear-cut decision.

I've been living with such a beginner's mind this week on my morning runs. Tuesday morning I started with a feeling of torpor and was sure I'd end with a slow time. When I finished, I was surprised to have run an average time. On Wednesday morning, I felt good yet came in with one of my slowest times for the distance ever. This morning, my legs were stiff, making steps seem a chore. I finished in one of my better times at this distance since working my mileage back up.

My inaccurate judgments flow out of bad instincts. Sometimes, legs feel slow and steps a challenge because I am pushing. Sometimes, I run with ease because I'm not running very hard at all!

At this stage in my running, bad instincts are not a major problem. I'm mostly just trying to run enough miles to build my aerobic base. Guessing my pace wrong has little tangible effect. It's mostly just frustrating not to know. Occasionally, though, the result is as bad as the judgment. Last week, I ran too fast on Wednesday after running faster than planned on Tuesday. I ended up sick for the rest of the week and missed out on 8-10 miles I need to build my base. Other times, the result goes the other way, as when I turned in a best-case scenario half-marathon in Indianapolis. Who knew? Certainly not me.

So, inaccurate instincts can give good or bad results. The common factor is unpredictability. That may be okay when running, or not; in any case, it can be a part of the continuous change I seek. But unpredictability in process is not so okay when I am developing software. Continuous learning is still good, but being wrong can wreak havoc on a timeline, and it can cause problems for your customer.

Bad instincts when estimating my pace weren't a problem two years ago, though they have been in my deeper past. When I started running, I often felt like an outsider. Runners knew things that I didn't, which made me feel like a pretender. They had instincts about training, eating, racing, and resting that I lacked. But over time I began to feel like I knew more, and soon -- imperceptibly -- I began to feel like a runner after all. A lot -- all? -- of what we call "instinct" is developed, not inborn. Practice, repetition, time -- they added up to my instincts as a runner.

Time can also erode instinct. A lack of practice, a lack of repetition, and now I am back to where I was several years ago, instinct-wise. This is, I think, a big part of what makes learning to run again uncomfortable, much as beginners are uncomfortable learning the first time.

One of the things I like about agile approaches to software development is their emphasis on the conscious attention to practice. They encourage us to reflect about our practice and look for ways to improve that are supported by experience. The practices we focus on help us to develop good instincts: how much time it will take for us to implement a story, when to write -- and not write -- tests, how far to refactor a system to prepare for the next story. Developing accurate and effective instinct is one way we get better, and that is more important than being agile.

The traditional software engineering community thinks about this challenge, too. Watts Humphrey created the Personal Software Process to help developers get a better sense of how they use their time and to use this sense to get better. But, typically, the result feels so heavy, so onerous on the developers it aims to help, that few people are likely to stick with it when they get into the trenches with their code.

An aside: This reminds me of conversations I had with students in my AI courses back in the day. I asked them to read Alan Turing's classic Computing Machinery and Intelligence, and in class we discussed the Turing Test and the many objections Turing rebuts. Many students clung to the notion that a computer program could never exhibit human-like intelligence because programs lack what humans have: "gut instinct". Many students played right into Turing's rebuttal yet remained firm; they felt deeply that to be human was different. Now, I am not at ease with scientific materialism's claim that humans are purely deterministic beings, but the scientist in me tells me to strive for natural explanations of as much of every phenomenon as possible. Why couldn't a program develop a "gut feeling"? To the extent that at least some of our instincts are learned responses, developed through repetition and time, why couldn't a program learn the same instincts? I had fun playing devil's advocate, as I always do, even when I was certain that I was making little progress in opening some students' minds.

In your work and in your play, be aware of the role that practice, repetition, and time play in developing your instincts. Do not despair that you don't have good instincts. Work to develop them. The word missing from your thought is "yet". A little attention to your work, and a lot of practice, will go a long way. Once you have good instincts, cherish them. They give us comfort and confidence. They make us feel powerful. But don't settle. The same attention and practice will help you get better, to grow as a developer or runner or whatever your task.

As for my running, I am certainly glad to be getting stronger and to be able to run faster than I expect. Still, I look forward to the feeling of control I have when my instincts are more reliable. Unpredictable effort leads to unpredictable days.


Posted by Eugene Wallingford | Permalink | Categories: Running, Software Development, Teaching and Learning

May 22, 2009 4:05 PM

Parsing Expression Grammars in the Compiler Course

Yesterday, a student told me about the Ruby gem Treetop, a DSL for writing language grammars. Treetop is built on parsing expression grammars, which turn our usual idea of grammar inside out. Most compiler theory is built atop the context-free and regular grammars of Chomsky. These grammars are generative: they describe rules that allow us to create strings which are part of the language. Parsing expression grammars describe rules that allow us to recognize strings which are part of the language.
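To make the recognition-based view concrete, here is a minimal hand-rolled recognizer in the PEG style for a toy grammar of digit strings joined by '+'. The grammar and the TinyPEG class are purely illustrative -- this is not Treetop's notation or API, just the flavor of a PEG: each rule is a function that consumes input and answers whether it matched, backtracking on failure.

```ruby
# A tiny hand-rolled recognizer in the PEG style, for the toy grammar
#   expr <- term ('+' term)*
#   term <- digit+
# (Grammar and class are illustrative, not Treetop's actual API.)
class TinyPEG
  def initialize(input)
    @input = input
    @pos = 0
  end

  def match?
    expr && @pos == @input.length   # succeed only if all input is consumed
  end

  private

  def expr
    return false unless term
    # '*' repetition: greedily consume as many "'+' term" pairs as possible
    loop do
      saved = @pos
      unless literal('+') && term
        @pos = saved                # backtrack past the failed alternative
        break
      end
    end
    true
  end

  def term
    count = 0
    count += 1 while digit
    count > 0                       # digit+ requires at least one digit
  end

  def digit
    if @pos < @input.length && @input[@pos] =~ /[0-9]/
      @pos += 1
      true
    else
      false
    end
  end

  def literal(ch)
    if @input[@pos] == ch
      @pos += 1
      true
    else
      false
    end
  end
end

puts TinyPEG.new("1+23+4").match?   # a sentence in the language
puts TinyPEG.new("1+").match?       # trailing '+' is rejected
```

Notice that the code never "generates" anything; each rule simply tries to consume input, which is exactly what our scanners and parsers do in practice.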

This new kind of grammar offers a lot of advantages for working with programming languages, such as unifying lexical and syntactic descriptions and supporting the construction of linear-time parsers. I remember seeing Bryan Ford talk about packrat parsing at ICFP 2002, but at that point I wasn't thinking as much about language grammars and so didn't pay close attention to the type of grammar that underlay his parsing ideas.

While generative grammars are a fundamental part of computing theory, they don't map directly onto the primary task for which many software people use them: building scanners and parsers for programming languages. Our programs recognize strings, not generate them. So we have developed mechanisms for building and even generating scanners and parsers, given grammars that we have written under specific constraints and then massaged to fit our programming mechanisms. Sometimes the modified grammars aren't as straightforward as we might like. This can be a problem for anyone who comes to the grammar later, as well as a problem for the creators of the grammar when they want to change it in response to changes requested by users.

A recognition-based grammar matches our goals as compiler writers more closely, which could be a nice advantage. Parsing expression grammars make explicit the specification of the code we write against them.

For those of us who teach compiler courses, something like a parsing expression grammar raises another question. Oftentimes, we hope that the compiler course can do double duty: teach students how to build a compiler, and help them to understand the theory, history, and big issues of language processors. I think of this as a battle between two forces, "learning to" versus "learning about", a manifestation of epistemology's distinction between "knowing that" and "knowing how".

Using recognition-based grammars as the foundation for a compiler course introduces a trade-off: students may be empowered more quickly to create language grammars and parsers but perhaps not learn as much about the standard terminology and techniques of the discipline. These standard ways are, of course, our historical ways of doing things. There is much value in learning history, but at what point do we take the step forward to techniques that are more practical than reminiscent?

This is a choice that we have to make all the time in a compiler course: top-down versus bottom-up parsing, table-driven parsers versus recursive-descent parsers, writing parsers by hand versus using parser generators... As I've discussed here before, I still ask students to write their parser by hand because I think the experience of writing this code teaches them more than just about compilers.

Now that I have been re-introduced to this notion of recognition-based grammars, I'm wondering whether they might help me to balance some of the forces at play more satisfactorily. Students would have the experience of writing a non-trivial parser by hand, but against a grammar that is more transparent and easier to work with. I will play with parsing expression grammars a bit in the next year or so and consider making a change the next time I teach the course. (If you have taught a compiler course using this approach, or know someone who has, please let me know.)

Going this way would not commit me to having students write their parsers by hand. The link that started this thread of thought points to a tool for automating the manipulation of parsing expression grammars. Whatever I do, I'll add that tool to the list of tools I share with students.

Oh, and a little Ruby Love to close. Take a look at Treetop. Its syntax is beautiful. A Treetop grammar reads cleanly, crisply -- and is executable Ruby code. This is the sort of beauty that Ruby allows, even encourages, and is one of the reasons I remain enamored of the language.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

May 20, 2009 4:26 PM

Bright Lines in Learning and Doing

Sometimes it pays to keep reading. Last time, I commented on breaking rules and mentioned a thread on the XP mailing list. I figured that I had seen all I needed there and was on the verge of skipping the rest. Then I saw a message from Laurent Bossavit and decided to read. I'm not surprised to learn something from Laurent; I have learned from him before.

Laurent's note introduced me to the legal term bright line. In the law, a bright-line rule is...

... a clearly defined rule or standard, composed of objective factors, which leaves little or no room for varying interpretation. The purpose of a bright-line rule is to produce predictable and consistent results in its application.

As Laurent says, "bright lines are important in situations where temptations are strong and the slope particularly steep"; a well-known example is alcoholics' high vulnerability to even small exceptions. Test-driven development, or even writing tests soon after code and thus maintaining a complete suite of automated tests, requires a bright line for many developers. It's too easy to slide back into old habits, which for most developers are much older and stronger. Staying on the right side of the line may be the only practical way to Live Right.

This provides a useful name for what teachers often do in class: create bright lines for students. When students are first learning a new concept, they need to develop a new habit. A bright-line rule -- "Thou shalt always write a test first." or "Thou shalt write no line of code outside of a pair." -- removes from the students' minds the need to make a judgment that they are almost always not prepared to make yet: "Is this case an exception?" While learning, it's often better to play Three Bears and overdo it. This gives your mind a chance to develop good judgment through experience.

(For some reason, I am reminded of one way that I used to learn to play a new chess opening. I'd play a bazillion games of speed chess using it. This didn't train my mind to think deeply about the positions the opening created, but it gave me a bazillion repetitions. I soon learned a lot of patterns that allowed me to dismiss many bad alternatives and focus my attention on the more interesting positions.)

I often ask students to start with a bright line, and only later take on the challenge of a balancing test. It's better to evolve toward such complexity, not try to start there.

The psychological benefits of a bright-line test are not limited to beginners. Just as alcoholics have to hold a hard line and consider every choice consciously every day, some of us need a good "Thou shalt..." or "Thou shalt not..." in certain cases. As much as I like to run, I sometimes have to force myself out of bed at 5:00 AM or earlier to do my morning work-out. Why not just skip one? I am a creature of habit, and skipping even one day makes it even harder to get up the next, and the difficulty grows until I have a new habit.

(This has been one of the most challenging parts of trying to get back up to my old mileage after several extended breaks last year. I am proud finally to have done all five of my morning runs last week -- no days off, no PM make-ups. A new habit is in formation.)

If you know you have a particular weakness, draw a bright line for yourself. There is no shame in that; indeed, I'd say that it shows professional maturity to recognize the need and address it. If you need a bright line for everything, that may be a problem...

Sometimes, I adopt a bright line for myself because I want everyone on the team to follow a practice. I may feel comfortable exercising judgment in the gray area but not feel the rest of the team is ready. So we all play by the rules rather than discuss every possible judgment call. As the team develops, we can begin having those discussions. This is similar to how I teach many practices.

This may sound too controlling to you, and occasionally a student will say as much. But nearly everyone in class benefits from taking the more patient road to expertise. Again, from Laurent:

Rules which are more ambiguous and subtle leave more room for various fudge factors, and that of course can turn into an encouragement to fudge, the top of a slippery slope.

Once learners have formed their judgment, they are ready to balance forces. Until then, most are more likely to backslide out of habit than to make an appropriate choice to break the rule. And time spent arguing every case before they are ready is time not spent learning.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

May 18, 2009 8:58 PM

Practice and Dogma in Testing

Shh.

I have a secret.

When I am writing a program, I will on occasion add a new piece of functionality without writing a test.

I am whispering because I have seen the reaction on the XP mailing list and on a number of blogs that Kent Beck received to his recent article, To Test or Not to Test? That's a Good Question. In this short piece, Kent describes his current thinking that, like golf, software development may have a "long game" and a "short game", which call for different tools and especially mentalities. One of the differences might be whether one is willing to trade automated testing for some other value, such as delivering a piece of software sooner.

Note that Kent did not say that in the long game he chooses not to test his code; he simply tested manually. He also didn't say that he plans never to write the automated tests he needs later; he said he would write them later, either when he has more time or, perhaps, when he has learned enough to turn 8 hours of writing a test into something much shorter.

Many peoples' public reactions to Kent's admission have been along these lines: "We trust you to make this decision, Kent, but we don't trust everyone else. And by saying this is okay, you will contribute to the delinquency of many programmers." Now you know why I need to whisper... I am certainly not in the handful of programmers so good that these folks would be willing to excuse my apostasy. Kent himself is taking a lot of abuse for it.

I have to admit that Kent's argument doesn't seem that big a deal to me. I may not agree with everything he says in his article, but at its core he is claiming only that there is a particular context in which programmers might choose to use their judgment and not write tests before or immediately after writing some code. Shocking: A programmer should use his or her judgment in the course of acting professionally. Where is the surprise?

One of the things I like about Kent's piece is that he helps us to think about when it might be useful to break a particular rule. I know that I'll be breaking rules occasionally, but I often worry that I am surrendering to laziness or sloppiness. Kent is describing a candidate pattern: In this context, with these goals, you are justified in breaking this rule consciously. We are balancing forces, as we do all the time when building anything. We might disagree with the pattern he proposes, but I don't understand why developers would attack the very notion of making a trade-off that results in breaking a rule.

In practice, I often play a little loose with the rules of XP. There are a variety of reasons that lead me to do so. Sometimes I pay for not writing a test, and when I do I reflect on what about the situation made the omission so dangerous. If the only answer I can offer is "You must write the test, always.", then I worry that I have moved from behaving like a professional to behaving like a zealot. I suspect that a lot of developers make similar trade-offs.

I do appreciate the difficulty this raises for those of us who teach XP, whether at universities or in industry. If we teach a set of principles as valuable, what happens to our students' confidence in the principles when we admit that we don't follow the rules slavishly? Well, I hope that my students are learning to think, and that they realize any principle or rule is subject to our professional judgment in any given circumstance.

Of course, in the context of a course, I often ask students to follow the rules "slavishly", especially when the principles in question require a substantial change in how they think and behave. TDD is an example, as is pair programming. More broadly, this idea applies when we teach OOP or functional programming or any other new practice. (No assignment statements or sequences until Week 10 of Programming Languages!) Often, the best way to learn a new practice is to live it for a while. You understand it better then than you can from any description, especially how it can transform the way you think. You can use this understanding later when it comes to apply your judgment about potential trade-offs.

Even still, I know that, no matter how much an instructor encourages a new practice and strives to get students to live inside it for a while, some students simply won't do it. Some want to but struggle to change their habits. I feel for them. Others willfully choose not to try something new and deny themselves the opportunity to grow. I feel for them, too, but in a different way.

Once students have had a chance to learn a set of principles and to practice them for a while, I love to talk with them about choices, judgment, and trade-offs. They are capable of having a meaningful discussion then.

It's important to remember that Kent is not teaching novices. His primary audience is professional programmers, with whom he ought to be able to have a coherent conversation about choices, judgment, and trade-offs. Fortunately, a few folks on the XP list have entertained the "long game versus short game" claim and related their own experiences making these kinds of decisions on a daily basis.

If we in the agile world rely on unthinking adherence to rules, then we are guilty of proselytizing, not educating. Lots of folks who don't buy the agile approaches love when they see examples of this rigidity. It gives them evidence to support their tenuous position about the whole community. From all of my full-time years in the classroom, I have learned that perhaps the most valuable asset I can possess is my students' trust in my goals and attitudes. Without that, little I do is likely to have any positive effect on them.

Kent's article has brought to the surface another choice agilistas face most every day: the choice between dogma and judgment. We tend to lose people when we opt for unthinking adherence to a rule or a practice. Besides, dogmatic adherence is rarely the best path to getting better every day at what we do, which is, I think, one of the principles that motivate the agile methods.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

May 12, 2009 11:27 AM

Lessons from Compilers Course Experiment

Though final grades are not all yet submitted, the semester is over. We made adjustments to the specification in my compilers course, and the students were able to produce a compiler that generated compilable, executable Java code for a variety of source programs. For the most part, the issues we discussed most at their finals week demo dealt with quality control. We found some programs that confounded their parser or code generator, which were evidence of bits of complexity they had not yet mastered. There is a lesson to be learned: theory and testing often take a back seat to team dynamics and practices. Given the complexity of their source language, I was not too disappointed with their software, though I think this team fell short of its promise. I have been part of teams that have fallen similarly short and so can empathize.

So, what is the verdict on a couple of new ideas we tried this semester: letting the team design its own source language and working in a large team of six? After their demo, we debriefed the project as a group, and then I asked them to evaluate the project and course in writing. So I have some student data on which to draw as well as my own thoughts.

On designing their own language: yes, but. Most everyone enjoyed that part of the project, and for some it was their favorite activity. But the students found themselves still churning on syntax and semantics relatively late into the project, which affected the quality and stability of their parser. We left open the possibility of small changes to the grammar as they learned more about the language by implementing it, but this element of reality complicated their jobs. I did not lock down the language soon enough and left them making decisions too late in the process.

One thing I can do the next time we try this is to put a firmer deadline on language design. One thing that the students and I both found helpful was writing programs in the proposed language and discussing syntactic and semantic language issues grounded in real code. I think I'll build a session or two of this into the course early, before the drop-dead date for the grammar, so that we can all learn as much as we can about the language before we proceed on to implementing it.

We also discussed the idea of developing the compiler in a more agile way, implementing beginning-to-end programs for increasing subsets of the language features until we are done. This may well help us get better feedback about language design earlier, but I'm not sure that it addresses the big risks inherent in letting the students design their own language. I'll have to think more on this.

On working in a team of six: no. The team members and I were unanimous that a team of six created more problems than it solved. My original thinking was that a larger team would be better equipped to do the extra work introduced by designing their own language, which almost necessarily delayed the start of the compiler implementation. But I think we were bitten by a preemptive variation of Brooks's Law -- more manpower slowed them down. Communication overhead goes up pretty quickly when you move from a team of three to a team of six, and it was much harder for the team to handle all of its members' ideas effectively. This might well be true for a team of experienced developers, but for a team of undergrads working on their first collaborative project of this scale, it was an occasional show-stopper. I'll know better next time.

As an aside, one feature the students included in the language they designed was first-class functions. This clearly complicated their syntax and their implementation. I was pleased that they took the shot. Even after the project was over and they realized just how much extra work first-class functions turned out to be, the team was nearly unanimous in saying that, if they could start over, they would retain that feature. I admire their spunk and their understanding of the programming power this feature gave to their language.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 06, 2009 4:16 PM

Making Language

I've been catching up on some reading while not making progress on other work. I enjoyed this interview with Barbara Liskov, which discusses some of the work that earned her the 2008 Turing Award. I liked this passage:

I then developed a programming language that included this idea [of how to abstract away from the details of data representation in programs]. I did that for two reasons: one was to make sure that I had defined everything precisely because a programming language eventually turns into code that runs on a machine so it has to be very well-defined; and then additionally because programmers write programs in programming languages and so I thought it would be a good vehicle for communicating the idea so they would really understand it.

Liskov had two needs, and she designed a language to meet them. First, she needed to know that her idea for how to organize programs was sound. She wanted to hold herself accountable. A program is an effective way to implement an idea and show that it works as described. In her case, her idea was about _writing_ programs, so she created a new language that embodied the idea and wrote a processor for programs written in that language.

Second, she needed to share her idea with others. She wanted to teach programmers to use her idea effectively. To do that, she created a language. It embodied her ideas about encapsulation and abstraction in language primitives that programmers could use directly. This made it possible for them to learn how to think in those terms and thus produce a new kind of program.

This is a great example of what language can do, and why having the power to create new languages makes computer science different. A program is an idea and a language is a vehicle for expressing ideas. We are only beginning to understand what this means for how we can learn and communicate. In the video Education in the Digital Age, Alan Kay talks about how creating a new language changes how we learn:

The computer allows us to put what we are thinking into a dynamic language and probe it in a way we never could before.

We need to find a way to help CS students see this early on so that they become comfortable with the idea of creating languages to help themselves learn. Mark Guzdial recently said much the same thing: we must help students see that languages are things you build, not just use. Can we introduce students to this idea in their introductory courses? Certainly, under the right conditions. One of my colleagues loves to use small BASIC-like interpreters in his intro course or his assembly language courses. This used to be a common idea, but as curricula and introductory programming languages have changed over time, it seems to have fallen out of favor. Some folks persist, perhaps with a simple command language. But we need to reinforce the idea throughout the curriculum. This is less a matter of course content than the mindset of the instructor.
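To make the idea concrete, here is a minimal sketch of the sort of tiny command language an intro course might implement. The commands and names here are invented for illustration, not taken from any particular course:

```java
import java.util.HashMap;
import java.util.Map;

// A minimal sketch of a tiny command language an intro course might build:
// "set x 5", "add x 3", "print x". All commands and names are invented.
public class TinyInterp {
    private final Map<String, Integer> vars = new HashMap<>();
    private final StringBuilder out = new StringBuilder();

    String run(String program) {
        for (String line : program.split("\n")) {
            if (line.trim().isEmpty()) continue;      // skip blank lines
            String[] t = line.trim().split("\\s+");
            switch (t[0]) {
                case "set":   vars.put(t[1], Integer.parseInt(t[2])); break;
                case "add":   vars.merge(t[1], Integer.parseInt(t[2]), Integer::sum); break;
                case "print": out.append(vars.getOrDefault(t[1], 0)).append("\n"); break;
                default: throw new IllegalArgumentException("unknown command: " + t[0]);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String program = "set x 5\nadd x 3\nprint x";
        System.out.print(new TinyInterp().run(program)); // prints 8
    }
}
```

Even a toy this small gives students the whole loop -- read a program, give it meaning, extend the language with a new command -- which is exactly the mindset shift Guzdial describes.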

After reading so much recently about Liskov, I am eager to spend some time studying CLU. I heard of CLU as an undergraduate but never had a chance for in-depth study. Even with so many new languages to dive into, I still have an affinity for older languages and for original literature on many CS topics. (If I were in the humanities, I would probably be a classicist, not a scholar of modern lit or pop culture...)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

May 05, 2009 3:37 PM

Problem-Based Universities and Other Radical Changes

Last week, every administrator at my university seemed to be talking about Mark Taylor's End the University as We Know It. Like many other universities, we have been examining pretty much everything we do in light of significant changes in the economy (including, for public universities, drastic reductions in state funding) and demographics. Taylor, who chairs a humanities department at Columbia University, starts with a critique of graduate programs, which he contends produce a product few need because they also provide an essential commodity for the modern university: cheap teaching labor that frees faculty to do research. From here, he asserts that American higher education should undergo a radical transformation and proposes six steps in the direction he thinks best.

The second proposal caught my attention:

Abolish permanent departments, even for undergraduate education, and create problem-focused programs.

In recent years I have developed a strong belief in the value of project-based education, especially in CS courses. I also have a fondness for the idea of problem-based learning, which Owen Astrachan has been touting for some time now. I think of these as having at least one valuable attribute in common: students do something real in a context where they have to make real decisions.

Taylor proposes that we build the university of today not around permanent discipline-specific departments but evolving programs centered on "zones of inquiry", Big Problems that matter across disciplines. When I discussed this idea with my provost, I told him that I was fortunate: my discipline, Computer Science, will have a role to play in most every problem-focused program for the foreseeable future. Talk about job security!

This idea may sound wonderful, but there are risks. As Michael Mitzenmacher points out, you still need discipline-specific expertise. Taylor offers as an example a program built around pressing issues related to water, which would ...

... bring together people in the humanities, arts, social and natural sciences with representatives from professional schools like medicine, law, business, engineering, social work, theology and architecture.

To do that, you need people with expertise in the humanities, arts, social and natural sciences, medicine, law, business, engineering, social work, theology and architecture. Where will these experts come from? Taylor doesn't say as much, but there is a hint in his article, and in others like it, that collaborative work on big problems is all that we need. But I wonder how such a university would prepare students who would become the next generation of experts in the arts, sciences, and professions, let alone the next generation of researchers who will discover the ideas and create the tools we need to solve the big problems. I don't suppose there is any reason in principle that a university organized in such a way could not prepare experts, and my Inner Devil's Advocate is already working on ways that it might succeed swimmingly. But I get a little worried whenever I hear people talking about making radical changes to complex systems without having explicitly considered the whole story.

This particular risk is at the front of my mind because we face the same risk, at a different scale, when we create problem- and project-focused curricula. There is a natural tension between depth in the discipline and working in the context of a specific application or other domain. If I build an intro programming course around, say, media computation or biology, will students learn all of the CS they should learn in that course? Some time will be spent on images and sounds, or on DNA base pairs and amino acids, and that is time not spent on procedures, arrays, pointers, and big-O analysis. I am well aware that adding more content to a course does not mean that everyone will get it all. But some do, and maybe those are the people who will be the experts of the future?

We have used media computation as a theme in a few sections of our intro course over the last four years, and we observe this tension in the results. Some students get just what we want them to get out of the theme: motivation to dig deep, experiment, and discuss important ideas. Others don't connect with the theme, and they just end up knowing less CS-specific content. The prof who has taught this course most often is beginning to see ways in which he can trade back some of the context for opportunities to program in other contexts, which may hit a broader variety of students than a pure media comp course.

When I think about how this trade-off would scale to the level of an entire university in programs that bring together eight, twelve, or twenty disciplines, I realize that we would need to think carefully before proceeding too far. Perhaps Taylor hopes that his article will cause faculty and administration to begin the process of thinking carefully. Some people have been thinking about these issues for a while and even put some of those thoughts into writing (PDF).

The law of unintended consequences lurks in the darkness behind many suggestions of radical changes to complex systems. For example, Taylor suggests that universities abolish tenure. Many would agree. After being a department head for many years, I appreciate many of the advantages of this idea. But consider what might happen in disciplines whose faculty are in great demand in industry. Hmm, such as computer science. Without tenure and its concomitant security, I suspect that a fair number of CS faculty would find their way into industry. Right now, the allure of bigger paydays in industry is balanced against all sorts of risk. Universities offer a level of security in exchange for much lower salaries. Without that security, I might be better off out there in the real world writing programs, hoping that one turns out to be the next Twitter or the IDE that revolutionizes how we program in dynamic languages.

I am not suggesting that we not think radical thoughts or consider how we might do things differently. In fact, I spend a large part of my administrative and academic lives doing just that. I do suggest that we not rush headlong into ideas before thinking them through, even when they seem tantalizingly right at first blush.

As with so many articles of this sort, Taylor may hope simply to cause people to consider new ideas, not adopt the specific prescriptions he offers. Certainly, many schools are already experimenting with ideas such as greater collaboration across disciplines and institutions. As in the case of courses that are focused on problems or projects, the rub is in balancing the forces at play so that we achieve our goal of better helping students to learn.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

April 23, 2009 8:51 PM

Getting Caught Up In Stupid Details

I occasionally sneak a peek at the Learning Curves blog when I should be working. Yesterday I saw this entry, with a sad bullet point for us CS profs:

Keep getting caught up in stupid details on the computer science homework. String handling. Formatting times. That sort of thing. The problem no longer interests me now that I grasp the big idea.

This is an issue I see a lot in students, usually the better ones. In some cases, the problem is that the students feel they have a right not to be bothered with any details, stupid or otherwise. But a lot of programming involves stupid details. So do most other activities, like playing the piano, playing a sport, reading a book, closing a company's financial books, or running a chemistry experiment.

Life isn't a matter only of big ideas that never come into contact with the world. Our fingers have to strike the keys and play the right notes, in the correct order at the proper tempo. I can understand the big ideas of shooting a ball through a hoop, but players succeed because they shoot thousands of shots, over and over, paying careful attention to details such as their point of release, the rotation of the ball, and the bending of their knees.

There may be an element of this in Hirta's lament, but I do not imagine that this is the whole of her problem. Some details really are stupid. For the most part, basketball players need not worry about the lettering on the ball, and piano players need not think about whether their sheet music was printed on 80% or 90% post-consumer recycled paper. Yet too often people who write programs have to attend to details just as silly, irrelevant, and disruptive.

This problem is even worse for people learning to write programs. "Don't worry what public static void main( String[] args ) means; just type it in before you start." Huh? Java is not alone here. C++ throws all sorts of silly details into the faces of novice programmers, and even languages touted for their novice-friendliness, such as Ada, push all manner of syntax and convention into the minds of beginners. Let's face it: learning to program is hard enough. We don't need to distract learners with details that don't contribute to learning the big idea, and maybe even get in the way.

If we hope to excite people with the power of programming, we will do it with big ideas, not the placement of periods, spaces, keywords, and braces. We need to find ways so that students can solve problems and write programs by understanding the ideas behind them, using tools that get in the way as little as possible. No junk allowed. That may be through simpler languages, better libraries, or something else that I haven't learned about yet.

(And please don't post a link to this entry on Reddit with a comment saying that that silly Eugene fella thinks we should dumb down programming and programming languages by trying to eliminate all the details, and that this is impossible, and that Eugene's thinking it is possible is a sign that he is almost certainly ruining a bunch of poor students in the heartland. Re-read the first part of the entry first...)

Oh, and for my agile developer friends: Read a little farther down the Learning Curves post to find this:

Email from TA alleges that debugging will be faster if one writes all the test cases ahead of time because one won't have to keep typing things while testing by hand.

Hirta dismisses the idea, saying that debugging will still require diagnosis and judgment, and thus be particular to the program and to the bug in question. But I think her TA has re-discovered test-first programming. Standing ovation!
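The TA's insight can be sketched in a few lines. This is a hypothetical example -- the function and its little spec are invented here -- but it shows the payoff: write the checks once, before or alongside the code, and re-run them after every change instead of re-typing inputs by hand:

```java
// A hypothetical illustration of test-first programming: the checks are
// written down once (ideally before the code) and re-run after every change.
public class SlugifyTest {
    // Code under test: turn a title into a lowercase, hyphenated URL slug.
    static String slugify(String title) {
        return title.toLowerCase()
                    .replaceAll("[^a-z0-9]+", "-")   // squeeze junk runs to '-'
                    .replaceAll("^-+|-+$", "");      // trim stray hyphens
    }

    static void check(boolean ok, String label) {
        if (!ok) throw new AssertionError("failed: " + label);
    }

    public static void main(String[] args) {
        // These encode the manual typing the TA wants to avoid repeating.
        check(slugify("Hello, World!").equals("hello-world"), "basic");
        check(slugify("  Already-Slugged  ").equals("already-slugged"), "stable");
        check(slugify("Ada & C++").equals("ada-c"), "symbols");
        System.out.println("all checks pass");
    }
}
```

The diagnosis and judgment Hirta mentions are still the programmer's job; the tests just make every debugging cycle start from the same reproducible evidence.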


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 17, 2009 8:20 PM

Slipping Schedules and Changing Scope in the Compiler Course

We have fallen behind in my compilers course. You may recall that before the semester I contemplated some changes in the course, including letting the students design their own language. My group of six chose that route, and as a part of that choice decided to work as a team of six, rather than in pairs or threes. This was the first time for me to have either of these situations in class, and I was curious to see how it would turn out.

Designing a language is tough, and even having lots of examples to work from, both languages and documents describing languages, is not enough to make it easy. We took a little longer than I expected. Actually, the team met its design deadline (with no time to spare, but...), but then followed a period of thinking more about the language. We both needed to come to a better understanding of the implications of some of their design decisions. Over time they changed their definition, sometimes refining and sometimes simply making the language different. This slowed the process of starting to implement the language and caused a few false starts in the scanner and parser.

Such bumps are a natural part of taking on the tougher problem of creating the language, so I don't mind that we are behind. I have learned a few things to do differently the next time a compiler class chooses this route. Working as a team of six increases the communication overhead they face, so I need to do a better job preparing them for the management component of such a large development project. It's hard for a team to manage itself, either through specific roles that include a nominal team leader or through town-hall style democracy. As the instructor, I need to watch for moments when the team needs me to take the rudder and guide things a bit more closely. Still, I think that this has been a valuable experience for the students. When they get out into industry, they will see successes and failures of the sort they've created for themselves this semester.

Still, things have gone reasonably well. It's almost inevitable that occasional disagreements about technical detail or team management will arise. People are people, and we are all hard to work with sometimes. But I've been happy with the attitude that all have brought to the project. I think all have shown a reasonable degree of commitment to the project, too, though they may not realize yet just what sort of commitment getting a big project like this done can require.

I have resisted the urge to tell (or re-tell?) the story of my senior team project: a self-selected team of good programmers and students who nonetheless found ways to fall way behind their development schedule. We had no way to change the scope of the system, struggled mightily in the last weeks of the two-term project, and watched our system crash on the day of the acceptance test. The average number of hours I spent on this project during its second term? 62 hours. And that was while taking another CS course, two accounting courses, and a numerical analysis course -- the final exam for which I have literally no recollection at all, because by that time I was functioning on nearly zero sleep for days on end. This story probably makes me sound crazy -- not committed, but in need of being committed. Sometimes, that's what a project takes.

On the technical side, I will do more next time to accelerate our understanding of the new language and our fixing of the definition. One approach I'm considering is early homework assignments writing programs in the new language, even before we have a scanner or parser. This causes us all to get concrete sooner. Maybe I will offer extra-credit points to students who catch errors in the spec or in others students' programs. I'll definitely give extra-credit for catching errors in my programs. That's always fun, and I make a perfect foil for the class. I am sure both to make mistakes and to find holes in their design or their understanding of it.

But what about this semester? We are behind, with three weeks to D-Day. What is the best solution?

The first thing to recognize is that sometimes this sort of thing happens. I do not have the authority to implement a death march, short of becoming an ineffective martinet. While I could try telling students that they will receive incompletes until the project is finished, I don't really have the authority to push the deadline of the project beyond the end of our semester.

The better option is one not made available to my project team in school, but which we in the software world now recognize as an essential option: reduce the scope of the project. The team and I discussed this early in the week. We can't do much to make the language smaller, because it is already rather sparse in data types, primitive operators, and control structure. The one thing we could drop is higher-order procedures, but I was so pleased when they included this feature that I would feel bad watching it drop out now. But that would not really solve our problem. I am not sure they could complete a compiler for the rest of the language in time anyway.

We decided instead to change the target language from JVM bytecodes to Java itself. This simplifies what remains for them quite a bit, but not so much that it makes the job easy. The big things we lose are designing and implementing a low-level run-time system and emitting machine-level code. The flip side is that we decided to retain the language's support for higher-order procedures, which is not trivial to implement in generated Java code. They'll still get to think about and implement closures, perhaps using anonymous inner classes to implement function arguments and results.
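A minimal sketch of the idea -- not the team's actual generated code, and with invented names -- is to have the compiler emit a one-method interface plus anonymous inner classes, which capture the enclosing (final) variables and so behave as closures:

```java
// A minimal sketch (names invented) of representing first-class functions
// in generated Java: a one-method interface stands in for "function value",
// and an anonymous inner class closes over its environment.
interface Fun {
    int apply(int x);
}

public class ClosureDemo {
    // makeAdder returns a closure over n: the anonymous inner class
    // captures the final parameter, so the function value carries its
    // environment with it -- the essence of a closure.
    static Fun makeAdder(final int n) {
        return new Fun() {
            public int apply(int x) { return x + n; }
        };
    }

    public static void main(String[] args) {
        Fun add5 = makeAdder(5);
        System.out.println(add5.apply(37)); // prints 42
    }
}
```

In a real compiler the emitted classes would be generated per lambda site and the captured environment might be an explicit field or two, but the shape of the trick is the same.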

This results in a different challenge, and a change in the experience the students will have. The object lesson is a good one. We have made a trade-off, and that is the nature of life for programmers. Change happens, and things don't always proceed according to plan. So we adapt and do the best we can. We might even spend 60 hours one week working on our project!

For me, the biggest effect of the change is on our last two and a half weeks of lecture. Given where we are and what they will be doing, what do they most need to learn? What ideas and techniques should they see even if they won't use them in their compilers? I get to have some fun right up to the end, too.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

April 14, 2009 7:49 PM

Posts of the Day

... for a definition of "the day" that includes when I read them, not when the authors posted them.

Tweet of the Day

Marick's Law: In software, anything of the form "X's Law" is better understood by replacing the word "Law" with "Fervent Desire".
-- Brian Marick

I love definitions that apply to themselves. They are the next best thing to recursion. I will have plenty of opportunities to put Brian's fervent desire into practice while preparing to teach software engineering this fall.

Non-Tech Blog of the Day

I don't usually quote former graffiti vandals or tattoo artists here. But I am an open-minded guy, and this says something that many people prefer not to hear. Courtesy of Michael Berman:

"Am I gifted or especially talented?" Cartoon said. "No. I got all this through hard work. Through respecting my old man. From taking direction from people. From painting when everyone else was asleep. I just found something I really love and practiced at it my whole life."
-- Mister Cartoon

Okay, so I am tickled to have quoted a guy named Mister Cartoon. His work isn't my style, but his attitude is. Work. Respect. Deference. Practice. Most days of the week, I would be well-served by setting aside my hubris and following Mister Cartoon's example.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

April 13, 2009 7:36 PM

Keeping Up Versus Settling Down

The last few days I have run across several pointers to Scala and Clojure, two languages that support a functional programming style on the JVM. Whenever I run into a new programming language, I start thinking about how much time I should put into learning and using it. If it is a functional language, I think a bit harder, and naturally wonder whether I should consider migrating my Programming Languages course from the venerable but now 30-plus-years-old Scheme to the new language.

My time is finite and in scarce supply, so I have to choose wisely. If I try to chase every language thread that pops up everywhere, I'll end up completely lost and making no progress on anything important. Choosing which threads to follow requires good professional judgment and, frankly, a lot of luck. In the worst case, I'd like to learn something new for the time I invest.

Scala and Clojure have been on my radar for a while and, like books that receive multiple recommendations, are nearing critical mass for a deeper look. With summer around the corner and my usual itch to learn something new, chances go up even more.

Could one of these languages, or another, ever displace Scheme from my course? That's yet another major issue. A few years ago I entertained the notion of using Haskell in lieu of Scheme for a short while, but Scheme's simplicity and dynamic typing won out. Our students need to see something as different as possible from what they are used to, whatever burden that places on me and the course. My own experience with Lisp and Scheme surely had some effect on my decision. For every beautiful idea I could demonstrate in Haskell, I knew of a similar idea or three in Scheme.

My circumstance reminds me of a comment by Patrick Lam to a blog entry at Learning Curves:

I've noticed that computer science faculty technology usage often seems to be frozen to when they start their first tenure-track job. Unclear yet if I'll get stuck to 2008 technology.

Lam is wise to think consciously of this now. I know I did not. Then again, I think my track record learning new technologies, languages, and tools through the 1990s, in my first decade as a tenure-track professor, holds up pretty well. I picked up several new programming languages, played with wikis, adopted and used various tools from the agile community, taught several new courses that required more than passing familiarity with the tools of their subdisciplines, and did a lot of work in the software patterns world.

My pace learning new technologies may have slowed a bit in the 2000s, but I've continued to learn new things. Haskell, Ruby, Subversion, blogs, RSS, Twitter, ... All of these have become part of my research, teaching, or daily practice in the last decade. And not just as curiosities next to my real languages and tools; Ruby has become one of my favorite programming languages, alongside old-timers Smalltalk and Scheme.

A language that doesn't affect
the way you think about programming,
is not worth knowing.

-- Alan Perlis,
Epigrams on Programming

At some point, though, there is something of a "not again..." feeling that accompanies the appearance of new tools on the scene. CVS led to Subversion, which led to ... Darcs, Mercurial, Git, and more. Which new tool is most worth the effort and time? I've always had a fondness for classics, for ideas that will last, so learning yet another tool of the same kind looks less and less exciting as time passes. Alan Perlis was right. We need to spend our time and energy learning things that matter.

This approach carries one small risk for university professors, though. Sticking with the classics can leave one's course materials, examples, and assignments looking stale and out of touch. Any CS 1 students care to write a Fahrenheit-to-Celsius converter?

In the 1990s, when I was learning a lot of new stuff in my first few years on the faculty, I managed to publish a few papers and stay active. However, I am not a "research professor" at a "research school", which is Lam's situation. Hence the rest of his comment:

Also unclear if getting stuck is actually necessary for being successful faculty.

As silly as this may sound, it is a legitimate question. If you spend all of your time chasing the next technology, especially for teaching your courses, then you won't have time to do your research, publish papers, and get grants. You have to strike a careful balance. There is more to this question than simply the availability of time; there is also a matter of mindset:

Getting to the bottom of things -- questioning assumptions, investigating causes, making connections -- requires a different state of mind than staying on top of things.

This comes from John Cook's Getting to the Bottom of Things. In that piece, Cook concerns himself mostly with multitasking, focus, and context switching, but there is more. The mindset of the scientist -- who is trying to understand the world at a deep level -- is different than the mindset of the practitioner or tool builder. Time and energy devoted to the latter almost certainly cannibalizes the time and energy available for the former.

As I think in these terms, one advantage that so-called teaching faculty have over research faculty in the classroom becomes clearer to me. I've always had great respect for the depth of curiosity and understanding that active researchers bring to the classroom. If they are also interested in teaching well, they have something special to share with their students. But teaching faculty have a complementary advantage. Their ability to stay on top of things means that their courses can be on the cutting edge in a way that many research faculty's courses cannot. Trade-offs and balance yet again.

For what it's worth, I really am intrigued by the possibilities offered by Scala and Clojure for my Programming Languages course. If we can have all of the beauty of other functional languages at the same time as a connection to what is happening out in the world, all the better. Practical connections can be wonderfully motivating to students -- or seem cloyingly trendy. Running on top of the JVM creates a lot of neat possibilities not only for the languages course but also for the compilers course and for courses in systems and enterprise software development. The JVM has become something of a standard architecture that students should know something about -- but we don't want to give our students too narrow an experience. Busy, busy, busy.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 08, 2009 6:32 PM

Quick Hits on the Way Out of Dodge

Well, Carefree. But it plays the Western theme to the hilt.

This was a shorter conference visit than usual. Due to bad weather on the way here, I arrived on the last flight in on Sunday. Due to work constraints of my workshop colleagues, I am heading out before the Wednesday morning session. Yet it was a productive trip -- like last year, but this time on our own work, as originally planned. We produced

  • the beginnings of a catalog of data-driven real-world problems used in CS1 courses across the country, and
  • half of a grant proposal to NSF's CPATH program, to fund some of our more ambitious ideas about programming for everyone, including CS majors.
A good trip.

Yesterday over our late afternoon break, we joined with the other workshop group and had an animated discussion started by a guy who has been involved with the agile community. He claimed that XP and other agile approaches tell us that "thinking is not allowed", that no design is allowed. A straw man can be fun and useful for exploring the boundaries of a metaphor. But believing it for real? Sigh.

A passing thought: Will professionals in other disciplines really benefit from knowing how to program? Why can't they "just" use a spreadsheet or a modeling tool like Shazam? This question didn't come to mind as a doubt, but as a realization that I need a variety of compelling stories to tell when I talk about this with people who don't already believe my claim.

While speaking of spreadsheets... My co-conspirator Robert Duvall was poking around Swivel, a web site that collects and shares open data sets, and read about the founders' inspiration. They cited something Dan Bricklin said about his own inspiration for inventing the spreadsheet:

I wanted to create a word processor for data.

Very nice. Notice that Bricklin's word processor for data exposes a powerful form of end-user programming.

When I go to conferences, I usually feel as if the friends and colleagues I meet are doing more, and more interesting, things than I -- in research, in class, in life. It turns out that a lot of my friends and colleagues seem to think the same thing about their friends and colleagues, including me. Huh.

I write this in the air. I was booked on a 100% full 6:50 AM PHX-MSP flight. We arrive at the airport a few minutes later than planned. Rats, I have been assigned a window seat by the airline. Okay, so I get on the plane and take my seat. A family of three gets on and asks me hopefully whether there is any chance I'd like an aisle seat. Sure, I can help. (!) I trade out to the aisle seat across the aisle so that they can sit together. Then the guy booked into the middle seat next to me doesn't show. Surprise: room for my Macbook Pro and my elbows. Some days, fortune smiles on me in small and unexpected ways.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

April 03, 2009 1:24 PM

Debugging by Biction

Gus Mueller, creator of one of my favorite tools, VoodooPad, recently wrote a short note on debugging his designs and programs:

I learned a long time ago that the two best debugging tools I own are a nice piece of paper, and a good pencil.

Bic pens

Writing something down is a great way to "think out loud". My Ghostbusters-loving colleague, Mark Jacobson, calls this biction. He doesn't define the term on his web page, though he does have this poetic sequence:

Bic pen, ink flowing, snow falling, writing, thinking, playing, dancing

That sounds fanciful, but biction is a nuts-and-bolts idea. The friction of that Bic pen on the paper is when ideas that are floating fuzzily through the mind confront reality.

Mark and I taught a data structures course together back in the 1990s, and we had a rule: if students wanted to ask one of us a question, they had to show us a picture they had drawn that illustrated their problem: the data structure, pointers, a bit of code, ... If nothing else, this picture helped us to understand their problem better. But it usually offered more. In the process of showing us the problem using their picture, students often figured out the problem in front of our eyes. Other students commented that, while drawing a picture in preparation for asking a question, they saw the answer for themselves. Biction.

Of course, one can also "think out loud" out loud. In response to my post on teaching software engineering, a former student suggested that I should expose my students to pair programming, which he found "hugely beneficial" in another course's major project, or at least to rubber duck debugging. That's biction with your tongue.

It may be that the most important activity happens inside our heads. We just need to create some friction for our thoughts.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

March 31, 2009 6:43 AM

Teaching Software Engineering

We offer a lot of our upper-division courses on a three-semester rotation. For the last few years, I have been teaching Programming Languages and our compilers course as a part of our rotation. Before I became department head, I taught Algorithms -- a course I love -- in the third slot. I passed the course on to someone else my first semester as head, and I've never gotten it back. The person who teaches it now is really good, and this leaves me to be a utility infielder, covering whatever most needs to be covered that semester. I've taught our intro course in this slot, as well as a set of 1-credit courses on individual programming languages.

This fall, I will teach a course I have never taught before: software engineering. This is a traditional introduction to the discipline for juniors and seniors:

Study of software life cycle models and their phases -- planning, requirements, specifications, design, implementation, testing, and maintenance. Emphasis on tools, documentation, and applications.

I have taught non-traditional software engineering before, in the form of two offerings of a course on agile software development. In fact, I taught that course during my first full semester after starting this blog, and it -- along with, and sometimes in tandem with, training for my second marathon -- provided a wealth of inspiration for my writing.

Long-term readers of this blog know that I have expressed skepticism about the software engineering metaphor, both in general and in some of the specifics, such as gathering requirements. Still, I respect the goals of the discipline and the effort and earnestness of its proponents. I also am able to put aside philosophical concerns in the interest of department concerns. We all agree that developing software, especially in the large, is a challenging endeavor that requires knowledge and skill. That is the goal of our course, and it will be the goal of my offering this fall.

The course I teach needs to stay pretty close to the traditional notion of software engineering, because that's what our curriculum calls for. By most accounts, employers who hire our students are pretty happy with what we teach in the course, in particular the breadth of exposure we give them. This is one of the constraints I face as I plan.

That said, I will certainly teach my own course. I will make sure that students are exposed to the agile approaches. Many of our alumni have told me that their employers are introducing elements of XP or Scrum into their software development processes, and that they wish they had learned a bit about them during their undergraduate studies. This is the right place for them to encounter agile principles and practices, as a different perspective on the phases of the software lifecycle.

I also tend more toward the working-code end of the spectrum, whereas this course has historically tended toward the modeling-and-abstraction end. I'd like to find a way to ensure that students see and experience both. Models are nothing more than pictures on a whiteboard or in a CASE tool until we can implement them in code. Besides, our students seem to graduate with an alarming lack of exposure to modern tools for version control, build management, and testing. I'd like to find a way for them to learn not only what the stages of software development are but also tools and practices for effectively evolving programs through those stages.

I'm looking for papers and ideas that can help me design a good course that balances several of these forces. Karen Reid's and Greg Wilson's Learning by doing is a great exemplar; it talks about how to introduce version control by using a tool like CVS or SVN to deliver, manage, and collect student assignments.

In the end, I want this course to serve well students wherever they end up after graduation, whether at a big-process company such as Rockwell Collins or a mostly-agile shop consisting of a few developers. Most of our graduates will end up working in several different development environments within a few years, regardless of where they land, so broad principles and understanding trump mastery of any particular skill set. But I also know that mastery plays a big role in how they will come to understand the principles.

Whatever I do, I want the course to be real, not merely an academic exercise.

My daily bleg: please send me any thoughts you have on this course, ways to organize it, or tools to use. As always, I value your suggestions and will gladly share the credit!


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

March 25, 2009 4:16 PM

Anger and Starting Again

While running yesterday, I was thinking back to my entry on starting again. I say in that entry that having reached a certain level of accomplishment, however meager, makes starting over tough. It's not the same as a newcomer starting from scratch; in some objective sense the newcomer faces a bigger challenge. But starting over creates a new sort of psychological hurdle that adds to the physical challenge. When you've run 60-mile weeks and 10x800m speed workouts, trying to string together 3-mile runs on consecutive days can be, well, demoralizing.

My mind then wandered off to a message from a reader who is a former student. He sent me a note in response to Small Surprises While Grading that went far beyond the specific surprises I mentioned -- great stuff on what had been in his mind while taking the course. I thought of this specific comment:

I'm not sure why [there were so many 0s in the course gradebook], but every assignment left me in a bad mood. My mood after each assignment was worse than the previous one. The second to last assignment I did well on, but left me angrier than I have been in a long time.

I wonder if part of this student's bad mood and anger comes from the same thing I'm feeling while running? He is a successful programmer, nearing the end of his undergraduate work. Then this course comes along and changes the rules: functional programming, Scheme, recursion, language grammars. Maybe he felt like he was starting over. Knowing what it felt like to master a programming language, to whip out programs, and to feel good after an assignment, perhaps this experience felt all the more painful, all the more insurmountable.

Though I have thought back to his message several times in the last couple of months, I didn't know to ask him this question until now. I'll ask. But regardless of his answer, I think the feeling I have occasionally while running these days gives me insight to what some students might be feeling.

I also now realize consciously one advantage that I have as a runner who "has reached the mountaintop" over a brand new runner: I know what it feels like to break through the barrier.

One of the more remarkable experiences on a race course is the dramatic deliverance from the depths of discomfort to the rebirth of spirit, endurance, and performance. There's nothing like breaking through the pain barrier, and finding a better and stronger runner on the other side.

-- from a Running Times article on endurance runner Lisa Smith-Batchen

Knowing that feeling is how I put the feeling of re-climbing the mountain in perspective, why any sense of despair seems to evaporate as quickly as it condenses in my mind. I will get back to long runs and faster times. The newbie may not be so confident.

Oh, and as for learning CS: If many students feel what my correspondent says he felt, or if a few students feel that way often, then that is probably a sign of a failure in the design of our curriculum, or of my course. I don't mind that students feel uncomfortable for a while as they learn; that is both natural and necessary. Anger and despair are another matter.


Posted by Eugene Wallingford | Permalink | Categories: Running, Teaching and Learning

March 20, 2009 9:09 PM

At Least It's Not Too Easy

Tim Bray talks about how photography is too easy to learn:

Quoting from About Photography (1949) by American photographer Will Connell (hat tip Brendan MacRae): "Every medium suffers from its own particular handicap. Photography's greatest handicap is the ease with which the medium as such can be learned. As a result, too many budding neophytes learn to speak the language too long before they have anything to say."

Programming doesn't seem to suffer from this problem! Comments to Bray's entry about books like "C for Dummies" notwithstanding, there are not many people walking around who think programming is too easy. Mark Guzdial has described the reaction of students taking a non-majors course with a computational economics theme when they found out they would have to do a little scripting in Python. Most people who do not already have an interest in CS express disdain for programming's complexity, or fear of it. No one likes to feel stupid. Perhaps worst of all, even students who do want to major in CS don't want to program.

We in the business seem almost to have gone out of our way to make programming hard. I am not saying that programming is or can be "easy", but we should stop erecting artificial barriers that make it harder than it needs to be -- or that create an impression that only really smart people can write code. People who have ideas can write. We need to extend that idea to the realm of code. We cannot make professional programmers out of everyone, any more than piano and violin lessons can make professional musicians out of everyone. But we ought to be able to do what music teachers can do: help anyone become a competent, if limited, practitioner -- and come to appreciate the art of programming along the way.

The good news is that we can solve this "problem", such as it is. As Guzdial wrote in another fine piece:

An amazing thing about computing is that there are virtually no ground rules. If we don't like what the activity of programming is like, we can change it.

We need to create tools that expose powerful constructs to novices and hide the details until later, if they ever need to be exposed. Scratch and Alice are currently popular platforms in this vein, but we need more. We also need to connect the ability to program with people's desires and interests. Scripting Facebook seems like the opportunity du jour that is begging to be grasped.

I'm happy to run across good news about programming, even if it is only the backhanded notion that programming is not too easy. Now we need to keep on with the business of making certain that programming is not too hard.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

March 18, 2009 8:01 AM

Setting a Good Example

David Patterson wrote a Viewpoint column for the March 2009 issue of Communications on advising graduate students, paired with a column by Jeffrey Ullman. One piece of Patterson's advice applies to more than advising grad students: "You're a role model; act like one.":

I am struck from parenting two now-grown sons that it's not what you say but what you do that has lasting impact. I bet this lesson applies to your academic progeny. Hence, I am conscious that students are always watching what I do, and try to act in ways that I'd like them to emulate later.

For example, my joy of being a professor is obvious to everyone I interact with, whereas I hear that some colleagues at competing universities complain to their students how hectic their lives are.

I often worry about the message I send students in this regard. My life is more hectic and less fun with computer science since I became department head, and I imagine that most of the negative vibe I may give off is more about administration than academia. One time that I am especially careful about the image I project is when I meet with high school students who are prospective CS majors and their parents. Most of those encounters are scheduled in advance, and I can treat them almost like performances. But my interactions with current students on a daily basis? I'm probably hit-and-miss.

The idea that people will infer more from your deeds than your words is not new and does apply widely, to advisors, teachers, decision makers -- everyone, really. Anyone who has been a parent knows what Patterson means about having raised his sons. Long ago I marked this passage from Matthew Kelly's Building Better Families:

If you ask parents if they want their children to grow up to live passionate and purposeful lives they will say, "Absolutely!" But how many parents are living passionate and purposeful lives? Not so many.

Our example can set a negative tone or a positive tone. The best way to give children a zest for life is to live with zest and share your zest with them.

This applies to our students in class and in the research lab, too. My favorite passage in this regard comes not from Patterson's viewpoint but from The Wednesday Wars, which I quoted once before:

It's got to be hard to be a teacher all the time and not jump into a pool of clear water and come up laughing and snorting with water up your nose.

Through all my years in school, my best teachers jumped into the pool all the time and came up laughing and snorting with water up their noses. They wrote prose and code. They read about new ideas and wanted to try them out in the lab. Their excitement was palpable. Fun was part of the life, and that's what I wanted.

I hope I can embody a little of that excitement and fun as a faculty member to our students, as a father to my daughters. But some days, that is more of a challenge than others.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

March 09, 2009 3:26 PM

Sweating The Small Stuff

On Learning Curves, I read a lament about calculus students having a hard time putting their skills into practice on some basic word problems. This line stood out:

They can do the calculus. The algebra slays them.

Of late, our CS faculty have been discussing a programming corollary. Students have passed their intro programming course and a data structures course. They get to an intermediate programming course, or a programming languages course, or an AI course, where they learn some more advanced design idea or algorithm. They answer conceptual questions about the material on exams and do well. Then they try to write code... and hit a wall.

Sometimes a new programming language gets in the way, but sometimes not -- students are using a language they used for a year in the intro sequence. And whether it's a new language or an old one, the problems seem ticky-tack: semicolons, declarations, function calls.

They can talk about the advanced concepts, but the simple programming tasks needed to implement the ideas slay them. The programming part looks like attention to detail to me, or effort spent to internalize basic grammar and vocabulary. One prof half-jokingly says his students have gone out of their way to forget what they already knew.

You don't really know programming unless you can write a program from a real problem, and not just a tightly-specified exercise designed by the instructor. And I'm not sure you can really know a concept -- not in the way that a computer scientist needs to know it -- unless you can write a program using it for a real problem. If syntax is in the way, you simply need to buckle down and learn the syntax.

I don't have any answers on this but thought it was interesting that a calculus prof is running into the same kind of problem.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 25, 2009 6:45 PM

Notes for Students Working on Projects

My compiler students are getting to the point where they should be deep in writing a parser for their language. Walking back from lunch, I was thinking about some very simple things they could do to make their lives -- and their projects -- better.

1.   Start.

Yes, start. If you read the literature of the agile software development world or even of the life hacker world, people talk about the great power that comes just from taking the first step. I've always loved the old Goethe quote about the power of committing to a course of action. But isn't this all cliché?

It is so easy to put off tracing your language grammar, or building those FIRST and FOLLOW sets, or attacking what you know will be a massive parsing table. It is so easy to be afraid of writing the first line of code because you aren't sure what the whole app will look like.

Take the first step, however small and however scary. I'm always amazed how much more motivated I feel once I break the seal on a big task and have some real feedback from my client or my compiler.
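Even those FIRST sets are a smaller first step than they look. Here is a minimal sketch, for a toy grammar of my own choosing (not an assignment from the course), of the fixed-point computation behind them:

```python
# Compute FIRST sets for a tiny grammar by iterating to a fixed point.
# Grammar (illustrative): E -> T E',  E' -> + T E' | epsilon,  T -> id | ( E )
GRAMMAR = {
    "E":  [["T", "E'"]],
    "E'": [["+", "T", "E'"], []],   # [] represents the empty production
    "T":  [["id"], ["(", "E", ")"]],
}

def first_sets(grammar):
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:                       # repeat until nothing new is added
        changed = False
        for nt, productions in grammar.items():
            for prod in productions:
                before = len(first[nt])
                for sym in prod:
                    if sym not in grammar:          # terminal: add it, stop
                        first[nt].add(sym)
                        break
                    first[nt] |= first[sym] - {"eps"}
                    if "eps" not in first[sym]:     # sym can't vanish: stop
                        break
                else:                    # every symbol could be empty
                    first[nt].add("eps")
                if len(first[nt]) != before:
                    changed = True
    return first

print(first_sets(GRAMMAR))
```

Twenty-odd lines, and the "massive" table-building task suddenly has a beating heart: running this yields FIRST(E) = FIRST(T) = {id, (} and FIRST(E') = {+, eps}.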

2.   Always have live code.

Live code is always better than ideas in your head. Brian Marick tells us so. One of the Gmail guys tells us so:

We did a lot of things wrong during the 2.5 years of pre-launch Gmail development, but one thing we did very right was to always have live code. ...

Of course none of the code from my prototype ever made it near the real product (thankfully), but that code did something that fancy arguments couldn't do (at least not my fancy arguments), it showed that the idea and product had real potential.

Your code can tell which ideas are good ones and which are bad ones. It can teach you about the app you are building. It can help you learn what your user wants.

I hear this all the time from students: "We have a pretty good handle on this, but no code yet." Sounds good, but... Live code can convince your professor that you really have done something. It can also help you ask better questions. And it can be submitted on the due date. Don't underestimate the value in that.

As Buchheit says from the Gmail experience, spend less time talking and more time prototyping. You may not be Google, but you can realize the same benefits as those guys. And with version control you don't have to worry about taking the wrong step; you can always back up.

3.   Don't forget what you know.

Okay, I have to admit that this did not occur to me on my walk home from lunch. This afternoon, a former student and local entrepreneur gave a department seminar on web app security. He twice mentioned that many of the people he hires have learned many useful skills in object-oriented design and software engineering, using systems languages such as Java and Ada. When they get to his shop, they are programming in a scripting language such as PHP. "And they throw away all they know!" They stop using the OOP principles and patterns they have learned. They stop documenting code and testing. It's as if scripting occurs in a different universe.

As he pointed out after the talk, all of those skills and techniques and practices matter just as much -- no, more -- when using a language with many power tools, few boundaries, and access to all of his and his clients' data and filesystem.

When building a compiler in class, or any other large-scale team project in a capstone course, all of those skills and techniques and practices matter, too, and sometimes for the first time in a student's career. This is likely the largest and most sophisticated program they have ever written. It is the first time they have ever had to depend on one or two or five other students to get done, to understand and work with others' code, to turn their own code over for their teammates to understand and use.

There is a reason that you are learning all this stuff. It's for the project you are working on right now.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 24, 2009 12:01 PM

Even More on Programming and Computational Thinking

Confluence... The Education column in the February 2009 issue of Communications of the ACM, Human Computing Skills: Rethinking the K-12 Experience, champions computational thinking in lieu of programming:

Through the years, despite our best efforts to articulate that CS is more than "just programming," the misconception that the two are equivalent remains. This equation continues to project a narrow and misleading image of our discipline -- and directly impacts the character and number of students we attract.

I remain sympathetic to this concern. Many people, including lost potential majors, think that CS == programming. I don't know any computer scientists who think that is true. I'd like for people to understand what CS is and for potential majors who end up not wanting to program for a living to know that there is room for them in our discipline. But pushing programming aside altogether is the wrong way to do that, and will do more harm than good -- even for non-computer scientists.

It seems to me that the authors of this column conflate CS with programming at some level, because they equate writing a program with "scholarly work" in computer science:

While being educated implies proficiency in basic language and quantitative skills, it does not imply knowledge of or the ability to carry out scholarly English and mathematics. Indeed, for those students interested in pursuing higher-level English and mathematics, there exist milestone courses to help make the critical intellectual leaps necessary to shift from the development of useful skills to the academic study of these subjects. Analogously, we believe the same dichotomy exists between CT, as a skill, and computer science as an academic subject. Our thesis is this: Programming is to CS what proof construction is to mathematics and what literary analysis is to English.

In my mind, it is a big -- and invalid -- step from saying "CT and CS are different" to saying that programming is fundamentally the domain of CS scholars. I doubt that many professional software developers will agree with a claim that they are academic computer scientists!

I am familiar with Peter Naur's Programming as Theory Building, which Alistair Cockburn brought to the attention of the software development world in his book, Agile Software Development. I'm a big fan of this article and am receptive to the analogy; I think it gives us an interesting way to look at professional software development.

But I think there is more to it than what Naur has to say. Programming is writing.

Back to the ACM column. It's certainly true that, at least for many areas of CS, "The shift to the study of CS as an academic subject cannot ... be achieved without intense immersion in crafting programs." In that sense, Naur's thesis is a good fit. But consider the analogy to English. We all write in a less formal, less intense way long before we enter linguistic analysis or even intense immersion in composition courses. We do so as a means of communicating our ideas, and most of us succeed quite well doing so without advanced formal training in composition.

How do we reach that level? We start young and build our skills slowly through our K-12 education. We write every year in school, starting with sentences and growing into larger and larger works as we go.

I recall that in my junior year English class we focused on the paragraph, a small unit of writing. We had written our first term papers the year before, in our sophomore English course. At the time, this seemed to me like a huge step backward, but I now recognize this as part of the Spiral pattern. The previous year, we had written larger works, and now we stepped back to develop further our skills in the small after seeing how important they were in the large.

This is part of what we miss in computing: the K-8 or K-12 preparation (and practice) that we all get as writers, done in the small and across many other learning contexts.

Likewise, I disagree that proof is solely the province of mathematics scholars:

Just as math students come to proofs after 12 or more years of experience with basic math, ...

In my education, we wrote our first proofs in geometry -- as sophomores, the same year we wrote our first term papers.

I do think one idea from the article and from the CT movement merits more thought:

... programming should begin for all students only after they have had substantial practice acting and thinking as computational agents.

Practice is good! Over the years, I have learned from CS colleagues many effective ways to introduce students, whether at the university or earlier, to ideas such as sorting algorithms, parallelism, and object-oriented programming through role play and other active techniques -- through the learner acting as a computational agent. This is an area in which the Computational Thinking community can contribute real value. Projects such as CS Unplugged have already developed some wonderful ways to introduce CT to young people.

Just as we grow into more mature writers and mathematical problem solvers throughout our school years, we should grow into more mature computational thinkers as we develop. I just don't want us to hold programming out of the mix artificially. Instead, let's look for ways to introduce programming naturally where it helps students understand ideas better. Let's create languages and build tools to make this work for students.

As I write this, I am struck by the different noun phrases we are using in this conversation. We speak of "writers", not "linguistic thinkers". People learn to speak and write, to communicate their ideas. What is it that we are learning to do when we become "computational thinkers"? Astrachan's plea for "computational doing" takes on an even more urgent tone.

Alan Kay's dream for Smalltalk has always been that children could learn to program and grow smoothly into great ideas, just as children learn to read and write English and grow smoothly into the language and great ideas of, say, Shakespeare. This is a critical need in computer science. The How to Design Programs crowd have shown us some of the things we might do to accomplish this: language levels, tool support, thinking support, and pedagogical methods.

Deep knowledge of programming may not be essential to understanding all of basic computer science, but some knowledge of programming adds so very much even to our basic ideas.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

February 19, 2009 4:32 PM

More on Programming and Computational Thinking

I've heard from a few of you about my previous post. People have strong feelings in both directions. If you haven't seen it already, check out Mark Guzdial's commentary on this topic. Mark explores a bit further what it means to understand algorithms and data structures without executing programs, and perhaps without writing them. I'm glad that he is willing to stake out a strong position on this issue.

Those of you who receive inroads, SIGCSE's periodical, should watch for a short article by Owen Astrachan in the next issue, called "Cogito Ergo Hack". Owen hits the target spot-on: without what he calls "computational doing", we miss a fantastic opportunity to help people understand computational ideas at a deeper level by seeing them embodied in something they themselves create. Computational doing might involve a lot of different activities, but programming is one of the essential activities.

We need as many people as possible, and especially clear thinkers and writers like Mark and Owen, to ask the questions and encourage others to think about what being a computational thinker means. Besides, catchy phrases like "computational doing" and "Cogito Ergo Hack" are likely to capture the attention of more people than my pedestrian prose!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

February 17, 2009 9:31 AM

Computational Thinking without Programming

Last week, I read a paper on changes in how statistics is taught. In the last few decades, more schools have begun to teach stats conceptually, so that the general university graduate might be able to reason effectively about events, variation, and conditions. This contrasts with the older style in which it was taught as a course for mathematicians, with the focus on formulas and mastery of underlying theorems. The authors said that the new style emphasized statistical thinking, rather than traditional statistics.

For some reason, this brought to mind the buzz around "computational thinking" in the CS world. I have to be honest: I don't know exactly what people mean when they talk about computational thinking. I think the idea is similar to what the stats guys are talking about: using the ideas and methods discovered in computer science to reason about processes in the real world. I certainly agree that most every college graduate could benefit from this, and that popularizing these notions might do wonders for helping students to understand why CS is important and worth considering as a major and career.

But when I look at the work that passes under the CT banner, I have a hard time distinguishing computational thinking from what I would call a publicly-accessible view of computer science. Maybe that's all it is: an attempt to offer a coherent view of CS for the general public, in a way that all could begin to think computationally.

The one thing that stands out in all the papers and presentations about CT I've seen is this: no programming. Perhaps the motivation for leaving programming out of the picture is that people find it scary and hard, so omitting it makes for a more palatable public view. Perhaps some people think that programming isn't an essential part of computational thinking. If it's the former, I'm willing to cut them some slack. If it's the latter, I disagree. But that's not surprising to readers here.

While thinking on this, I came across this analogy: computational thinking with no programming is like statistical thinking without any mathematics. That seems wrong. We may well want stats courses aimed at the general populace to emphasize application and judgment, but I don't think we want students to see statistics devoid of any calculation. When we reason about means and variance, we should probably have some idea how these terms are grounded in arithmetic that people understand and can do.
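That arithmetic is worth making concrete. A small sketch of my own (the scores are made up, not from the stats paper) shows how little calculation lies beneath the terms:

```python
# The arithmetic grounding "mean", "variance", and "standard deviation".

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    """Population variance: average squared distance from the mean."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def std_dev(xs):
    return variance(xs) ** 0.5

scores = [70, 80, 90, 100]
print(mean(scores), variance(scores), std_dev(scores))
```

For these four scores, the mean is 85, the population variance is 125, and the standard deviation is about 11.2. Whether a general-education student needs to carry out this calculation, or only to know that it exists, is exactly the question at issue.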

When I tried my analogy out on a colleague, he balked. We don't need much math to reason effectively in a "statistical" way, he said; that was precisely the problem we had before. Is he overreacting? How well can people understand the ideas of mean and standard deviation without knowing how to compute them? How little math can they know and still reason effectively? He offered as an example the idea of a square root. We can understand what a square root is and what it means without knowing how to calculate one by hand. Nearly every calculator has a button for the square root, and most students' calculators these days have buttons for the mean -- and maybe the variance; I'll have to look at my high school daughter's low-end model to see.
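For concreteness, here is the arithmetic hiding behind those calculator buttons -- a small Python sketch of my own, not anything from the stats paper:

```python
import math

def mean(xs):
    """The mean is just a sum divided by a count."""
    return sum(xs) / len(xs)

def std_dev(xs):
    """The (population) standard deviation: the square root of the
    average squared deviation from the mean."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

scores = [70, 80, 90, 100]
print(mean(scores))     # 85.0
print(std_dev(scores))  # 11.180...
```

A student who can follow those half-dozen lines of arithmetic has some grounding for what the button on the calculator is doing, even if they never compute one by hand again.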

For the most part, my colleague feels similarly about programming for everyone. His concern with CT is not eliminating programming but what would be taught in lieu of programming. Many of the articles we have seen on CT seem to want to replace programming with definitions and abstractions that are used by professional computer scientists. The effect is to replace teaching a programming language with teaching a relatively formal "computational thinking" language. In his mind, we should replace programming with computational skills and ideas that are useful for people involved in everyday tasks. He fears that, if we teach CT as if the audience is a group of budding computer scientists, we will make the same mistake that mathematics often has: teaching for the specialists and finding out that "everyone else is rotten at it".

The stats teaching paper I read last week says all the right things. I should look at one of the newer textbooks to see how well they carry it out, and how much of the old math skills they still teach and require.

I'm still left thinking about how well people can think computationally without learning at least a little programming. To the extent that we can eliminate programming, how much of what is left is unique to computer science? How far can we take the distinction between formula and algorithm without seeing some code? And what is lost when we do? There is something awfully powerful about watching a program in action, and being able to talk about and see the difference between dynamic behavior and static description.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

February 11, 2009 4:42 PM

Another Take on Embracing Failure

A student reader sent me a message to say that he didn't click with my talk of 'embracing' failure. He prefers to think in terms of pushing through failure to reach success. So I found it interesting when I ran across a blog entry with a similar theme (via The Third Bit) on the same day:

I'm writing this in order to talk about failure and coping.... I've got the failure down. So what about coping? ... failure is really information. You don't fail and therefore become a failure. You fail and in so doing you learn and gain more understanding.

... I've come to understand that if I'm not making mistakes it means I'm not trying hard enough, and I'm not pushing myself far enough.

In this article, David Humphrey recounts the details of a vexing experience trying to fix a bug. Humphrey uses the bug -- unresolved by the end of the article -- to explain how he feels about failing repeatedly. He isn't broken; he is empowered by the information he has captured. Whether we call that embracing failure -- accepting both the inevitability of failing and the knowledge we gain by failing -- or coping, or gathering information, it seems to be an important trait of people who enjoy computing.

If you'd like to read more on Humphrey's ideas after he fixed his bug, check out his follow-up article.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 09, 2009 4:31 PM

Embracing Failure

A while back, I clipped this quote from a university publication, figuring it would decorate a blog entry some day:

The thing about a liberal arts education ... is it prepares you to fail successfully and learn from that failure. ... You will all fail. That's OK.

-- Jim Linahon

Linahon is an alumnus of our school who works in the music industry. He gave a talk on campus for students aspiring to careers in the industry, the theme of which was, "Learn to fail. It happens."

More recently, I ran across this as the solution to a word puzzle in our local paper:

You've got to jump off cliffs all the time and build your wings on the way down.

-- Ray Bradbury

Bradbury was one of my favorite authors when I was growing up (The Martian Chronicles mesmerized me!) This quote goes farther than Linahon's: what other people call failure is learning to fly. Do not fear.

A comment made at the Rebooting Computing summit about "embracing failure" brought these quotes back to mind, along with an e-mail message Rich Pattis wrote sometime last year. Rich talked about how hard our discipline must feel to beginners, because it is a discipline learned almost wholly by failure. Learning to program can seem like nothing more than an uninterrupted sequence of failures: syntax errors, logic errors, boundary cases, ugly interfaces, .... I'm not a novice any more, but I still feel the constant drip of failure whenever I work on a piece of code I don't already understand well.

The thing is, I kinda like that feeling -- the challenge of scaling a mountain of code. My friends who program can at least say that they don't mind it, and the best among them seem to thrive in such an environment. I think that's part of what separates programmers from less disturbed people.

Then again, repeated failure is a part of learning many things. Learning to play a musical instrument or a sport requires repeated failure for most people. Hitting a serve in tennis, or a free throw on the hardcourt, or a curve ball in baseball -- the only way to learn is by doing it over and over, failing over and over, until the mind and body come together in a memory that makes success a repeatable process. This seems to be an accepted part of athletics, even among the duffers who only play for fun. How many people in America are on a golf course this very day, playing the game poorly but hoping -- and working -- to get better?

Why don't we feel the same way about academics, and about computer programming in particular? Some small number seem to, maybe the 2% that Knuth said are capable of getting it.

I have heard some people say that in sports we have created mechanisms for "meaningful failure". I'm not sure exactly what that means, but I suspect that if we could build tools for students and set problems before them that give them a sense of meaningful failure, we'd probably not scare off so many people from our early courses. I suspect that this is part of what some people mean when they say we should make our courses more fun, though thinking in terms of meaningful failures might give us a better start on the issue than simple mantras about games and robots.

I don't think just equating programming to sports is enough. Mitch Wand sent a message to the PLT Scheme mailing list this weekend on a metaphor he has been using to help students want to stick to the design recipes of How to Design Programs:

In martial arts, the first thing you learn is to do simple motions very precisely. Ditto for ballet, where the first thing you learn is the five positions.

Once those are committed to muscle memory, you can go on to combinations and variations.

Same deal for programming via HtDP: first practice using the templates until you can do it without thinking. Then you can go on to combinations and variations.

I like the analogy and have used a similar idea with students in the past. But my experience is that this only works for students who want to learn to program -- or martial arts or ballet, for that matter. If you start with people who want to go through the process of learning, then lots of things can work. The teacher just needs to motivate the student every once in a while to stick with the dream. But it's already their dream.

Maybe the problem is that people want to play golf or learn the martial arts -- for whatever social, business, or masochistic reasons -- but that most people don't want to learn to program? Then our problem comes back to a constant theme on this blog: putting a feeling of power in people's hands when we show them programming, so that they want to endure the pain.

One last quote, in case you are ever looking for a literary way to motivate students to take on tough challenges rather than little ones that acquiesce easily and make us feel good sooner:

What we choose to fight is so tiny!
What fights us is so great!
...
When we win it's with small things,
and the triumph itself makes us small.
...
Winning does not tempt that man.
This is how he grows: by being defeated, decisively,
by constantly greater beings.

This comes from Rainer Maria Rilke's The Man Watching. What a marvelous image: growing strong by being beaten -- decisively, no less -- by ever greater opponents. I'm sure you professional programmers who have been tackling functional programming, continuations, Scheme, Haskell, and Erlang these last few years know just the feeling Rilke describes, deep in the marrow of your bones.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

January 19, 2009 9:53 AM

Rebooting the Public Image of Computing

In addition to my more general comments on the Rebooting Computing summit, I made a lot of notes about the image of the discipline, which was, I think, one of the primary motivations for many summit participants. The bad image that most kids and parents have of careers in computing these days came up frequently. How can we make computing as attractive as medicine or law or business?

One of my table mates told us a story of seeing brochures for two bioinformatics programs at the same university. One was housed in the CS department, and the other was housed with the life sciences. The photos used in the two brochures painted strikingly different images in terms of how people were dressed and what the surroundings looked like. One looked like a serious discipline, while the other was "scruffy". Which one do you think ambitious students will choose? Which one will appeal to the parents of prospective students? Which one do you think was housed in CS?

Sometimes, the messages we send about our discipline are subtle, and sometimes not.

Too often, what K-12 students see in school these days under the guise of "computing" is applications. It is boring, full of black boxes with no mystery. It is about tools to use, not ideas for making things. After listening to several people relate their dissatisfaction with this view of computing, it occurred to me that one thing we might do to immediately improve the discipline's image is to get what currently passes for computing out of our schools. It tells the wrong stories!

The more commonly proposed solution is to require CS in K-12 schools and do it right. Cutting computing would be easier... Adding a new requirement to the crowded K-12 curriculum is a tall task fraught with political and economic roadblocks. And, to be honest, our success in presenting a good image of computing through introductory university courses doesn't fill me with confidence that we are ready to teach required CS in K-12 everywhere.

Don't take any of these thoughts too seriously. I'm still thinking out loud, in the spirit of the workshop. But I don't think there are any easy or obvious answers to the problems we face. One thing I liked about the summit was spending a few days with many different kinds of people who care about the problems and who all seem to be trying something to make things better.

The problems facing computing are not just about image. Some people seem to think so, but they aren't. Yet image is part of the problem. And the stories we tell -- explicitly and implicitly -- matter.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 17, 2009 3:09 PM

Notes on the Rebooting Computing Summit

the Rebooting Computing logo

[This is the first of several pieces on my thoughts at the Rebooting Computing summit, held January 12-14 in San Jose, California. Later articles talk about image of CS, thoughts on approach, thoughts on personal stories and, as always, a little of this and that.]

This was unusual: a workshop with no laptops out. Some participants broke the rule almost immediately, including some big names, but I decided to follow my leaders. That meant no live note-taking, except on paper, nor any interleaved blogging. I decided to extend the opportunity and make it an Internet-free trip, even back in the hotel!

The workshop brought together 225 people or so from all parts of computer science: industry, universities, K-12 education, and government. We had a higher percentage of women than the discipline as a whole, which made sense given our goals, and a large international contingent. Three Turing Award winners joined in: Alan Kay, Vinton Cerf, and Fran Allen, who was an intellectual connection to the compiler course I put on hold for a few days in order to participate. There were also a number of other major contributors to computing, people such as Grady Booch, Richard Gabriel, Dan Ingalls, and the most famous of the three Eugenes in the room, Gene Spafford.

Most came without knowing what in particular we would do for these three days, or how. The draw was the vision: a desire to reinvigorate computing everywhere.

a photo of the Computer History Museum's Difference Engine

The location was a perfect backdrop, the Computer History Museum. The large printout of a chess program on old green and white computer paper hanging from the rafters near our upper-floor meeting room served as a constant reminder of a day when everything about computers and programming seemed new and exciting, when everything was a challenge to tackle. The working replica of Charles Babbage's Difference Engine on the main floor reminded us that big dreams are good -- and may go unfulfilled in one's lifetime.

The Introduction

Peter Denning opened with a few remarks on what led him to organize the workshop. He expressed his long-standing dissatisfaction with the idea that computer science == programming. Dijkstra famously proclaimed that he was a programmer, and proud of it, but Denning always knew there was something bigger. What is missing if we think only of programming? Using his personal experience as an example, he claimed that the outliers in the general population who enter CS have invested many hours in several kinds of activity: math, science, and engineering.

Throughout much of the history of computer science, people made magic because they didn't know what they were doing was impossible. This gave rise to the metaphor driving the workshop and his current effort -- to reboot computing, to clean out the cruft. We have non-volatile memory, though, so we can start fresh with wisdom accumulated over the first 60-70 years of the discipline. (Later the next day, Alan Kay pointed out that rebooting leaves the same operating system and architecture in place, and that what we need to do is redesign them from the bottom up, too!)

A spark ignited inside each of us once that turned us onto computing. What was it? Why did it catch? How? One of the goals of the summit was to find out how we can create the same conditions for others.

The Approach

The goal of the workshop was to change the world -- to change how people think about computing. The planning process destroyed the stereotypes that the non-CS facilitators held about CS people.

The workshop was organized around Appreciative Inquiry (AI, but not the CS one -- or the one from the ag school!), a process I first heard about in a paper by Kent Beck. It uses an exploration of positive experiences to help understand a situation and choose a course of action. For the summit, this meant developing a shared positive context about computing before moving on to choosing specific goals and plans.

Our facilitators used an implementation they characterized as Discovery-Dream-Design-Destiny. The idea was to start by examining the past, then envision the future, and finally to return to the present and make plans. Understanding the past helps us put our dreams into context, and building a shared vision of the future helps us to convert knowledge into actions.

One of our facilitators, Frank Barrett, a former professional jazz musician, said that an old saying from his past career is "Great performances create great listeners." He believes, though, that the converse is also true: "Great listeners create great performances." He encouraged us to listen to one another's stories carefully and look for common understanding that we could convert into action.

Frank also said that the goal of the workshop is really to change how people talk, not just think, about computing. Whenever you propose that kind of change, people will see you as a revolutionary -- or as a lunatic.

a photo of one of the workshop posters drawn live

An unusual complement to this humanistic approach to the workshop was a graphic artist who was recording the workshop live before our eyes, in image and word. Even when the record was mostly catchphrases that could have become part of a slide presentation, the colors and shapes added a nice touch to the experience.

The Spark

What excited most of us about computing was solving a problem -- having something that was important to us, sometimes bigger than we could do easily by hand, and doing it with a computer. Sometimes we enjoyed the making of things that could serve our needs. Sometimes we were enlivened by making something we found beautiful.

Still, a lot of people in the room "stumbled" or "drifted" into CS from other places. Those words carry quite different images of people's experiences. However subtle the move, they all seemed to have been working on real problems.

One of the beauties of computer science is that it is in and about everything. For many, computing is a lens through which to study problems and create solutions. Like mathematics, but more.

One particular comment made the first morning stood out in my mind. The gap between what people want to make with a computer and what they can reasonably make has widened considerably in the last thirty years. What they want to make is influenced by what they see and use every day. Back in 1980 I wanted to write a program to compute chess ratings, and a bit of BASIC was all I needed. Kids these days walk around with computational monsters in their pockets, sometimes a couple, and their desires have grown to match. Show them Java or Python, let alone BASIC, and they may well feel deflated before considering just what they could do.
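As an aside, the rating program itself really was just a little arithmetic. Here is a sketch of the standard Elo update in Python -- the textbook formula, not necessarily the one I coded in BASIC back in 1980:

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    """Return player A's new rating after one game against player B.
    score_a is 1 for a win, 0.5 for a draw, 0 for a loss.
    k is the usual update factor; 32 is a common choice."""
    # Expected score for A, based on the rating difference.
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    # Move A's rating toward the actual result.
    return rating_a + k * (score_a - expected_a)

print(round(elo_update(1500, 1500, 1)))  # 1516: beating an equal adds k/2
```

The whole thing fits in a dozen lines, which is the point: in 1980, the gap between what I wanted to make and what a beginner could make was that small.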

Computing creates a new world. It builds new structures on top of old, day by day. Computing is different today than it was thirty years ago -- and so is the world. What excited us may well not excite today's youth.

What about what excited us might?

(Like any good computer scientist, I have gone meta.)

Educators cannot create learning. Students do that. What can educators provide? A sense of quality. What is good? What is worth doing? Why?

History

A lot of great history was made and known by the people in this room. The rest of us have lived through some of it. Just hearing some of these lessons reignited the old spark inside of me.

Consider Alan Turing's seminal 1936 paper on the Halting Problem. Part of the paper is Turing "thumbing his nose" at his skeptical mathematician colleagues, saying "The very question is computation. You can't escape it."

One time, Charles Babbage was working with his friend, the astronomer Herschel. They were using a set of astronomy tables to solve a problem, and Babbage became frustrated by errors in the tables. He threw the book at a wall and said, "I wish these calculations had been executed by steam!" Steam.

Ada Lovelace referred to Babbage's Analytical Engine as a machine that "weaves patterns of ideas".

Alan Kay reminded us of the ultimate importance of what Turing taught us: If you don't like the machine you have, you can make the machine you want.

AI -- the computing kind, which was the source of many of my own initial sparks -- has had two positive effects on the wider discipline. First, it has always offered a big vision of what computing can be. Second, even when it struggles to reach that vision, it spins off new technologies along the way.

At some point in the two days, Alan Kay chastised us. Know your history! Google has put all seventy-five or so of Douglas Engelbart's papers at your fingertips. Do you even type the keywords into the search box, let alone read them?

About Computing

A common thread throughout the workshop was, what are the key ideas and features of computing that we should not lose as we move forward? There were some common answers. One was that people want to solve real problems for real people, finding ideas in experience and applications. Another was the virtue of persistence, which one person characterized as "embracing failure" -- a twisted but valuable perspective. Yet another was the idea of "no more black boxes", whether hardware or software: look inside, and figure out what makes it tick. None of these are unique to CS, but they are in some sense essential to it.

Problem solving came up a lot, too. I think that people in most disciplines "solve problems" and so would claim problem solving as an essential feature of the discipline. Is computer science different? I think so. CS is about the process of solving problems. We seek to understand the nature of algorithms and how they manipulate information. Whichever real problem we have just solved, we ask, "How?" and "Why?" We try to generalize and understand the data and the algorithm.

Another common feature that many thought essential to computing is that it is interdisciplinary. CS reaches into everything. This has certainly been one of the things that has attracted me to computing all these years. I am interested in many things, and I love to learn about ideas that transcend disciplines -- or that seem to but don't. What is similar and dissimilar between two problems or solutions? Much of AI comes down to knowing what is similar and what is not, and that idea held my close attention for more than a decade.

While talking with one of my table mates at the summit, I realized that this was one of the biggest influences my Ph.D. advisor, Jon Sticklen, had on me. He approached AI from all directions, from the perspectives of people solving problems in all disciplines. He created an environment that sought and respected ideas from everywhere, and he encouraged that mindset in all who studied in his lab.

Programming

While I respect Denning's dissatisfaction with the idea that computer science == programming, I don't think we should lose the idea of programming whatever we do to reinvigorate computing. Whatever else computing is, in the end, it all comes down to a program running somewhere.

When it was my turn to describe part of my vision for the future of computing, I said something like this:

When they have questions, children will routinely walk to the computer and write a program to find the answers, just as they now use Google, Wikipedia, or IMDB to look up answers.

I expected to have my table mates look at me funny, but my vision went over remarkably well. People embraced the idea -- as long as we put it in the context of "solving a problem". When I ventured further to using a program "to communicate an idea", I met resistance. Something unusual happened, though. As the discussion continued, every once in a while someone would say, "I'm still thinking about communicating an idea with a program...". It didn't quite fit, but they were intrigued. I consider that progress.

Closing

At the end of the second day, we formed action groups around a dozen or so ideas that had a lot of traction across the whole group. I joined a group interested in using problem-based learning to change how we introduce computing to students.

That seemed like a moment when we would really get down to business, but unfortunately I had to miss the last day of the summit. This was the first week of winter semester classes at my university, and I could not afford to miss both sessions of my course. I'm waiting to hear from other members of my group, to see what they discussed on Wednesday and what we are going to do next.

As I was traveling back home the next day, I thought about whether the workshop had been worth missing most of the first week of my semester. I'll talk more about the process and our use of time in a later entry. But whatever else, the summit put a lot of different people from different parts of the discipline into one room and got us talking about why computing matters and how we can help to change how the world thinks -- and talks -- about it. That was good.

Before I left for California, I told a colleague that this summit held the promise of being something special, and that it also bore the risk of being the same old thing, with visionaries, practitioners, and career educators chasing their tails in a vain effort to tame this discipline of ours. In the end, I think it was -- as so many things turn out to be -- a little of both.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 07, 2009 6:18 PM

Looking Ahead -- To Next Week

The latest edition of inroads, the quarterly publication of SIGCSE, arrived in my mailbox yesterday. The invited editorial was timely for me, as I finally begin to think about teaching this spring. Alfred Aho, co-author of the dragon book and creator of several influential languages and programming tools, wrote about Teaching the Compilers Course. He didn't write a theoretical article or a solely historical one, either; he's been teaching compilers every semester for the last many years at Columbia.

As always, I enjoyed reading what an observant mind has to say about his own work. It's comforting to know that he and his students face many of the same challenges with this course as my students and I do, from the proliferation of powerful, practical tools to the broad range of languages and target machines available. (He also faces a challenge I do not -- teaching his course to 50-100 students every term. Let's just say that my section this spring offers ample opportunity for familiarity and one-on-one interaction!)

Two of Aho's ideas are now germinating in my mind as I settle on my course. The first is something that has long been a feature of my senior project courses other than compilers: an explicit focus on the software engineering side of the project. Back when I taught our Knowledge-Based Systems course (now called Intelligent Systems) every year, we paid a lot of attention to the process of writing a large program as a team, from gathering requirements and modeling knowledge to testing and deploying the final system. We often used a supplementary text on managing the process of building KBS. Students produced documents as well as code and evaluated team members on their contributions and efforts.

When I moved to compilers five or six years ago after a year on sabbatical, I de-emphasized the software engineering process. First of all, I had smaller classes and so no teams of three, four, or five. Instead, I had pairs or even individuals flying solo. Managing team interactions became less important, especially when compared to the more intellectually daunting content of the compiler course. Second, the compiler students tended to skew a little higher academically, and they seemed to be able to handle more effectively the challenge of writing a big program. Third, maybe I got a little lazy and threw myself into the fun content of the course, where my own natural inclinations lie.

Aho has his students work in teams of five and, in addition to writing a compiler and demoing at the end of the semester:

  • write a white paper on their source language,
  • write a tutorial on using it, and
  • close with a substantial project report written by every member of the team.

This short article has reawakened my interest in having my students -- many of whom will graduate into professional careers developing software and managing its development -- attend more to process. I'll keep it light, but these three documents (white paper, tutorial, and project report) will provide structure to tasks the students already have to do, such as to understand their source language well and to explore the nooks and crannies of its use.

The second idea from Aho's article is to have students design their own language to compile. This is something I have never done. It is also a feature that brings more value to the writing of a white paper and a tutorial for the language. I've always given students a language loosely of my own design, adapted from the many source languages I've encountered in colleagues' courses and professional experience. When I design the language, I have to write specs and clarifications; I have to code sample programs to demonstrate the semantics of the language and to test the students' compilers at the end of the semester.

I like the potential benefits of having students design their own language. They will encounter some of the issues that the designers of the languages they use -- such as Java, C++, and Ada -- faced. They can focus their language on a domain or technology niche of interest to them, such as music and gaming, or target it at a multi-core machine. They may even care more about their project if they are implementing an engine that makes their own creation come to life.

If I adopt these course features, they will shift the burden between instructor and student in some unusual ways. Students will have to invest more energy in the languages of the course and write more documentation. I will have to learn about their languages and follow their projects much more intimately as the semester proceeds, in order to provide the right kind of guidance at the right moments. But this shift, while demanding different kinds of work on both our parts, should benefit both of us. When I design more of the course upfront, I have greater control over how the projects proceed. This gives me a sense of comfort but deprives the students of experiences with language design and evolution that will serve them well in their careers. The sense of comfort also deprives me of something: the opportunity to step into the unknown in real time and learn. Besides, my students often surprise me with what they have to teach me.

As I said, I'm just now starting to think about my course in earnest after letting Christmas break be a break. And I'm starting none too soon -- classes begin Monday. Our first session will not be until Thursday, however, as I'll begin the week at the Rebooting Computing summit. This is not the best timing for a workshop, but it does offer me the prospect of a couple of travel days away from the office to think more about my course!


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

January 02, 2009 10:42 PM

Fly on the Wall

A student wrote to tell me that I had been Reddited again, this time for my entry reflecting on this semester's final exam. I had fun reading the comments. Occasionally, I found myself thinking about how a poster had misread my entry. Even a couple of other posters commented as much. But I stopped myself and remembered what I had learned from writers' workshops at PLoP: they had read my words, which must stand on their own. Reading over a thread that discusses something I've written feels a little bit like a writers' workshop. As an author, I never know how others might interpret what I have written until I hear or read their interpretation in their own words. Interposing clarifying words is a tempting but dangerous trap.

Blog comments, whether on the author's site or on a community site such as Reddit, do tend to drift far afield from the original article. That is different from a PLoP workshop, in which the focus should remain on the work being discussed. In the case of my exam entry, the drift was quite interesting, as people discussed accumulator variables (yes, I was commenting on how students tend to overuse them; they are a wonderful technique when used appropriately) and recursion (yes, it is hard for imperative thinkers to learn, but there are techniques to help...). Well worth the read. But I could also see that sometimes a subthread comes to resemble an exchange in the children's game Telephone. Unless every commenter has read the original article -- in this case, mine -- the discussion tends to drift monotonically away from the content of the original as it loses touch with each successive post. Frankly, that's all right, too. I just hope that I am not held accountable for what someone at the end of the chain says I wrote...


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

December 30, 2008 9:47 AM

Feeling Incompetent

My daughters received a new game from their mom for Christmas. It's called Apples to Apples. Each round, one of the players draws a card with an adjective on it. The rest of the players choose noun cards from their hands that match the adjective. The judge chooses one of the nouns as the best match, and the player who played it wins that round. The objective of the game is to win the most rounds.

I could tell you many things more about the game and how it's played in my family, but there is really only one thing to say:

I stink at this game.

If I am in a three-person game, I finish third. Four players? Fourth. You name the number of players, and I can tell you where I'll finish. Last.

It doesn't seem to matter with whom I play. Recently, I've been playing with my wife and daughters. Last night, my wife's brother joined us. My wife and I have played this game before, with friends from my office. The result is always the same.

My weakness may be heightened by the lobbying that can be part of the game. Players are allowed to try to sell their answers to the judge. I'm not a good salesman and besides don't really like to sell. But that doesn't account for my losing. If we play in silence, I lose.

It's not that I'm bad at all word games. I like many word games and generally do well playing them. If nothing else, I get better after I play a game for a while, by figuring out something about the strategy of the game and the players with whom I play. But in this game, the harder I try to play well, the worse I seem to do.

This must be how students feel in class sometimes. There is some consolation -- that I might become more empathetic as a result of feeling this way -- but, to be honest, it's just a bad feeling.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

December 28, 2008 9:34 PM

Small Surprises While Grading

Early last week, I spent the last couple of days before Christmas wrapping up the grades on my Programming Languages course for fall semester. While grading the final exam, I was surprised by something on almost every problem. Here are a few that stand out:

cons and list

... are not the same. We spent some time early in the semester looking at how cons allocates a single new cell, and list allocates one cell per argument. Then we used them in a variety of ways throughout the rest of the course. After fifteen weeks programming in Scheme, how can so many people confuse them?
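For readers who think in another language, here is a rough Python analogue of cons cells that I sometimes sketch on the board. The class and helper names are my own invention, not anything from our course:

```python
class Cons:
    """A single cons cell: one new pair per call."""
    def __init__(self, car, cdr):
        self.car = car
        self.cdr = cdr

def cons(car, cdr):
    return Cons(car, cdr)          # allocates exactly one new cell

def scheme_list(*args):
    """list allocates one cell per argument, ending in '()' (here: None)."""
    result = None
    for x in reversed(args):
        result = Cons(x, result)
    return result

# (cons 1 '(2 3)) extends an existing list with ONE new cell...
tail = scheme_list(2, 3)
a = cons(1, tail)
assert a.cdr is tail               # the old list is shared, not copied

# ...while (list 1 '(2 3)) builds a TWO-element list whose second
# element is the list (2 3) itself.
b = scheme_list(1, scheme_list(2, 3))
assert isinstance(b.cdr.car, Cons) and b.cdr.cdr is None
```

The difference in allocation is also a difference in shape, which is usually how the confusion bites students.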

Overuse of accumulator variables

... is endemic to undergraduate students learning to program functionally. Two of the exam problems asked for straightforward procedures following the structural recursion pattern. These problems are about as simple as examples of structural recursion can be: The largest value in a binary tree is the larger of

  • the largest value in the left subtree, and
  • the largest value in the right subtree.

The zip of two lists is a list with a list of their cars consed onto the zip of their cdrs. Many students used an accumulator variable to solve both problems. Some succeeded, with unnecessarily complex code, and some solutions buckled under the weight of the complexity.
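For comparison, here is what the two procedures look like with no accumulator variable in sight, sketched in Python. The tree representation -- a (value, left, right) triple or None -- is my own choice for the sketch:

```python
def largest(tree):
    """Largest value in a binary tree: (value, left, right) or None."""
    if tree is None:
        return float('-inf')       # identity element for max
    value, left, right = tree
    # The answer is the larger of the root and the answers for the subtrees.
    return max(value, largest(left), largest(right))

def zip_lists(xs, ys):
    """A list of the cars consed onto the zip of the cdrs."""
    if not xs or not ys:
        return []
    return [[xs[0], ys[0]]] + zip_lists(xs[1:], ys[1:])

tree = (5, (9, None, None), (3, None, (7, None, None)))
assert largest(tree) == 9
assert zip_lists([1, 2, 3], ['a', 'b', 'c']) == [[1, 'a'], [2, 'b'], [3, 'c']]
```

Each procedure mirrors the structure of its data -- that is the whole pattern.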

Habits are hard to break. I have colleagues who tell me that OOP is easy. I look at their code and say, "yes, but...." The code isn't really OO; it just bears the trappings of classes and methods. Sequential, imperative programming habits run deep. An accumulator variable is often a crutch used by a sequential programmer to avoid writing a functional solution. I see that in my own code occasionally -- the accumulator is as often a code smell as a solution.

At a time when the world is looking to parallel computing in a multicore world, we need to find a way to change the habits programmers form, either by teaching functional programming better or by teaching functional programming sooner so that students form different habits.

Scheme procedure names

... are different than primitive procedure names in most other languages. They are bound to their values just like every other symbol. They can be re-bound, either locally or globally, using the same mechanisms used to bind values to any other names. This means that in this code:

    (lambda (f g)
      (lambda (x)
        (+ (f x) (g x))))

the + symbol is a free variable, bound at the top level to the primitive addition operator. After we talked about this idea several times through the semester, I threw the students a bone on the final with a question that asked them to recognize the + symbol as a free variable in a piece of code just like this one. The bone sailed past most of them.
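A rough Python analogue of the idea, using an explicit environment dictionary of my own construction to stand in for Scheme's top-level bindings:

```python
import operator

# The top-level environment binds the symbol '+' to primitive addition.
env = {'+': operator.add}

def apply_op(name, x, y, env):
    """Look the operator up by name, just as Scheme looks up any symbol."""
    return env[name](x, y)

assert apply_op('+', 5, 3, env) == 8

# Rebind '+' locally, as (let ((+ -)) (+ 5 3)) would in Scheme:
local = {**env, '+': operator.sub}
assert apply_op('+', 5, 3, local) == 2
```

Nothing is special about the name +; it is just another binding, which is precisely the point the exam question was probing.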

Bound and free variables

... remain a tough topic for students to grasp, at least from my teaching. We spent several days in class talking about the idea of bound and free variables, and then writing code that could check a piece of code for bound and free variables. One of those sessions made a point of showing that occurs bound does not equal does not occur free, and that occurs free does not equal does not occur bound. For one thing, a variable could occur both bound and free in the same piece of code. For another, it might not occur at all! Yet when a final exam problem asked students to define an occurs-bound? procedure, several of them wrote the one-liner (not (occurs-free? x exp)). If only they knew how close they were... But they wrote that one-liner without understanding.
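A sketch in Python over a toy lambda-calculus representation -- my own, not the one from class -- shows why the one-liner fails. A variable is a string, a lambda is ('lambda', param, body), and an application is a (rator, rand) pair:

```python
def occurs_free(var, exp):
    if isinstance(exp, str):                 # a variable reference
        return exp == var
    if exp[0] == 'lambda':                   # ('lambda', param, body)
        _, param, body = exp
        return param != var and occurs_free(var, body)
    rator, rand = exp                        # an application
    return occurs_free(var, rator) or occurs_free(var, rand)

def occurs_bound(var, exp):
    if isinstance(exp, str):
        return False                         # a lone reference binds nothing
    if exp[0] == 'lambda':
        _, param, body = exp
        return (param == var and occurs_free(var, body)) \
            or occurs_bound(var, body)
    rator, rand = exp
    return occurs_bound(var, rator) or occurs_bound(var, rand)

# x occurs BOTH bound and free in ((lambda (x) x) x):
exp = (('lambda', 'x', 'x'), 'x')
assert occurs_free('x', exp) and occurs_bound('x', exp)

# y occurs NEITHER bound nor free in (lambda (x) x):
assert not occurs_free('y', ('lambda', 'x', 'x'))
assert not occurs_bound('y', ('lambda', 'x', 'x'))
```

The two counterexamples at the bottom are exactly the two cases the one-liner gets wrong.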

Syntactic abstraction

... is an idea that befuddles many of my students even after half a semester in which we work with the idea. Our Quiz 3 is tough for many of the students; it is often their lowest quiz grade of the course. In past semesters, though, students seemed to go home after being disappointed with their Quiz 3 score, hit the books, and come away with some understanding. This semester, several students came to the final exam with the same hole in their knowledge -- including students with two of the top three scores for the course. This makes me sad and disappoints me.

I can't do much to ensure that students will care enough to hit the books to overcome their disappointments, but I can change what I do. The next time I teach this course, I will probably have them start by working with for and while constructs in their own favorite languages. Maybe by stripping away the functional programming wrapping and the Scheme code we use to encounter these ideas, they will feel comfortable in a more familiar context and see the idea of syntactic abstraction to be really quite simple.

Postlude

Am I romanticizing the good old days, when men were men and all students went home and learned it all? Maybe a little, but I had a way to ground my nostalgia. I went back and checked the grades students earned in recent offerings of this course, which had very much the same content and structure. The highest score this semester was higher than the top score in the two most recent offerings, by 2-4%. Overall, though, grades were lower. In fact, after the top score, the entire group had shifted down a whole letter grade. I don't think the students of the recent past were that much smarter or better prepared than this group, but I do think they had different attitudes and expectations.

One piece of evidence for this conclusion was that this semester there were far more 0s in the final grid of grades. I even had a couple of 0s on quizzes, where students simply failed to show up. This is, of course, much worse than simply scoring lower, because one thing students can control is whether they do their work and submit assignments. As I wrote recently, each person must control what he or she can control. I am not sure how best to get juniors and seniors in college to adopt this mindset, but maybe I'll need to bring Twyla Tharp -- or Bobby Knight -- into my classroom.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

December 18, 2008 4:28 PM

You Are Here → X

the famous You Are Here → X picture

As I type, my students are taking the final exam in my programming languages course. A couple might prefer to be reading my blog than taking an exam, but perhaps not. I always enjoyed final exams as a student. They marked the end of something and offered a challenge.

My students can also rest comfortably tonight in the notion that their duties for the course are behind them, yet I still face grading the behemoth I am foisting on them right now. Even still, I am already beginning to put this course behind me, thinking back to what we've learned and ahead to what comes next for a few of us, the course on programming language translation.

In my mind, I keep coming back to a punch line I heard on TV the other night:

All I'm saying is, if you keep living and dying on whether or not a person changes, well... you're not gonna make it as a doctor, that's all.

This is Dr. Cox, the "sarcastic, bitter mentor" of the protagonist on the show Scrubs. I don't watch the show a lot, but I have seen this episode a few times, and it ends with a scene in which this line is the lone heartfelt moment among Cox's typical schtick. His protege is having a hard time accepting that one of his patients, who had sidestepped a cancer scare, lights up a cigarette as he leaves the hospital, fully intent on resuming a habit that may kill him. Cox wants the newbie to see that a doctor simply cannot take personally the behaviors of all his patients; all he can do is treat them and, as best he can, try to teach them how to be healthy. The rest is up to them.

Students are a lot like patients. They come to us for help -- not treatment, but presumably education. My role as "doctor" is, as best I can, to teach them how to think and act like a computer scientist or programmer. But not all students have the same commitment, or motivation, or resources as the rest. It's easy to get caught up in our own desire to teach, excite, and communicate and forget that not every student will leave the room inspired or changed. If a teacher lives and dies in his own mind on whether or not every student in a class "gets it" or leaves the room changed, well, he is going to have a long and painful life in the classroom. Every course will seem a failure.

The news is not all that bleak, though. Some students leave the room inspired. That energy can carry me a long way. And most students leave a course changed, if only a little bit. For some, that may be all the farther it goes. But for others, that little change is a seed waiting for the right conditions to come along some time in the future. At that moment, it will bloom into something no one can predict.

I guess this week I'm feeling a bit like Uncle Bob, who has been writing a lot of code lately -- hurray! -- but is disappointed in his performance. Programmers think about what it is like to program in an ideal world, just as teachers idealize what will happen in the classroom. When they get into the trenches, though, they encounter their own weaknesses and frailty. I run into that as a programmer sometimes, and as a teacher, too. Like Uncle Bob, I can recite a litany of how badly I do what I do: continually fighting the demons that say, take it easy and just lecture, they'll get it; the constant allure of not grading an assignment and thus not giving the prompt feedback I know some students need; do what is expedient, not what's best, it will all work out in the end. But does it?

Uncle Bob closes with:

There is much I have yet to learn about writing software well. So, although after 56 years of life, and 43 years of programming, I have achieved a modicum of success, and even some glory, Chef Baglio is right. It is from that point that you really start to learn.

You start learning _here_, at this point, for whatever the current value of "this" is.

As I was thinking about my final exam last week, I started to wonder what terms and concepts would be most on my students' minds as they studied. So I created a wordle:

a wordle of my class notes, 810:154 Fall 2008

This makes for an odd summary of the semester, a flashlight onto my vocabulary, and an unusual piece of art.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 11, 2008 7:37 AM

Movin' Out, Twyla Tharp, and Inspiration

a scene from the Broadway musical Movin' Out

Last month my wife and I had the good fortune to see a Broadway touring company perform the Tony Award-winning Movin' Out, a musical created by Twyla Tharp from the music of Billy Joel. I've already mentioned that I am a big fan of Billy Joel, so the chance to listen to his songs for two hours was an easy sell. Some of you may recall that I also wrote an entry way back called Start with a Box that was inspired by a wonderful chapter from Twyla Tharp's The Creative Habit. So even if I knew nothing else about Tharp, Movin' Out would have piqued my interest.

This post isn't about the show, but my quick review is: Wow. The musicians were very good -- not imitating Joel, but performing his music in a way that felt authentic and alive. (Yes, I sang along, silently to myself. My wife said she saw my lips moving!) Tharp managed somehow to tell a compelling story by stitching together a set of unrelated songs written over the long course of Joel's career. I know all of these songs quite well, and occasionally found myself thinking, "But that's not what this song means...". Yet I didn't mind; I was hearing from within the story. And I loved the dance itself -- it was classical even when modern, not abstract like Merce Cunningham's Patterns in Space and Sound. My wife knows dance well, and she was impressed that the male dancers in this show were actually doing classical ballet. (In many performances, the men are more props than dancers, doing lifts and otherwise giving the female leads a foil for their moves.)

Now I see that Merlin Mann is gushing over Tharp and The Creative Habit. Whatever else I can say, Mann is a great source of links... He points us to a YouTube video of Tharp talking about "failing well", as well as the first chapter of her book available on line. Now you can read a bit to see if you want to bother with the whole book. I echo Mann's caveat: we both liked the first chapter, but we liked the rest of the book more.

Since my post three years ago on The Creative Habit, I've been meaning to return to some of the other cool ideas that Tharp writes about in this book. Seeing Movin' Out caused me to dig out my notes from that summer, and seeing Mann's posts has awakened my desire to write some of the posts I have in mind. The ideas I learned in this book relate well to how I write software, teach, and learn.

Here is a teaser that may connect with agile software developers and comfort students preparing for final exams:

The routine is as much a part of the creative process as the lightning bolt of inspiration, maybe more. And this routine is available to everyone.

Oddly, this quote brings to mind an analogy to sports. Basketball coaches often tell players not to rely on having a great shooting night in order to contribute to the team. Shooting is like inspiration; it comes and it goes, a gift of capricious gods. Defense, on the other hand, is always within the control of the player. It is grunt work, made up of effort, attention, and hustle. Every player can contribute on defense every night of the week.

For me, that's one of the key points in this message from Tharp: control what you can control. Build habits within which you work. Regular routines -- weekly, daily, even hourly -- are the scaffolding that keeps you focused on making something. What's better, everyone can create and follow a routine.

While I sit and wait for the lightning bolt of inspiration to strike, I am not producing code, inspired or otherwise. Works of inspiration happen while I am working. Working as a matter of routine increases the chances that I will be producing something when the gods smile on me with inspiration. And if they don't... I will still be producing something.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Teaching and Learning

December 10, 2008 6:27 AM

Echoes

This is the busy end to a busier-than-usual semester. As a result, my only opportunity and drive to blog come from echoes. Sometimes that's okay.

Running

... or not. After six weeks or so of 26-28 miles a week -- not much by some standards, but a slow and steady stream -- November hit me hard. Since 11/03 I've managed only 6-8 miles a week and not felt well the days after. My doctors are running out of possible diagnoses, which is good in one way but bad in another. In addition to the blog echo, I have an actual echo running through my head, from John Mellencamp's "Junior": Sometimes I feel better / But I never do feel well.

Building Tools

As we wrap up the semester's study of programming languages, my students took their final quiz today. I used the free time before the quiz to show them how we could add imperative features -- an assignment operator and sequences of statements -- to a simple functional interpreter that they have been building over the course of the last few homework assignments. After writing a simple cell data type (10 lines of code) to support mutable data, we added 25 or so lines of code to their interpreter and modified 5 or so more. That's all it took.
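In Python rather than Scheme, the addition might look something like this sketch. The cell helpers and the mini evaluator are illustrative inventions of mine, not our actual interpreter:

```python
# The "cell" data type: a one-slot mutable box.
def make_cell(value):
    return [value]

def cell_get(cell):
    return cell[0]

def cell_set(cell, value):
    cell[0] = value

def evaluate(exp, env):
    """A toy evaluator with just enough forms to show the imperative additions."""
    if isinstance(exp, int):
        return exp
    if isinstance(exp, str):                  # variable reference
        return cell_get(env[exp])
    op = exp[0]
    if op == 'set!':                          # (set! var value-exp) -- NEW
        _, var, value_exp = exp
        cell_set(env[var], evaluate(value_exp, env))
        return None
    if op == 'begin':                         # sequence of statements -- NEW
        result = None
        for e in exp[1:]:
            result = evaluate(e, env)
        return result
    if op == '+':
        return evaluate(exp[1], env) + evaluate(exp[2], env)

env = {'x': make_cell(1)}
program = ('begin', ('set!', 'x', ('+', 'x', 41)), 'x')
assert evaluate(program, env) == 42
```

The whole imperative layer really is just the two new clauses plus the cell type, which is why so few lines sufficed in class.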

I'm always amazed by what we can do in a few lines of code. Those few lines also managed to illustrate several of the ideas students encountered this semester: map, currying, and even a higher-order type predicate. Today's mini-demonstration has me psyched to add more features to the language, to make it more useful both as a language and as an example of how language works. If only we had more time...

After class, I was talking with a student about this and that related to class, internships, and programming. He commented that he now groks what The Pragmatic Programmer says about writing your own tools and writing programs to generate code for you. I smiled and thought, yep, that's what programmers do.

40th Anniversaries

Today was one of the 40th anniversaries I mentioned six weeks ago: Douglas Engelbart's demonstration of a mouse-controlled, real-time, interactive, networked computer. SFGate heralds this as the premiere of the PC, but this event has always seemed more about interaction than personal computing. Surely, the kind of interactivity that Engelbart showed off was a necessary precursor to the PC, but this demonstration was so much more -- it showed that people can interact with digital media and, yes, programs in a way that connects with human needs and wants. Engelbart's ideas will out-live what we know as the personal computer.

No matter, though. The demonstration inspired a generation. A friend of mine sent a note to all his friends today, asking us to "drink a toast to Douglas Engelbart" and reminiscing on what personal computing means to many of us:

Think how much this has changed our lives... The communication capabilities allow us to communicate extremely quickly, throughout the globe. The PC, and Internet, allow me to have friends in Australia, Belfast, Brazil, China, Holland, India, Japan, London, Switzerland, and many other places. ... Can you even picture a world without PC's? I've seen and used them in remote places like Nosy Be, Madagascar, and Siem Reap, Cambodia.

The world is a different place, and many of us -- my friend included -- contribute to that. That humbles and awes me.

Programming Matters

Don't need programmers, only smart people? Or people with good "people skills"? Today on the Rebooting Computing mailing list, Peter Norvig wrote:

At Google, a typical team might be 1 software architect, 5 software designers who are also responsible for development, testing, and production, and one product manager. All 7 would have CS BS degrees, and maybe 2 MS and 2 PhDs. Programming is a significant part of the job for everyone but the product manager, although s/he typically has programming experience in the past (school or job). Overall, programming is a very large part of the job for the majority of the engineering team.

Sure, Google is different. But at the end of the day, it's not that different. The financial services companies that hire many of my university's graduates are producing business value through information technology. Maximizing value through computing is even more important in this climate of economic uncertainty. Engineering and scientific firms hire our students, too, where they work with other CS grads and with scientists and engineers of all sorts. Programming matters there, and many of the programmers are scientists. The code that scientists produce is so important to these organizations that people such as Greg Wilson would like to see us focus more on helping scientists build better software than on high-performance computing.

Those who can turn ideas into code are the masters of this new world. Such mastery can begin with meager steps, such as adding a few lines of code to an interpreter to make imperative programming come alive. It continues when a programmer looks at the result and says, "I wonder what would happen if..."


Posted by Eugene Wallingford | Permalink | Categories: Computing, Running, Teaching and Learning

December 04, 2008 7:25 PM

The Development Arc of a Program and a Teaching Idea

While reading on-line this summer, I ran across a description of how keyless remote entry systems work. I can't find that page just now, but here is a nice description. I never knew how those things worked. Very cool indeed! As I was reading, the programmer in me soon was thinking, "I'd like to write that code". The teacher in me almost as quickly thought, "What a great example for my programming languages course...": programs with state and shared data. A homework problem was waiting to be written!

This week we got to the point in the course at which this is a perfect homework problem. I started thinking about it, but the week was busy... Finally, the night before I was to post the assignment, I sat down to write my solution. Within an hour I had less than a page of code that implements:

  • a random number generator, based on a C program from my grad school operating systems course, which itself was based on an article by Park and Miller in the Communications of the ACM, and

  • a pair of functions that implement the remote entry idea for a single behavior (say, unlock) and up through the receiver re-syncing with a skipped-ahead transmitter.
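The generator itself is tiny. Here is a sketch in Python of the Park-Miller "minimal standard" generator from that CACM article, wrapped in a closure in the spirit of the Scheme solution; the closure packaging is my own choice, while the constants and the self-check come from the article:

```python
M = 2147483647      # 2^31 - 1, the Mersenne prime modulus
A = 16807           # the multiplier Park and Miller recommend

def make_generator(seed):
    """Return a no-argument procedure that yields the next pseudorandom value."""
    state = [seed]
    def next_random():
        state[0] = (A * state[0]) % M
        return state[0]
    return next_random

# Park and Miller's published check: starting from seed 1,
# the 10,000th value should be 1043618065.
rand = make_generator(1)
for _ in range(10000):
    value = rand()
assert value == 1043618065
```

The closure holds the single piece of state; two such closures built from the same seed produce identical streams, which is just what the keyless entry problem needs.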

While thinking before programming -- even agile programmers are allowed to do that -- I realized that "re-programming" a desynchronized transmitter/receiver pair would be beyond the scope of my homework assignment and that multiple behaviors (say, unlock and lock, turning on an alarm, etc.) added complexity to the code but no interesting ideas. I also realized that there was no shared data in this problem, but two pieces of state held in common: identical random number generators and a key code.
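To make those two pieces of common state concrete, here is a hypothetical Python sketch of a transmitter/receiver pair. The names, the use of the Park-Miller constants, and the re-sync window size are all my own inventions for illustration, not the actual assignment:

```python
M, A = 2147483647, 16807            # Park-Miller constants

def make_codes(seed):
    """A rolling-code stream: both parties build one from the same seed."""
    state = [seed]
    def next_code():
        state[0] = (A * state[0]) % M
        return state[0]
    return next_code

def make_transmitter(key, seed):
    codes = make_codes(seed)
    def press():                    # each button press sends (key, next code)
        return (key, codes())
    return press

def make_receiver(key, seed, window=256):
    codes = make_codes(seed)
    def receive(message):
        their_key, their_code = message
        if their_key != key:
            return False
        # Look ahead up to `window` codes to re-sync with a transmitter
        # whose button was pressed out of the receiver's earshot.
        for _ in range(window):
            if codes() == their_code:
                return True
        return False
    return receive

press = make_transmitter(42, seed=17)
unlock = make_receiver(42, seed=17)
press(); press()                    # two presses away from the car
assert unlock(press())              # receiver skips ahead and re-syncs
assert not unlock((99, 12345))      # wrong key code is rejected
```

The two closures share nothing at run time; they merely agree on the key and the seed, which is what makes the trick work.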

As I programmed, it slowly dawned on me that this problem, in its full form, was surely beyond the scope of my homework assignment. I had a couple of other tasks for them to do, and the keyless entry problem would require both a whole assignment to itself and fairly detailed guidance for the students on how to implement a solution. The interaction between the transmitter and the receiver means that the solution code has to be developed and tested in parallel, and there turned out to be more layers of indirection in the solution than I had expected: a lambda wrapping a letrec wrapping a let wrapping a letrec wrapping a lambda wrapping one final lambda!

That's more complexity than I care for my students to encounter in the context of this assignment. With more time and more of the assignment "real estate", I could perhaps leave designing a solution to the students and then guide them during the process. But this is a programming languages course, and I'm trying to keep the focus of the course on features of languages, not on the application of functional programming or Scheme.

(I do love to have students see functional programming and Scheme applied to "real problems", because so often they view the programming language applications as, well, not real. This problem is especially cool for that purpose, because the ability to create a simple closure over two simple functions is so useful here.)

Sigh. The opportunist in me, though, thought, "No problem. Perhaps I can demo the solving of this problem in class."

And now I had a new problem: The only code I have is the final version of my program.

I can still build a demo, designing a session around how I grew this program, but I will have to re-grow it in my mind and in my lecture notes. Unfortunately, the second time through a solution is never quite like the first for me, and in a way that affects the quality of the result. The second implementation usually feels and looks a little too pat, a little too straightforward, too obvious. Most people don't learn much about how to build something by looking only at the final product, and when the final product looks inevitable at every turn, all hope is lost. The process of writing the code, and the decisions made along the way matter -- the insights and the false starts; the intermediate steps.

All I have is my final version. If only I had developed this program under version control! At least then I'd have a sequence of intermediate solutions that could help me recover the sequence of decisions and insights and false starts.

I know some people use version control for more than developing software; Martin Fowler has even written about putting his whole file system under Subversion. I may not want to go that far, but what about the particular case of developing code for a new demo, lecture, or presentation?

This seems like one of those ideas someone has already had and profited from. I'm just now getting it. Do you ever do this for purposes of teaching or exposition?

Oh, well. I had a lot of fun implementing this code, and that is always a welcome joy. I may yet show it off in class, if only as a finished product. We programmers are often like artists and other creators, and even like parents: we are so proud of our children that we simply must show everyone and brag on them! But turning this solution into a powerful class session about using state and mutually-referential functions must wait until I have more time. Maybe next time I'll try building my new idea code under version control and be able to move more quickly.

Now that keyless remote entry is off the table as a homework problem, I am back to the drawing board for a couple of new problems to round out this assignment. Well, the random number generator is a nice stateful problem in its own right. And if I can just simplify the idea of the transmitter/receiver pair, maybe I can create a problem in the spirit of keyless entry systems -- only with security circa 1970 -- that salvages some of my initial excitement for this problem. Hmm....

~~~~

Postscript.    I wrote the first draft of this entry last night, right after whipping up my solution and despairing. Current students of mine know that I did salvage the idea, because I have already set the homework problems in question before them! The simplification was easier to make than I had feared. The transmitter and receiver use a fixed code and so can never fall out of sync. That made all the difference in the complexity of the solution. I left the other parts for extra-credit...


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

November 10, 2008 7:31 PM

Workshop 6: The Next Generation of Scientists in the Workforce

[A transcript of the SECANT 2008 workshop: Table of Contents]

The last session of an eventful workshop consisted of two speakers. One was a last minute sub for a science speaker who had to pull out. The sub, from Microsoft Research, didn't add much science content, but did say something I wish undergrads would pick up on. What do all companies look for these days? Short ramp-up time, self-starters. These boil down to curiosity and initiative.

The second speaker gave the sort of industry report I so enjoyed last year. David Spellmeyer, a Purdue computer science and chemistry grad, is CTO and CIO at Nodality. He titled his talk, "Computational Thinking as a Competitive Advantage in Industry". I love that title, because I love the ways computing confers a competitive advantage over companies that don't get it yet. The downside of Spellmeyer talking about his own company's competitive advantage: he can't post his slides.

Spellmeyer did tell us a bit about his company's science at various points in his story. Nodality works on patient-specific classification of disease and response to therapies. At least part of that involves evaluating phosphoprotein-signaling networks. (I hope that doesn't give too much away.)

He looks for computational thinking skills in all of the scientists Nodality hires. His CT wish list included items familiar and surprising:

  • familiarity with the complexity of computing
  • exposure to programming languages
  • analytical methods for experimental studies
  • familiarity with the technology and inner workings of the computer, especially databases

Edvard Munch's Scream

These skills give competitive advantage to his company -- and also to the individual! The company is able to do more better and faster. The individual has better judgment across a wider range of problems. These advantages intersect at a point where computational thinking demystifies the computer, computer systems, and programming. Understanding even a little about computers and programs helps to dispel the myth of the perfect computer and the perfect computer system. Those myths create frustrations that grow into more. (Spellmeyer used another image to drive this point home: Hitchcock's North by Northwest.)

How does computational thinking help the company do more better and faster? By...

  • ... letting scientists spend more time doing what they love.
  • ... eliminating low-value-add transactional activities in the business process.
  • ... boosting the speed and scalability of their systems.

Notice that these advantages range from the scientific to business process to the technical. It's not only about techies sitting in front of monitors.

On the scientific side of the equation, Nodality has a data problem. A robust assay produces a flood of data:

10^6 cells/patient × 50 patients/experiment × 20 challenges × 20 markers
→ ~10^10 data points per experiment

Thereafter followed a lot of detail that I couldn't follow in real time, which is probably just as well. There is a reason that Spellmeyer can't post his slides...

How do they eliminate low-value-added transactional activities?

  • Talk to customers.
  • Find patterns of practice.
  • Propose computational tools to improve practice.
  • Use an agile approach to gather requirements, design a system, field it, get feedback, and iterate in short cycles.

Computational thinking enables scientists and techies to think of their experiments, and how to set them up, in different ways. For example, they might conceive of a way to set up a cytometer differently. They also think differently about experiment analysis and inventory management.

As Spellmeyer wrapped up, he included a few snippets to motivate his ideas and the scale of the problems that he and his company face. He quoted Margaret Wheatley as saying that all science is a metaphor, a description of a reality we can never fully know. As a pragmatist, I believe something close to this almost from the outset. He also said that in business, learning occurs naturally through normal interactions in work practices, not in classes. "Context, community, and content" are the triumvirate that drives all they do. For this reason, his company puts a lot of effort into its community software tools.

The problem ultimately comes down to an issue at the intersection of combinatorics, pragmatics, and even ethics. We can make billions of unique molecules. Which ones should we make? We need to consider molecules similar enough to ones we understand but dissimilar enough to offer hope of a new result. This leads to a question of similarity and dissimilarity, one of those AI-complete tasks. There is room for a lot of great algorithm exploration here.

Finally, Spellmeyer weighed in on a hot topic from the previous session: Excel is a basic tool in his company. The business guys have developed an extremely complex business model, and all of their work is in Excel. But it's not just a work horse on the business side; scientists use Excel to transform data. He is happy to find scientists and techies alike who know how to use Excel at full strength.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 07, 2008 8:59 AM

Workshop 5: Curriculum Development

[A transcript of the SECANT 2008 workshop: Table of Contents]

This session was labeled "birds-of-a-feather", likely a track for short talks that didn't fit very well elsewhere. The most common feather was curriculum, efforts to develop it and determine its effect.

First up was Ruth Chabay, on looking in detail at students' computational thinking skills. She is involved in a pilot study that aims to answer the question, "Do students think differently about physics after programming?" This is the sort of outcomes assessment that people who develop curriculum rarely do -- even CS faculty, despite the fact that we would never think of writing programs and not checking to see whether they ran correctly. This study is mostly question and method at this point, with only hints at answers.

The research methodology is a talk-aloud protocol with videotaping of the participants' behavior. Chabay showed an illustrative video of a student reasoning through a very simple program, talking about the problem. I'd love to be able to observe students from some of my courses in this way. It would be hard to gather useful quantitative data, but the qualitative results would surely give some insight into what some students are thinking when they are going their own way.

Next up was Rubin Landau, who developed a Computational Physics program at Oregon State. He started with a survey from the American Physical Society which reported what physics grads do five years after they leave school. A large percentage are involved in developing software, but alumni said that the number one skill they use is "scientific problem solving". Even for those working in scientific positions, the principles of physics are far from the most important skill. Landau stressed that this does not mean that physics isn't important; it's just that students don't graduate to repeat what they learned as an undergrad. In Landau's opinion, much of physics education is driven by the needs of researchers and for graduate students. Undergraduate curriculum is often designed as a compromise between those forces and the demands of the university and its undergraduates.

Landau described the Computational Physics curriculum they created at Oregon State with the needs of undergrad education as the driving force. I don't know enough physics to follow his description in real time, but I noticed a few features. Students should learn two "compiled languages"; it doesn't really matter which, though he still likes it if one is Fortran. The intro courses introduce many numerical analysis concepts involving computation (underflow, rounding). This course is now so well settled that they offer it on-line with candid video mini-lectures. Upper-division courses include content that students may well work with in industry but which have disappeared from other schools' curricula, such as fluid dynamics.
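
Two of those numerical concepts -- rounding error and loss of significance -- can be demonstrated in a few lines of any language with IEEE floating point. A quick Python illustration (my example, not from Landau's course materials):

```python
# Two rounding facts that surprise students in a first numerical course.

# Binary floats cannot represent 0.1 exactly.
a = 0.1 + 0.2
print(a)             # 0.30000000000000004, not 0.3

# Catastrophic cancellation: adding 1 to a large number, then
# subtracting the large number back, loses the 1 entirely.
x = 1.0e16
print((x + 1) - x)   # 0.0, not 1.0: the 1 was rounded away
```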

Landau is fun to listen to, because he has an arsenal of one-liners at the ready. He reported one of his favorite computational physics student comments:

Now I know what's "dynamic" in thermodynamics!

Bruce Sherwood reported a physics student comment of his own: "I don't like computers." Sherwood responded, "That's okay. You're a physicist. I don't like them either." But physics students and professors need to realize that saying they don't like computers is like saying, "I don't like voltmeters." If you can't work with a voltmeter or a computer, you are in the wrong business. That's just the way the world is.

My favorite line of Landau's is one that applies as well to computer science as to physics:

We need a curriculum for doers, not monks.

The next two speakers were computer scientists. James Early described a project in which students are developing learner-centered content for their introductory computer science course. This project was motivated by last year's SECANT workshop. Most of the students in their intro course are not CS majors. The goal of the project is to excite these students about computation, so they'll take it back to their majors and put it to good use. I immediately thought, "I'd like to have CS majors get excited about computation and take it back to their major, too!" Too few CS students take full advantage of their programming skills to improve their own academic lives...

Resource link to explore: the Solomon-Felder Index of Learning Styles, which has gained some market share in engineering world. Besides, it's on-line and free!

Mark Urban-Lurain closed the session by describing the CPATH project at Michigan State, my old graduate school stomping grounds. This project is aimed at creating a process for redesigning engineering curriculum. But much of the interesting discussion revolved around the fact that most engineering firms request that students have computational skills in... Excel! Several of the CS faculty in the room nodded their heads, because they have pointed this out to their colleagues and run into a stone wall. CS departments balk at such "tools". Now, Excel is not my tool of choice, but macros really are a form of programming. I've been following with interest some work in the Haskell community on programming in spreadsheets (see some of the papers here). We in CS have more powerful tools to use and teach, but we also need to meet users of computation where they live. And in many domains, that is the spreadsheet.
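
To see why formulas and macros count as programming: a spreadsheet cell is a pure expression over other cells, and recalculation is just evaluation in dependency order -- which is exactly why the functional programming folks find spreadsheets so natural. A toy sketch (hypothetical cell names; nothing here is Excel's actual machinery):

```python
# A toy spreadsheet: each cell is either a value or a function of the
# sheet, and a lookup recalculates on demand -- evaluation in
# dependency order, just like a functional program.
cells = {
    "A1": 40,
    "A2": 2,
    "B1": lambda s: s["A1"] + s["A2"],   # =A1+A2
    "B2": lambda s: s["B1"] * 10,        # =B1*10
}

class Sheet:
    def __init__(self, cells):
        self.cells = cells

    def __getitem__(self, name):
        cell = self.cells[name]
        return cell(self) if callable(cell) else cell

s = Sheet(cells)
print(s["B2"])   # evaluates B1, then B2: prints 420
```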

I ended the workshop by chatting with Urban-Lurain, with whom I came into contact as a teaching assistant. His colleague on this CPATH project is my doctoral advisor. It is a small world.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 05, 2008 8:04 PM

What Motivates Kids These Days

Who am I being when I am not seeing
a connection in the eyes of others?
-- Benjamin Zander

I was all set to write an entry about "students these days", but I see that Mark Guzdial beat me to the punch. Two earlier entries chronicled my experience in class this semester. This is a course I have taught every third semester in recent years, and before that I taught it every year or even every semester for a few years stretching back to the mid-1990s. This has given me a longitudinal view of our student population as it has performed on common content, with common materials and a common approach in the classroom. Certainly the course has evolved a bit in that time, as I try to keep the course forward-looking as well as grounded in basic content. But with all those changes, I don't think the course's fundamental character has changed. If anything, I'd be inclined to say that I do a better job now than way back when, because I've learned how to do a better job. (That may be wishful thinking, of course.)

Yet this semester has felt more challenging than I remember. If I look back at this course over the years, though, I can see that there have been signs of change. The last time we offered this course, I noted that students seemed less obviously engaged in the material than in recent times. That group turned out to be well-prepared and thoughtful, but with a quiet personality and a need to see how the course fit into their goals before they made an observable commitment. Maybe in the three years since the last offering before that we have begun to see a different kind of student in CS.

Guzdial describes one of the changes that may be responsible: the broadening of the population that attends college and, indeed, is expected to. With the widening of the pool, we are likely to see more students with varying commitments to the academic enterprise. We might also see students who are less well-prepared. A common hypothesis among faculty I know is that the CS student body we have built up since the dot.com bust has been different from the group we encountered before.

Maybe that's just another example of old fogies wishing for the good old days, but I don't think so. I think we have seen a much wider range of ability, preparation, and motivation in the newer student body than we had back in the "good old days" of the 1990s. With a larger, more diverse set of students attending college now than then, this is a natural outcome.

I don't think today's students learn differently, or have a diminished capacity to learn, because of exposure to the internet, iPods, and Wiis. And these students are, for the most part, as well-prepared as students before them, at least when we account for the increased diversity of the pool. I do think that students have different motivations and different levels of motivation than previous classes.

One of our undergrads tells a story consistent with my observation. He runs free tutoring sessions as a public service to students in our intro programming courses. He wrote recently that he doesn't get as much traffic from CS majors (his primary audience) as he had hoped. On a particular night:

Interestingly enough, all three students were non-CS majors. I'm not entirely sure what that means overall. They are taking a class they don't need -AND- they are seeking help outside of class hours. That alone is unusual. We also discussed all the concepts on paper and each student hand wrote their own notes, which was surprising to me as well.

This tutor is one of our most talented and self-motivated students, so I'm not surprised that he would notice an apparent lack of motivation among his peers. Asking for help is an odd one. In my class, I have several strong students who are scoring lower than they do in other courses, yet only a very few have asked any questions. A couple have, but not until after a quiz that deflates their spirits. I've asked them why they haven't asked questions about the puzzling material earlier. The answers are a mix of optimism ("I just assume that I'll be able to figure this out"), pride ("I don't want to give up and ask for help"), and poor time management ("well, I didn't start the assignment until...").

I was a pretty good student, but I have vivid memories of getting up early one day my sophomore year, picking up a box of punch cards, and heading over to see my Assembler II prof promptly at the start of his 8 AM office hour because I was struggling with a now-forgotten JCL issue. (8 AM?! Many of my students say that 10 AM office hours are too early!) I'm optimistic and proud, but I was also motivated to succeed -- grade-wise, if nothing else. But I suspect that I probably wanted to learn more than I wanted to save face.

As Guzdial says, it may be that today's students are motivated by different things.

... the case for why something is worth learning is increasingly borne by the teacher, ... and the sense of value for what's to be learned is increasingly based in vocational terms.

This has always been an issue for me when teaching functional programming and Scheme, where the language, style, and ideas are foreign to what students tend to experience in the intro course language du jour and current professional practice. But I would think it'd be easier to motivate many functional programming concepts in a day when Python, Ruby, and even serious languages like C# and Java are bringing functional ideas to the masses. (Maybe that says something about my skills as a motivator...)

In any case, rather than leave the burden for what's different now at the feet of our students, we CS instructors face the challenge of figuring out how to teach differently. Add this to changes in the discipline and the need for more non-CS students to incorporate computing into their professions and lives, and the challenge becomes even more "interesting".


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

November 03, 2008 7:19 PM

Workshop 4: Computer Scientists on CS Education Issues

[A transcript of the SECANT 2008 workshop: Table of Contents]

The first day of the workshop ended with two panels of two computer scientists each. The first described two current projects on introductory CS courses, and the second presented two CPATH projects related to the goals of SECANT. I either knew about these projects already or was familiar with their lessons from my department's experiences, so I didn't take quite as detailed notes. Then again, maybe I was just tiring after a long day of good stuff.

On intro CS, Deepak Kumar talked about Learning Computing with Robots, which has developed a course that serves primarily non-majors, with a goal of broadening interest in computing, even as a general education course. This course teaches computing, not robotics. Kumar mentioned that the cost of materials is no longer the issue it once was. They have built the course around a robot kit that costs in the neighborhood of $110 -- about the same price as a textbook these days!

Next, Tom Cortina talked about Teaching Key Principles of Computer Science Without Programming. In many ways, Cortina was swimming against the tide of this workshop, as he argued that non-majors could (should?) learn CS minus the programming. There certainly is a lot of cool stuff that students can learn using canned tools, talking about history, and doing some light math and logic. Cortina's course in particular covers a lot of neat material about algorithms. But still I think students miss out on something useful -- even central to computing -- when they bypass programming altogether. However, if the choice is between this course and a majors-style course that leaves non-majors confused, frustrated, or hating CS, well, then, I'll take this!

The second "panel" presented two related CPATH projects. Valerie Barr of Union College described efforts creating a course in computational science across the curriculum at Union and Lafayette College. The key experience she reported was how to build an initial audience for the course, so that later word of mouth can spread. Barr's experience sounds familiar: blanket e-mail to faculty tends not to work well, but one-on-one conversations with faculty do -- especially ongoing contact and continued conversation. This sort of human contact is time-intensive, which makes it hard to scale as you move to schools much larger than Union or Lafayette. Barr said that they had had good luck dealing with people in their Career Center, who could tell students how useful computational skills are across all the majors on campus. At my school, we have had similar good results working with people in Academic Advising and Career Services. They seem to get the value of computational skills as well as or better than faculty across campus, and they have different channels than we do for reaching students over the long term.

Finally, Lenny Pitt described the iCUBED project at the University of Illinois. The one content fact I remember from Pitt's talk is that they are working to develop applied CS programs and "CS + <X>" programs within other departments. The most memorable part of his talk for me, though, was how he had reconfigured the project's acronym (which they inherited from enabling policy or legislation) based on the workshop's theme and 2008 mantra: "Infiltration: Computing Used By Every Discipline." Creative!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 31, 2008 10:52 AM

SECANT This and That

[A transcript of the SECANT 2008 workshop: Table of Contents]

As always, at this workshop I have heard lots of one-liners and one-off comments that will stick with me after the event ends. This entry is a place for me to think about them by writing, and to share them in case they click with you, too.

The buzzword of this year's workshop: infiltration. Frontal curricular assaults often fail, so people here are looking for ways to sneak new ideas into courses and programs. An incremental approach creates problems of its own, but agile software proponents understand its value.

College profs like to roll their own, but high-school teachers are great adapters. (And adopters.)

Chris Hoffman, while describing his background: "When you reach my age, the question becomes, 'What haven't you done?' Or maybe, 'What have you done well?'"

Lenny Pitt: "With Python, we couldn't get very far. Well, we could get as far as we wanted, but students couldn't get very far." Beautiful. Imagine how far students will get with Java or Ada or C++.

Rubin Landau: "multidisciplinary != interdisciplinary". Yes! Ideas that transform a space do more than bring several disciplines into the same room. The discipline is new.

It's important to keep in mind the relationship between modeling and computing. We can model without computing. But analytical models aren't feasible for all problems, and increasingly the problems we are interested in fall into this set.

Finally let me re-run an old rant by linking to the original episode. People, when you are second or third or sixth, you look really foolish.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

October 31, 2008 10:26 AM

Workshop 3: Computational Thinking in Physics

[A transcript of the SECANT 2008 workshop: Table of Contents]

As much as computation is now changing biology, it has already changed physics. Last year's workshop had a full complement of physicists and astronomers. In their minds, it is already clear that physicists must program -- even students learning intro physics. The question is, what problems do they face in bringing more computation to physics education? This panel session shared some physicists' experience in the trenches. Bruce Sherwood, the panel chair, set the stage: We used to be able to describe physics as theory, experiment, and the interplay between the two. This is no longer true, and it hasn't been for a while. Physics is now theory, experiment, simulation, and the interplay among the three! Yet this truth is not reflected in the undergraduate physics curriculum -- even at so-called "respectable schools".

Rubin Landau described a systemic approach, a Computational Physics major he designed and implemented at Oregon State. He was motivated by what he saw as a turning inward of physics, efforts to cover all of the history of physics in the undergrad curriculum, with a focus on mathematics from the 19th century, and not looking outward to how physics is done today. (This CS educator felt immediate empathy for Landau's plight.) He noted his own embarrassment: computational physicists at major physics conferences who refuse to discuss their algorithms or the verification of their programs. This is simply not part of the culture of physics.

Students learn by doing, so projects are key to this Computational Physics curriculum. Students use a "compiled language", which is Landau's way to distinguish programming in a CS-style language from Mathematica and Maple. For him, the key is to separate the program from the engine; students need to see the program as an idea. Two languages are better, as that gives students a chance to generalize the issues at play in using computation for modeling.

The OSU experience is that the political issues in changing the curriculum are much tougher to solve than the academic issues: the need for budget, the resistance of senior faculty, the reluctance of junior faculty to risk tenure, and so on.

Landau closed by saying that, for physics-minded students, using computation in physics and then taking a CS course seems to work best. He likened this to the use of Feynman diagrams in grad school: students learn to calculate with them, and then learn the field theory behind them the next year. His undergrads have several "A-ha!" moments throughout CS1. I suspect that this approach would work for a lot of CS students, too, if we can get them to use computation. Media computation is one avenue I've seen work with some.

Next up was Robert Swendsen, from Carnegie-Mellon. In the old days, physicists wrote programs because they did not know how to solve a problem analytically. Now, they compute to solve problems that no one knows how to solve analytically. (Mental note: It also lets them ask new questions.) The common problem many of us face: we tend to teach the course we took -- something of a recursion problem. (Mental note: Where is the base case? Aristotle, I suppose.)

Swendsen identified a few other challenges. Students are used to looking at equations, even if they don't get as much from them as we do, but they have no experience looking at and reasoning about data. They struggle even with low-level issues such as accuracy in terms of the number of significant digits. Further, many students do not think that computational physics is "real" physics. To them, physics == equations.

This is a cultural expectation across the sciences, a product of the last few centuries of practice. Nor is it limited to students; people out in the world think of science as equations. Perhaps they pick this notion up in their high-school courses, or even in their college courses. I think that faculty in and out of the sciences share this misperception as well. The one exception is probably biology, which may account for part of its popularity as a major -- no math! no equations! I couldn't help but think of Bernard Chazelle's efforts to popularize the notion that the algorithm is the idiom of modern science.

Listening to Swendsen, I also had an overriding sense of deja vu, back to when CS faculty across the country were trying to introduce OO thinking into the first-year CS curriculum. Curriculum change must share some essential commonalities due to human nature.

Physicist Mark Haugan focused on a particular problem he sees: a lack of continuity across courses in the physics curriculum with respect to computation. Students may use computation in one course and then see no follow-through in their next courses. In his mind, students need to learn that computation is a medium for expressing ideas -- a theme regular readers of this blog will recognize. Mathematical equations are one medium, and programs are another. I think the key is that we need to discuss and work with problems where computation matters -- think Astrachan's Law -- problems for which the lack of computation would limit our ability to understand and solve the problem. This, too, echoes the OO experience in computer science education. We still face the issue that other courses and other professors will do things in a more traditional way. This is another theme common to both SECANT workshops: we need to help students feel so empowered by computation that they use it unbidden in their future courses.

The Q-n-A session contained a wonderful thread on the idea of physics as a liberal art. One person reported a comment made by a student who had taken a computational physics course and then read a newspaper article on climate modeling:

Wow. Now I know what that means.

I can think of no higher "student learning outcome" we in computer science can have for our general education and introductory programming courses: Wow. Now I know what that means.

There are many educated people who don't know what "computer model" means. They don't understand what is reported in the news. There are many educated people reporting the news who don't understand the news they are reporting.

That's not right.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 30, 2008 8:39 PM

Workshop 2: Computational Thinking in the Health Sciences

[A transcript of the SECANT 2008 workshop: Table of Contents]

The next session of the workshop was a panel of university faculty working in the health sciences, talking about how they use computation in their disciplines and what the key issues are. Panel chair, Raj Acharya, from Penn State's Computer Science and Engineering department, opened with the bon mot "all science is computer science", a reference to a 2001 New York Times piece that I have been using for the last few years when speaking to prospective students, their parents, and other faculty. By itself, this statement sounds flip, but it is true in many ways. The telescope astronomers use today is as much a computational instrument as a mechanical one. Many of the most interesting advances in biology these days are really bioinformatics.

The dawn of big data is changing what we do in CS, but it's having an even bigger effect in some other sciences by creating a new way to do science. Modeling is a nascent research method based in computation: propose a model, test it against the data, and iterate. Data mining is an essential step in this new process: all of the data goes into a box, and the box has to make sense of the data. This swaps two steps in the traditional scientific method... Instead of forming a hypothesis and then testing it by collecting data, a scientist can mine a large collection of data to find candidate hypotheses, and then confirm with more traditional bench science and by checking models against other and larger data sets.
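
The inverted loop can be sketched concretely: rather than testing one preconceived hypothesis, scan the data for strong associations and treat those as candidate hypotheses for follow-up bench work. A minimal illustration, with made-up marker measurements and an arbitrary correlation threshold:

```python
import itertools
import statistics

# Toy data set: measurements for several (hypothetical) markers.
data = {
    "marker_a": [1.0, 2.0, 3.0, 4.0, 5.0],
    "marker_b": [2.1, 3.9, 6.2, 8.0, 9.9],   # tracks marker_a closely
    "marker_c": [5.0, 1.0, 4.0, 2.0, 3.0],   # noise
}

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    cov = statistics.mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sx * sy)

# Mine the data for candidate hypotheses: marker pairs whose
# correlation is strong enough to be worth a confirming experiment.
candidates = [
    (a, b)
    for a, b in itertools.combinations(data, 2)
    if abs(correlation(data[a], data[b])) > 0.9
]
print(candidates)   # each surviving pair goes back to the bench
```

The point of the Q-n-A concern below applies directly: the list of candidates is the *start* of the science, not its conclusion.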

Tony Hazbun, who works in the School of Pharmacy at Purdue, talked about work in systems biology. He identified four key ideas that biologists need to learn from computer science, which echoed a talk from last year's workshop:

  • data visualization
  • database management (relational, not flat)
  • data classification (cluster analysis)
  • modeling

Hazbun made one provocative claim that I think hits the heart of why this sort of science is important. We mine data sets to see patterns that we probably would not have seen otherwise. This approach is more objective than traditional science, in which the hypotheses we test are the ones we create out of our own experience. That is a much more personal approach -- and thus more subjective. Data mining helps us to step outside our own experience.

Next up was Daisuke Kihara, a Purdue bioinformatician who was educated in Japan. He talked about the difficulties he has had building a research group of graduate students. The main problem is that biology students have few or no skills in mathematics and programming, and CS students know little or no biology. In the US, he said, education is often too discipline-specific, with not enough breadth, which limits the kind of cross-fertilization needed by researchers in bioinformatics. My university created an undergraduate major in Bioinformatics three years ago in an effort to bridge this gap, in part because biotechnology is an industry targeted for economic development in our state.

(My mind wandered a bit as I thought about Kihara's claim about US education. If he is right, then perhaps the US grew strong technically and academically during a time when the major advances came within specific disciplines. Now that the most important advances are coming in multidisciplinary areas, we may well need to change our approach, or lose our lead. I've been concerned about this for a year or so, because I have seen the problem of specializing too soon creeping down into our high schools. But then I wondered, is Kihara's claim true? Computer science has a history grounded in applications that motivate our advances; I think it's a relatively recent phenomenon that we spend most of our time looking inward.)

In addition to technical skills and domain knowledge, scientists of the future need the elusive "problem-solving skills" we all talk about and hope to develop in our courses. Haixu Tang, from the Informatics program at Indiana, contrasted the mentality of what he called information technology and scientific computing:

  • technique-driven versus problem-driven
  • general models versus specific, even novel, models
  • robust, scalable, and modular software versus accurate, efficient programs

These distinctions reflect a cultural divide that makes integrating CS into science disciplines tough. In Tang's experience, domain knowledge is not the primary hurdle, but he has found it easier to teach computer scientists biology than to teach biologists computer science.

Tang also described the shift in scientific method that computing enables. In traditional biology, scientists work from hypothesis to data to knowledge, with a cycle from data back to hypothesis. In genome science, science can proceed from data to hypothesis to knowledge, with a cycle from hypothesis back to data. The shift is from hypothesis-driven science to data-driven science. Simulation has joined theory and statistics in the methodological toolbox.

In the Q-n-A session that followed the panel, someone expressed concern with data-driven research. Too many people don't go back to do the experiments needed to confirm hypotheses found via data mining or to verify their data by independent means. The result is bad science. Olga Vitek, a statistical bioinformatician, replied that the key is developing skill in experimental design. Some researchers in this new world are learning the hard way.

The last speaker was Peter Waddell, a comparative biologist who is working to reconstruct the tree of life based on genome sequences. One example he offered was that the genome record shows primates' closest relatives to be... tree shrews and flying lemurs! This process is going slowly but gaining speed. He told a great story about shotgun sequencing, BLAST, and the challenges in aligning and matching sequences. I couldn't follow it, because I am a computer scientist who needs to learn more biology.

When Waddell began to talk about some of the computing challenges he and his colleagues face, I could follow the details much better. They are working with a sparse matrix that will have between 10^2 and 10^3 rows and between 10^2 and 10^9 (!!) columns. The row and column sums will differ, but he needs to generate random matrices having the same row and column sums as the original matrix. In his estimation, students almost need to have a triple major in CS, math, and stats, with lots of biology and maybe a little chemistry thrown in, in order to contribute to this kind of research. The next best thing is cross-fertilization. His favorite places to work have been where all of the faculty lunch together, where they are able to share ideas and learn to speak each other's languages.
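
One standard way to do this for a 0/1 matrix is checkerboard swapping: repeatedly find a 2×2 submatrix of the form [[1,0],[0,1]] or [[0,1],[1,0]] and flip it, which changes the matrix but no row or column sum. A small sketch (assuming a binary matrix; Waddell's actual pipeline is surely more sophisticated, and far larger):

```python
import random

def shuffle_fixed_margins(matrix, swaps=10_000, seed=0):
    """Randomize a 0/1 matrix while preserving every row and column
    sum, via checkerboard (2x2 submatrix) swaps."""
    m = [row[:] for row in matrix]          # don't mutate the input
    rng = random.Random(seed)
    rows, cols = len(m), len(m[0])
    for _ in range(swaps):
        r1, r2 = rng.sample(range(rows), 2)
        c1, c2 = rng.sample(range(cols), 2)
        # Swappable checkerboard: equal diagonals, opposite values.
        if (m[r1][c1] == m[r2][c2] and m[r1][c2] == m[r2][c1]
                and m[r1][c1] != m[r1][c2]):
            m[r1][c1], m[r1][c2] = m[r1][c2], m[r1][c1]
            m[r2][c1], m[r2][c2] = m[r2][c2], m[r2][c1]
    return m

original = [[1, 0, 1, 0],
            [0, 1, 0, 1],
            [1, 1, 0, 0]]
shuffled = shuffle_fixed_margins(original)
# Every row sum and column sum matches the original, by construction:
# each swap moves 1s within a row and trades equal values in each column.
```

The catch at Waddell's scale is mixing time: how many swaps are needed before the sample is effectively uniform over all matrices with those margins is itself a hard question.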

This remark led to another question, because it "raised the hobgoblin of multidisciplinary research": an undergraduate needs seven years of study in order to prepare for a research career -- and that is only for the best students. Average undergrads will need more, and even that might not be enough. What can we do? One idea: redesign the whole curriculum to be interdisciplinary, with problems, mathematics, computational thinking, and research methods taught and reinforced everywhere. Graduating students will not be as well-versed in any one area, but perhaps they will be better at solving problems across the boundaries of any single discipline.

This isn't just a problem for multidisciplinary science preparation. We face the same problem in computer science itself, where the software development side of our discipline requires a variety of skills that are often best learned in context. The integrated curriculum suggestion made here makes me think of the integrated apprenticeship-style curriculum that ChiliPLoP produced this year.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 30, 2008 1:31 PM

Workshop 1: A Course in Computational Thinking

[A transcript of the SECANT 2008 workshop: Table of Contents]

To open the workshop, the SECANT faculty at Purdue described an experimental course they taught last spring, Introduction to Computational Thinking. It was designed by a multi-disciplinary team from physics, chemistry, biology, and computer science for students from across the sciences.

The first thing that jumped out to me from this talk was that the faculty first designed the projects that they wanted students to do, and then figured out what students would need to know in order to do the projects. This is not a new idea (few ideas are), but while many people talk about doing this, I don't see as many actually doing it. It's always interesting to see how the idea works in practice. Owen Astrachan would be proud.

The second was the focus on visualization of results as essential to science and as a powerful attractor for students. It is not yet lunch time on Day 1, but I have heard enough already to say that visualization will be a key theme of the workshop. That's not too surprising, because visualization was also a recurring theme in last year's workshop. Again, though, I am glad to be reminded of just how important this issue is outside the walls of the Computer Science building. It should affect how we prepare students for careers applying CS in the world.

The four projects in the Purdue course's first offering were:

  • manipulating digital audio -- Data representation is a jump for many students.
  • percolation in grids -- Recursion is very hard, even for very bright students. Immediate feedback, including visualization, is helpful.
  • Monte Carlo simulation of a physical system
  • protein-protein interaction -- Graph abstractions are also challenging for many students.

This looks like a broad set of problems, the sort of interdisciplinary science that the core natural sciences share and which we computer scientists often miss out on. For CS students to take this course, they will need to know a little about the several sciences. That would be good for them, too.

Teaching CS principles to non-CS students required the CS faculty to take an approach unlike what they are used to. They took advantage of Python's strengths as a high-level, dynamic scripting language to use powerful primitives, plentiful libraries, and existing tools for visualizing results. (They also had to deal with its weaknesses, not the least of which for them was the delayed feedback about program correctness that students encounter in a dynamically-typed language.) They delayed teaching the sort of software engineering principles that we CS guys love to teach early. Instead, they tried to introduce abstractions only on a need-to-know basis.

Each project raised particular issues that allowed the students to engage with principles of computing. Audio manipulation exposed the idea of binary representation, and percolation introduced recursion, which exposed the notion of the call stack. Other times, the mechanics of writing and running programs exposed underlying computing issues. For example, when a program ran slower than students expected on the basis of previous programs, they got to learn about the difference in performance between primitive operations and user-defined functions.
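The percolation project gives a feel for why recursion trips up even bright students. A minimal sketch in Python (my own illustration, not the Purdue course's actual code) of checking whether open cells connect the top row of a grid to the bottom:

```python
def percolates(grid):
    """Does an open path of 1s connect the top row to the bottom row?
    A recursive flood-fill marks every open cell reachable from the top."""
    rows, cols = len(grid), len(grid[0])
    seen = set()

    def fill(r, c):
        if not (0 <= r < rows and 0 <= c < cols):
            return                          # off the grid
        if grid[r][c] == 0 or (r, c) in seen:
            return                          # blocked, or already visited
        seen.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            fill(r + dr, c + dc)            # recurse into the neighbors

    for c in range(cols):
        fill(0, c)                          # pour in from the whole top row
    return any((rows - 1, c) in seen for c in range(cols))
```

Every recursive call pushes a frame on the call stack, so a large open grid makes the stack visible in a very concrete way -- exactly the kind of underlying computing issue the project exposes.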

The panelists reported lessons from their first experience that will inform their offering next spring:

  • The problem-driven format was a big win.
  • Having students write meaningful programs early was a big win.
  • Having students see the results of their programs early via visualization was a big win.
  • Python worked well in these regards.
  • The science students' interest in computing is bimodal. Computing either has a strong appeal to them almost immediately, or the student exhibits strong resistance to computing as a tool.
  • On the political front, interaction with science faculty is essential to succeeding. They have to buy in to this sort of course, as do administrators who direct resources.

One of the open questions they are considering is, do they need or want to offer different sections of this course for different majors? This is a question many of us are facing. Having a more homogeneous student base would allow the use of different kinds of problem and more disciplinary depth. But narrowing the problem set would lose the insight available across disciplines. At a school like mine, we also risk spreading the student base so thin that we are unable to offer the courses at all.

Somewhere in this talk, speaker Susanne Hambrusch, the workshop organizer and leader, said something that made me think about what in my mind is the key to bringing computation to the other disciplines most naturally: We need to leave students thinking, "This helps me answer questions in my discipline -- better, or faster, or ...". This echoed something that Ruth Chabay said at the end of last year's workshop. Students who see the value of computation and can use computation effectively will use computation to solve their own problems. That should be one of the primary goals of any course in computing we teach for students outside of CS.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 30, 2008 10:08 AM

Notes on the SECANT Workshop: Table of Contents

This set of entries records my experiences at the 2008 SECANT workshop, October 30-31, hosted by the Department of Computer Science at Purdue University.

Primary entries:

  • Workshop 1: A Course in Computational Thinking
    -- SECANT a year later
  • Workshop 2: Computational Thinking in the Health Sciences
    -- big data is changing the research method of science
  • Workshop 3: Computational Thinking in Physics
    -- bringing computation to the undergrad physics curriculum
  • Workshop 4: Computer Scientists on CS Education Issues
    -- bringing science awareness to computer science departments
  • Workshop 5: Curriculum Development
    -- some miscellaneous projects in the trenches
  • Workshop 6: The Next Generation of Scientists in the Workforce
    -- computational thinking as competitive advantage

Ancillary entries:


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 15, 2008 7:52 AM

Social Networks and the Changing Relationship Between Students and Faculty

One of my most senior colleagues has recently become enamored of Facebook. One of his college buddies started using it to share pictures, so my colleague created an account. Within minutes, he had a friend request -- from a student in one of his classes. And they kept coming... He now has dozens of friends, mostly undergrads at our school but also a few former students and current colleagues.

Earlier this week, he stopped me in the hall to report that during his class the previous hour, a student in the class had posted a message on his own Facebook page saying something to the effect, "I can't keep my eyes open. I have to go to sleep!" How does the prof know? Because they are Facebook friends, of course.

Did the student think twice about posting such a message during class? I doubt it. Was he so blinded by fatigue or boredom that he forgot the prof is his friend and so would see the message? I doubt it. Is he at all concerned in retrospect, or even just a little sheepish? I doubt it. This is standard operating procedure for a college set that opens the blinds on its life, day by day and moment by moment.

We live in a new world. Our students live much more public lives than most of us did, and today's network technology knocks down the wall that separates Them from Us.

This can be a good thing. My colleague keeps his Facebook page open in the evenings, where his students can engage him in chat about course material and assignments. He figures that his office hours are now limited only by the time he spends in front of a monitor. Immediate interaction can make a huge difference to a student who is struggling with a database problem or a C syntax error. The prof does not mind this as an encroachment on his time or freedom; he can close the browser window and draw the blinds on office hours anytime he wants, and besides, he's hacking or reading on-line most of the time anyway!

I'm uncertain what the potential downsides of this new openness might be. There's always a risk that students can become too close to their professors, so a prof needs to take care to maintain some semblance of a professional connection. But the demystification of professors is probably a good thing, done right, because it enables connections and creates an environment more conducive to learning. I suppose one downside might be that students develop a sense of entitlement to Anytime, Anywhere access, and professors who can't or don't provide it could be viewed negatively. This could poison the learning environment on both sides of the window. But it's also not a new potential problem. Just ask students about the instructors who are never in their offices for face-to-face meetings or who never answer e-mail.

I've not had experience with this transformation due to Facebook. I do have a page, created originally for much the same reason as my colleague's. I do have a small number of friends, including undergrads, former students, current colleagues, a grade-school buddy, and even my 60+ aunt. But I use Facebook sparingly, usually for a specific task, and rarely have my page open. I don't track the comments on my "wall", and I don't generally post on others'. It has been useful in one particular case, though, reconnecting me with a former student whose work I have mentioned here. That has been a real pleasure. (FYI, the link to his old site seems to be broken now.)

However, I do have limited experience with the newly transparent wall between me and my students, through blogs. It started when a few students -- not many -- found my blog and began to read it. Then I found the blogs of a few recent students and, increasingly, current students. I don't have a lot of time to read any blogs these days, but when I do read, I read some of theirs. Blogs are not quite as immediate as the Twitter-like chatter to be found in Facebook, but they are a surprisingly candid look into my students' lives and minds. Struggles they have with a particular class or instructor; personal trials at home; illness and financial woes -- all are common topics in the student blogs I read. So, too, are there joys and excitement and breakthroughs. Their posts enlighten me and humble me. Sometimes I feel as if I am privy to far too much, but mostly I think that the personal connection enriches my relationship both with individual students and with the collective student body. What I read certainly can keep me on a better path as I play the role of instructor or guide.

And, yes, I realize that there is a chance that the system can be gamed. Am I being played by a devious student? It's possible, but honestly, I don't think it's a big issue. The same students who will post in full view of their instructor that they want to sleep through class without shame or compunction are the ones who are blogging. There is a cultural ethic at play, a code by which these students live. I feel confident in assuming that their posts are authentic, absent evidence to the contrary for any given blogger.

(That said, I appreciate when students write entries that praise a course or a professor. Most current students are circumspect enough not to name names, but there is always the possibility that they refer to my course. That hope can psyche me up some days.)

To be fair, we have to admit that the same possibility for gaming the system arises when professors blog. I suppose that I can say anything here in an effort to manipulate my students' perceptions or feelings. I might also post something like this, which reflects my take on a group of students, and risk affecting my relationship with those students. One of my close friends sent me e-mail soon after that post to raise just that concern.

For the same reasons I give the benefit of the doubt to student bloggers, I give myself the benefit of the doubt, and the same to the students who read this blog. To be honest, writing even the few entries I manage to write these days takes a lot of time and psychic energy. I have too little of either resource to spend them disingenuously. There is a certain ethic to blogging, and most of us who write do so for more important purposes than trying to manipulate a few students' perceptions. Likewise, I trust the students who read this blog to approach it with a mindset of understanding something about computer science and just maybe to get a little sense of what makes their Dear Old Professor tick.

I know that is the main reason I write -- to figure out how I tick, and maybe learn a few useful nuggets of wisdom along the way. Knowing that I do so in a world much more transparent than the one I inhabited as a CS student years ago is part of the attraction.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

October 09, 2008 5:53 PM

I Got Nowhere Else To Go

Some days, things go well, beyond expectation. Enjoy them! Today was one for me.

I've been thinking a lot about how students learn a new style of programming or a language that is quite different from their experience. Every class has its own personality, which includes interaction style, interest in Big Ideas, and curiosity. Last night it occurred to me that another important part of that personality is trust.

I was grading a quiz and suddenly felt a powerful personal connection to Gunnery Sergeant Foley from one of my favorite movies, An Officer and a Gentleman. There is a scene halfway through the film when he catches the protagonist, Zack Mayo, running an illegal contraband operation out of his barracks. The soldiers are in their room one afternoon when Foley walks in and declaims, "In every class, there's always one guy who thinks he's smarter than me. In this class, that's you, Mayo." He then dislodges a ceiling tile to reveal Mayo's stash of contraband and lets everyone know the jig is up.

Sergeant Foley breaking Mayo down

Beyond the occasional irrational desire I have to be Lou Gossett breaking the spirits of cocky kids and building them back up from scratch, while grading solutions to a particular exam problem I couldn't help but think, "In every class, there's always one guy who thinks he's smarter than me..." Some of the students seemed to be going out of their way not to use the technique we had learned in class, which resulted in them writing complex, often incorrect code. More practically for them, they ended up writing more code than they needed, which cost extra time they didn't have the luxury of spending. I felt bad for them grade-wise, but also a little sad that they seemed to have missed out on the beautiful idea behind the programming pattern they were not using.

(Don't worry, class. This irrational desire of mine is fleeting. I don't want your DOR. Quite the contrary; I am looking for ways to help you succeed!)

Sometimes, I wonder if the problem is that students don't really trust me. Why should they? Sure, I'm the teacher, but they feel pretty good about their programming skills, and the patterns I show them may be different and complex enough that they'd rather trust their own skills than my claim that, say, mutual recursion makes life better. They'll learn that with enough experience, and then they may realize that they can trust me after all.

In many ways, though, a bigger part of the problem may be a failure of storytelling. On my side are the stories I tell to engage students in an idea and its use. To paraphrase Merlin Mann paraphrasing Cliff Atkinson, I need to tell a story that makes the students feel like a character with a problem they care about and then show how our new way of solving their problem -- their problem -- makes them winners in the end. I think I do a better job of this now than I did ten years ago in this course, but I always wonder how I can do better.

On their side is, perhaps, a failure of their own storytelling -- not just about bugs, as Guzdial writes, but about the problem domain itself, the data types at play, and the kind of problem they are solving. I suspect writing code over nested symbolic lists that represent programs is so different from the students' experience that many of them have a hard time getting a real sense of what is going on. As long as the domain and task remain completely abstract in the mind, the problems look almost like random markings on the page. Where to start? That disorientation may account for not starting in what seems to me to be the obvious location.

As a teacher, failures in their storytelling become failures in my storytelling. I need to reconsider how I communicate the "big picture" behind my course. Asking students to create their own examples is one micro-step in this direction. But I also need to think about the macro-level -- something like XP's notion of metaphor. That practice has proved to be a stumbling block for XP, and I expect that it will remain a challenge for me.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

October 05, 2008 8:47 PM

The Key Word is "Thinking"

The Chief Justice of the U.S. Supreme Court, John Roberts, spoke last week at Drake University, which merited an article in our local paper. Roberts spoke on the history of technology in the law, and in particular on how the internet is changing in fundamental ways how the law is practiced. He likened the change to that created by the printing press, an analogy I use whenever I speak with parents and prospective CS majors.

The detective work that was important and rewarding when I was starting out is now almost ... irrelevant.

I wonder if this will have an effect on the kind of students who undertake study of the law, or the kind of lawyers who succeed in the profession. I don't imagine that it will affect the attractiveness of the law for a while, because I doubt that a desire to spend countless hours poring through legal journals is the primary motivator for most law students. Prestige and money are certainly more prominent, as is a desire to "make a difference". But who performs best may well change, as the circumstances under which lawyers work change. This sort of transformation is almost unavoidable when a new medium redefines even part of a discipline.

Roberts is perhaps concerned about this part of the change himself. Technology makes information more accessible, which means skill in finding it is no longer as valuable. How about skill at manipulating it? Being able to find information more readily can liberate practitioners, but only if they know what to do with it.

There's a lot of value in thinking outside the box. But the key word is "thinking". ... You cannot think effectively outside the box if you don't know where the box is.

I love that sentence! It's a nice complement to a phrase of Twyla Tharp's that I wrote about over three years ago: Before you can think out of the box, you have to start with a box. Tharp and Roberts are speaking of different boxes, and both are so right about both boxes.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 02, 2008 7:12 PM

The Opposite of "Don't Do That"

I had one of those agile moments on Wednesday. A colleague stopped by my office to share his good feeling. He had just come from a CS 2 lab. "I love it whenever I design a lab in which students work in pairs. There is such life in the lab!" He went on to explain the interactions within pairs but also across pairs; one group would hear what another was thinking or doing, and would ask about it. So much learning was in the air.

This reminded me of the old joke... Patient: "Doctor, it hurts when I do this." (Demonstrates.) "Can you help me?" Doctor: "Sure. Don't do that."

Of course, it reminded me of the negative space around the joke. Patient: "Doctor, life is great when I do this." (Demonstrates.) "Can you help me?" Doctor: "Sure. Do more of that."

"But..." But. We faculty are creatures of habit, both in knowing and in doing. We just know we can't teach all of our material with students working in pairs, so we don't. I think we can, even when I don't follow my own advice. (Doctor, heal thyself!) We design the labs, so if we want students to work in pairs, we can have them work in pairs.

I've had one or two successful experiences with all pair programming, all the time, in closed labs. Back when we taught CS1 and CS2 in C++, in the mid-1990s, and I was doing our first-year courses a lot, I designed all of my labs for students working in pairs. I wish I could say I had been visionary, but my motivation was extrinsic: I had 25-30 students in class and 15 computers in the lab. Students worked with different students every week, in pseudo-random assignments of my devising.

My C++-based courses probably weren't very good -- I was relatively new to teaching, and we were using C++ after all -- and the paired programming in our lab sessions may have been one of the saving graces: students shared their perplexity and helped each other learn. When they worked on outside programming assignments for the course, they could call on a sturdy network of friends they had built in lab sessions. Without the pairs, I fear that our course would have worked well for very few students.

If something works well, let's try to understand the context in which it works, and then do it more often in those contexts. That's an agile sentiment, whether we apply it to pair programming or not. Whether we apply it at the university or in industry. Whether we apply it to software development or any other practice in which we find ourselves engaged.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

September 26, 2008 5:24 AM

An Experiment with Students Creating Examples

A couple of weeks ago, I mentioned that I might have my students create their own examples for a homework assignment. Among the possible benefits of this were:

  • helping the programmers to write down their understanding of the problem in a concrete way early in the process
  • giving the programmers a way to ask concrete questions early in the process -- and a reason to ask the questions
  • helping the programmers know how much code to write and when to stop

I tried this and, as usual, learned as much or more than my students.

Getting students to think concretely about their tasks is tough, but asking them to write examples seemed to help. Most of them made a pretty good effort and so fleshed out what the one- or two-line text description I gave them meant. I saw lots of the normal cases for each task but also examples at the boundaries of the spec (What if the list is empty?) and on the types of arguments (What if the user passes an integer when the procedure asks for a list? What if the user passes -1 when the procedure expects a non-negative integer?) In class, before the assignment was due, we were able to discuss how much type checking we want our procedures to do, if any, in a language like Scheme without manifest types. Similarly, should we write examples with the wrong number of arguments, which result in an error?

I noticed that most students' examples contrasted cases with different inputs to a procedure, but that few thought about different kinds of output from the procedure. Can filter return an empty list? Well, sure; can you show me an example? I'll know next time to talk to students about this and have them think more broadly about their specs.
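To make concrete the kind of example set I have in mind, here is a hypothetical sketch (in Python rather than the Scheme we use in class, with names of my own invention) of what a student might write for a filter-like procedure, contrasting inputs *and* outputs:

```python
def keep_if(pred, items):
    """Return the items for which pred is true -- a filter."""
    return [x for x in items if pred(x)]

# Examples written before the code, covering inputs *and* outputs:
examples = [
    (keep_if(lambda n: n > 0, [1, -2, 3]), [1, 3]),   # a normal case
    (keep_if(lambda n: n > 0, []),         []),       # empty input
    (keep_if(lambda n: n > 0, [-1, -2]),   []),       # nonempty input, empty output
    (keep_if(lambda n: True,  [1, 2]),     [1, 2]),   # everything kept
]
for actual, expected in examples:
    assert actual == expected
```

The third example is the one most students skip: the input is perfectly ordinary, but the *output* is empty, and writing it down forces the student to think about the procedure's range, not just its domain.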

Requiring examples part-way through the assignment did motivate questions earlier than usual. On previous assignments, if I received any questions at all, they tended to arrive in my mailbox the night before the programs were due. That was still the case, but now the deadline was halfway through the assignment period, before they had written any code. And most of the class seemed happy to comply with my request that they write their examples before they wrote their code. (They are rarely in a hurry to write their code!)

Did having their own examples in hand help the students know how much code to write and when to stop? Would examples provided by me have helped as much? I don't know, but I guess 'yes' to both. Hmm. I didn't ask students about this! Next time...

Seeing their examples early helped me as much as writing their examples early helped them. They got valuable feedback, yes, but so did I. I learned a bit of what they were thinking about the specific problems at hand, but I also learned a bit of what they think about more generally when faced with a programming task.

My first attempt at this also gave me some insight about how to describe the idea of writing examples better, and why it's worth the effort. The examples should clarify the textual description of the problem. They aren't about testing. They may be useful as tests later, but they probably aren't sufficient. (They approximate a form of black box testing, but not white box testing.) As clarifiers, one might take an extreme position: If the textual description of the problem were missing, would the examples be enough for us to know what procedure to write? At this extreme, examples with the wrong number and type of arguments might be essential; in the more conventional role of clarifying the spec, those examples are unnecessary.

One thing that intrigued me after I made this assignment is that students might use their examples as the source material for test-driven development. (There's that word again.) I doubt many students consider this on their own; a few have an inclination to write and test code in close temporal proximity, but TDD isn't a natural outgrowth of that for most of them. In any case, we are currently learning a pattern-driven style of programming, so they have a pretty good idea of what their simplest piece of code will look like. There is a nice connection, though. Structural recursion relies on mimicking the structure of the input data, and that data definition also gives the programmer an idea about the kinds of input for which she should have examples. That s-list is either an empty list or a pair...
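The way structural recursion mirrors the data definition can be shown in a few lines. Here is a hypothetical Python analogue of an s-list procedure (the class writes these in Scheme), where the shape of the code follows "an s-list is either empty, or a first part -- a symbol or another s-list -- plus the rest":

```python
def count_symbol(sym, slist):
    """Count occurrences of sym in a nested list of symbols.
    The code mirrors the data definition: an s-list is either
    empty or a first element (symbol or s-list) plus the rest."""
    if not slist:                          # case 1: the empty s-list
        return 0
    first, rest = slist[0], slist[1:]
    if isinstance(first, list):            # case 2a: first is itself an s-list
        return count_symbol(sym, first) + count_symbol(sym, rest)
    return (first == sym) + count_symbol(sym, rest)   # case 2b: a symbol
```

Each case in the code answers to one case in the data definition, and each case in the data definition suggests an example the programmer should write -- which is the connection between structural recursion and example-first development.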

I'm probably reinventing a lot of wheels that the crew behind How to Design Programs smoothed out long ago. But I feel like I'm learning something useful along the way.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

September 23, 2008 6:53 PM

Shut Up. Better Yet, Ask a Question.

On the way out of class today, I ran into the colleague who teaches in the room after me. I apologized for being slow to get out of the room and told him that I had talked more than usual today. From the looks on the faces of my students, I gathered that they needed a bit more. What they really needed was more time with the same material. Most of all, they needed me to slow down -- rather than cover more material, they needed a chance to think more about what they had just learned. My way of doing that was to keep talking about the current example.

I told my colleague that there is probably a pedagogical pattern called Shut Up. And if not, then maybe there should be.

He said that the real pattern is Ask a Question.

I bowed down to him.

We talked a bit more, about how we both desire to use the Ask a Question pattern more often. We don't, out of habit and out of convenience. Professors lecture. It's what we do. The easiest thing to do is almost always: just keep talking, saying what I had planned to say.

I give myself some credit for how I ended class today. At the very least, I realized that I should not introduce new material. I was able to Let the Plan Go [1].

Better than sticking to a plan that is off track for my students is to keep talking, but about the same stuff, only in a different way. This can sometimes be good. It gives me a chance to show students another side of the same idea, so that they might understand the idea better by seeing it from different perspectives.

Is Shut Up better than that? Sometimes. There are times when students just need... time -- time for the idea to sink in, time to process.

Is Ask a Question better still? Yes, in most cases. Even if I show students an idea, rather than telling them something, they remain largely passive in the process. Asking a question engages them in the idea. More and different parts of their brain can go to work. Most everything we know about how people learn says that this is A Good Thing.

Now, I do give myself a little credit here, too. I know about the Active Student pattern [2] and have changed my habits slowly over time. I try to toss in a question for students every now and then, if only to shut myself up for a while. But my holding pattern today probably didn't use enough questions. I was under time pressure (class is almost over!) and didn't have the presence of mind to turn the last few minutes into an exercise. I hope to do better next time.

~~~~~

[1] You can read the Let the Plan Go pattern in Seminars, an ambitious pattern language by Astrid Fricke and Markus Völter.

[2] The Active Student pattern is documented in Joe Bergin's paper "Some Pedagogical Patterns". There is a lot of good stuff in this one!


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

September 23, 2008 6:47 AM

From a Champion's Mind

I'm a big tennis fan. I like to play and would love to play more, though I've never played well. But I also like to watch tennis -- it is a game of athleticism and strategy. The players are often colorful, yet many of the greatest have been quiet, classy, and respectful of the game. I confess a penchant for the colorful players; Jimmy Connors is my favorite player of all time, and in the 1990s my favorite was Andre Agassi.

Agassi's chief rival throughout his career was one of the game's all-time greats, Pete Sampras. Sampras won a record fourteen Grand Slam titles (a record under assault by the remarkable Roger Federer) and finished six consecutive years as the top-ranked player in the world (a record that no one is likely to break any time soon). He was also one of the quiet, respectful players, much more like me than the loud Agassi, who early in his career seemed to thrive on challenging authority and crossing boundaries just for the attention.

Sampras recently published a tennis memoir, A Champion's Mind, which I gladly read -- a rare treat these days, reading a book purely for pleasure. But even while reading for pleasure I could not help noticing parallels to my professional interest in software development and teaching. I saw in Sampras's experience some lessons that we in CS have also learned. Here are a few.

Teaching and Humility

After Sampras had made his mark as a great player, one of his first coaches liked to be known as one of the coaches who helped make Sampras the player he was. Sampras gave that coach his due, and gave the two men who coached him for most of his pro career a huge amount of credit for honing specific elements of his game and strategy. But without sounding arrogant, he also was clear that no coach "made" him. He had a certain amount of native talent, and he was also born with the kind of personality that drove him to excel. Sampras would likely have been one of the all-time greats even if he had had different coaches in his youth, and even as a pro.

Great performers have what it takes to succeed. It is rare for a teacher to help create greatness in a student. What made Sampras's pro coaches so great themselves is not that they built Sampras but that they were able to identify the one or two things that he needed at that point in his career and helped him work on those parts of his game -- or his mind. Otherwise, they let the drive within him push him forward.

As a teacher, I try to figure out what students need and help them find that. It's tough to do when teaching a class of twenty-five students, because so much of the teaching is done with the group and so cannot be tailored to the individual as much as I might like and as much as each might need. But when mentoring students, whether grad students or undergrads, a dose of humility is in order. As I think back to the very best of my past students, I realize that I was most successful when I helped them get past roadblocks or to remove some source of friction in their thinking or their doing. Their energy often energized me, and I fed off of the relationship as much as they did.

Agile Moments

The secret of greatness is working hard day in and day out. Sampras grew as a player because he had to in order to achieve his goal of finishing six straight years as #1. And the only way to do that was to add value to his game every day. This seems consistent with agile developers' emphasis on adding value to their programs every day, through small steps and daily builds. Being out there every day also makes it possible to get feedback more frequently and so make the next day's work potentially more valuable. For some reason, Sampras's comments on a commitment to being in the arena day in and day out reminded me of one of Kent Beck's early bits of writing on XP, in which he proclaimed that, at the end of the day, if you hadn't produced some code, you probably had not given your customer any value. I think Sampras felt similarly.

Finally, this paragraph from a man who never changed the model of racket he used throughout his career, even as technology made it possible for lesser players to serve bigger and hit more powerful ground strokes. Here he speaks of the court on which his legend grew beyond ordinary proportion, Centre Court at the All England Club:

I enjoyed the relative "softness" of the court; it was terrific to feel the sod give gently beneath my feet with every step. I felt catlike out there, like I was on a soft play mat where I could do as I pleased without worry, fear, or excessive wear and tear. Centre Court always made me feel connected to my craft, and the sophisticated British crowd enhanced that feeling. It was a pleasure to play before them, and they inspired me to play my best. Wimbledon is a shrine, and it was always a joy to perform there.

Whatever else the agile crowd is about, feeling connected to the craft of making software is at its heart. I like to use tools that give gently beneath my feet, that let me make progress without worry and fear. Even ordinary craftsmen such as I appreciate these feelings.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

September 19, 2008 5:12 PM

Design Creates People, Not Things

The latest issue of ACM's on-line pub Ubiquity consists of Chauncey Bell's My Problem with Design, an article that first appeared on his blog a year ago. I almost stopped reading it early on, distracted by other things and not enamored with its wordiness. (I'm one to talk about another writer's wordiness!) I'm glad I read the whole article, because Bell has an inspiring take on design for a world that has redefined the word from its classic sense. He echoes a common theme of the software patterns and software craftsmanship crowd, that in separating design from the other tasks involved in making an artifact we diminish the concept of design, and ultimately we diminish the quality of the artifact thus made.

But I was especially struck by these words:

The distinctive character of the designer shapes each design that affects us, and at the same time the designer is shaped by his/her inventions. Successful designs shape those for whom they are designed. The designs alter people's worlds, how they understand those worlds, and the character and possibilities of inhabiting those worlds. ...

Most of our contemporaries tell a different story about designing, in which designers fashion or craft artifacts (including "information") that others "use." One reason that we talk about it this way, I think, is that it can be frightening to contemplate the actual consequences of our actions. Do we dare speak a story in which, in the process of designing structures in which others live, we are designing them, their possibilities, what they attend to, the choices they will make, and so forth?

(The passage I clipped gives the networked computer as the signature example of our era.)

Successful designs shape those for whom they are designed. In designing structures for people, we design them, their possibilities.

I wonder how often we who make software think this sobering thought. How often do we simply string characters together without considering that our product might -- should?! -- change the lives of its users? My experience with software written by small, independent developers for the Mac leads me to think that at least a few programmers believe they are doing something more than "just" cutting code to make a buck.

I have had similar feelings about tools built for the agile world. Even if Ward and Kent were only scratching their own itches when they built their first unit-testing framework in Smalltalk, something tells me they knew they were doing more than "making a tool"; they were changing how they could write Smalltalk. And I believe that Kent and Erich knew that JUnit would redefine the world of the developers who adopted it.

What about educators? I wonder how often we who "design curriculum" think this sobering thought. Our students should become new people after taking even one of our courses. If they don't, then the course wasn't part of their education; it's just a line on their transcripts. How sad. After four years in a degree program, our students should see and want possibilities that were beyond their ken at the start.

I've been fortunate in my years to come to know many CS educators for whom designing curriculum is more than writing a syllabus and showing up 40 times in a semester. Most educators care much more than that, of course, or they would probably be in industry. (Just showing up out there pays better than just showing up around here, if you can hold the gig.) But even if we care, do we really think all the time about how our courses are creating people, not just degree programs? And even if we think this way in some abstract way, how often do we let it seep down into our daily actions? That's tough. A lot of us are trying.

I know there's nothing new here. Way back, I wrote another entry on the riff that "design, well done, satisfies needs users didn't know they had". Yet it's probably worth reminding ourselves about this every so often, and to keep in mind that what we are doing today, right now, is probably a form of design. Whose world and possibilities are we defining?

This thought fits nicely with another theme among some CS educators these days, context. We should design in context: in the context of implementation and the other acts inherent in making something, yes, but also in the context of our ultimate community of users. Educators such as Owen Astrachan are trying to help us think about our computing in the context of problems that matter to people outside of the CS building. Others, such as Mark Guzdial, have been preaching computing in context for a while now. I write occasionally on this topic here. If we think about the context of our students, as we will if we think of design as shaping people, then putting our courses and curricula into context becomes the natural next step.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

September 16, 2008 9:43 PM

More on the Nature of Computer Science

Another entry generated from a thread on a mailing list...

A recent thread on the SIGCSE list began as a discussion of how programming language constructs are abstractions of the underlying hardware, and what that means for how students understand the code they write. For example, this snippet of Java:

    int x = 1;
    while (x > 0)
        x++;

does not result in an infinite loop, because Java ints are not integers.
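
A quick way to see the wraparound that loop depends on is to push an int past its maximum value directly. This is my own minimal sketch, not from the original discussion:

```java
public class IntWrap {
    public static void main(String[] args) {
        // Java ints are 32-bit two's-complement values, not mathematical integers.
        int x = Integer.MAX_VALUE;   // 2147483647
        x++;                         // overflows silently...
        System.out.println(x);       // ...wrapping to -2147483648 (Integer.MIN_VALUE)

        // So the loop from the example terminates: after Integer.MAX_VALUE
        // increments, x wraps to a negative value and the test x > 0 fails.
        int y = 1;
        long steps = 0;
        while (y > 0) { y++; steps++; }
        System.out.println(y + " after " + steps + " increments");
    }
}
```

The loop really does run over two billion times before stopping, which is its own small lesson about the edges of the abstraction.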

This is one of many examples that remind us how important it is to study computer organization and architecture, and more generally to learn that abstractions are never 100% faithful to the details they hide. If they were, they would not be abstractions! A few good abstractions make all the difference in how we work, but -- much like metaphor -- we have to pay attention to what happens at their edges.

Eventually, the thread devolved toward a standard old discussion on this list, "What is Computer Science?" I conjecture that every mailing list, news group, and bulletin board has a topic that is its "fixed point", the topic toward which every conversation ultimately leads if left to proceed long enough, unfettered by an external force. Just about every Usenet newsgroup in which I participated during the late 1980s and early 1990s had one, and the SIGCSE list does, too. It is, "What is Computer Science?"

This question matters deeply to many people, who believe that graduates of CS programs have a particular role to play in the world. Some think that the primary job of undergraduate CS programs is to produce software engineers. If CS is really engineering (or at least should be thought of that way for practical reasons), then the courses we teach and the curricula we design should have specific outcomes, teach specific content, and imbue in students the mindset and methodology of an engineer. If CS is some sort of liberal art, then our courses and curricula will look quite different.

Much of this new thread was unremarkable if only because it all sounded so familiar to me. One group of people argued that CS is engineering, and another argued that it was more than engineering, perhaps even a science. I must have been in an ornery mood, because one poster's assertion provoked me to jump into the fray with a few remarks. He claimed that CS was not a science, because it is not a "natural science", and that it is not a natural science because the object of its study is not a natural phenomenon:

I don't believe that I have ever seen a general purpose, stored-program computing device that occurs in nature... unless we want to claim that humans are examples of such devices.

This seems like such a misguided view of computer science, but many people hold it. I'm not surprised that non-computer scientists believe this, but I am still surprised to learn that someone in our discipline does, too. Different people have different backgrounds and experiences, and I guess those differences can lead people to widely diverging viewpoints.

Computer science does not study the digital computer. Dijkstra told us so a long time ago, and if we didn't believe him then, we should now, with the advent of ideas such as quantum computing and biological computing.

Computer science is about processes that transform information. I see many naturally-occurring processes in the world. It appears now that life is the result of an information process, implemented in the form of DNA. Chemical processes involve information as well as matter. And some physicists now believe that the universe as we experience it is a projection of two-dimensional information embodied in the interaction of matter and energy.

When we speak of these disciplines, we are saying more than that computer scientists use their tool -- a general-purpose computation machine -- to help biologists, chemists, and physicists do science in their areas. We are talking about a more general view of processes and information, how they behave in theory and under resource constraints. Certainly, computer scientists use their tools to help practitioners of other disciplines do their jobs differently. But perhaps more important, computer scientists seek to unify our understanding of processes and information across the many disciplines in which they occur, in a way that sheds light on how information processing works in each discipline. We are still at the advent of the cycle feeding back what we learn from computing into the other disciplines, but many believe that this is where the greatest value of computer science ultimately lies. This means that computer science is wonderful not only because we help others by giving them tools but also because we are studying something important in its own right.

If we broaden our definition of "naturally occurring" to include social phenomena in large, complex systems that were not designed by anyone in particular, then the social sciences give rise to a whole new class of information processes. Economic markets, political systems, and influence networks all manifest processes that manipulate and communicate information. How do these processes work? Are they bound by the same laws as physical information processing? These are insanely interesting questions, whose answers will help us to understand the world we live in so much better than we do now. Again, study of these processes from the perspective of computer science is only just beginning, but we have to start somewhere. Fortunately, some scientists are taking the first steps.

I believe everything I've said here today, but that doesn't mean that I believe that CS is only science. Much of what we do in CS is engineering: of hardware systems, of software systems, of larger systems in which the manipulation of information is but one component. Much of what we do is mathematics: finding patterns, constructing abstractions, and following the implications of our constructions within a formal system. That doesn't mean computer science is not also science. Some people think we use the scientific method only as a tool to study engineered artifacts, but I think that they are missing the big picture of what CS is.

The fact that people within our discipline still grapple with this sense of uncertainty about its fundamental nature does not disconcert me. We are a young discipline, unlike any of the disciplines that came before (which are themselves human constructs, attempts to classify knowledge of the world). We do not need to hide from this unique character, but should embrace it. As Peter Denning has written over the years: Is computer science science? Engineering? Mathematics? The answer need not be one of the above. From different perspectives, it can be all three.

Of course, we are left with the question of what it is like for a discipline to comprise all three. Denning's Rebooting Computing summit will bring together people who have been thinking about this conundrum in an effort to make progress, or chart a course. On the CS education front, we need to think deeply about the implications of CS's split personality for the design of our curricula. Owen Astrachan is working on innovating the image of CS in the university by turning our view outward again to the role of computer science in understanding a world bigger than the insides of our computers or compilers. Both of these projects are funded by the NSF, which seems to appreciate the possibilities.

I can't think about the relationship between computer science and natural science without thinking of Herb Simon's seminal Sciences of the Artificial. I don't know whether reading it would change enough minds, but it affected deeply how I think about complex systems, intentionality, and science.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

September 12, 2008 5:43 PM

Creating Examples and Writing Programs

On the PLT Scheme mailing list, someone asked why the authors of How to Design Programs do not provide unit tests for their exercises. The questioner could understand not giving solutions, but why not give the examples that the students could use to guide their thinking? A list member who is not an HtDP co-author speculated that if the authors provided unit tests then students would not bother to implement their own.

Co-author Matthias Felleisen responded "Yes" and added this stronger assertion:

I have come to believe that being able to make up your own examples (inputs, outputs) is the critical step in solving most problems.

Writing examples is one of the essential elements of the "design recipe" approach on which Felleisen et al. base How to Design Programs. The idea itself isn't new, as I'm sure the book's authors will tell you. Some CS teachers have been requiring students to write test cases or test plans for many years, and the practice is similar to what some engineers learn to do from the start of their education. Heck, test-driven design has gone from being the latest rage in agile development to an accepted (if too infrequently practiced) part of creating software.

What HtDP and TDD do is to remind us all of the importance of the practice and to make it an essential step in the student's or developer's programming process.

What struck me about Matthias's response is the claim that making up examples is the critical step in writing code. It is certainly reasonable, for so many reasons, among them:

  • It forces the programmer to write down her understanding of the problem in a concrete way early in the process. Concrete understanding is always preferable to the fuzzy-minded understanding that follows reading a problem statement. Besides, writing a program requires that level of understanding.

  • Having examples in hand gives the programmer a way of talking to the client or teacher to see if her understanding matches that of the person who "owns" the problem.

  • It gives the programmer a way of knowing how much code to write and so when to stop. Most students can use that guidance.
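
To make the practice concrete, here is my own small illustration in Java, using a hypothetical `median` exercise rather than one from HtDP. The examples in `main` were written before the body of the function, and they immediately surface the questions a student must answer, such as what to do with an even-length or unsorted input:

```java
import java.util.Arrays;

public class MedianExample {
    // Hypothetical exercise: compute the median of a non-empty array of ints.
    static double median(int[] xs) {
        int[] sorted = xs.clone();   // don't mutate the caller's array
        Arrays.sort(sorted);
        int n = sorted.length;
        if (n % 2 == 1)
            return sorted[n / 2];                          // odd: middle element
        return (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;  // even: average the middle pair
    }

    public static void main(String[] args) {
        // The examples, written first, pin down the behavior:
        assert median(new int[]{3}) == 3.0;           // singleton
        assert median(new int[]{3, 1, 2}) == 2.0;     // odd length, unsorted input
        assert median(new int[]{4, 1, 3, 2}) == 2.5;  // even length
        System.out.println("all examples pass");
    }
}
```

Writing those three lines first tells the programmer exactly when the code is done -- and gives her something concrete to show the person who owns the problem.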

I usually give my students several examples as a part of specifying problems and ask them to write a few of their own. Most don't do much on their own and, uncharacteristically, I don't hold them accountable often enough. My next programming assignment may look different from the previous ones; I have an idea of how to sneak this little bit of design recipe thinking into the process.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

September 09, 2008 6:24 PM

Language, Patterns, and Blogging

My semester has started with a busy bang, complicated beyond usual by a colleague's family emergency, which has me teaching an extra course until he returns. The good news is that my own course is programming languages, so I am getting to think about fun stuff at least a couple of days a week.

Teaching Scheme to a typical mix of eager, indifferent, and skeptical students brought to mind a blog entry I read recently on Fluent Builders in Java. This really is a neat little design pattern for Java or C++ -- a way to make code written in these languages look and feel so much better to the reader. But looking at the simple example:

Car car = Car.builder()
   .year(2007)
   .make("Toyota")
   .model("Camry")
   .color("blue")
   .build();

... I can't help but think of the old snark that we are reinventing Smalltalk and Lisp one feature at a time. A language extension here, a design pattern there, and pretty soon you have the language people want to use. Once again, I am turning into an old curmudgeon before my time.
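
For contrast, here is roughly what the boilerplate behind that call chain looks like -- my own sketch of the standard Java idiom, using the field names from the snippet. Each setter returns the builder itself, which is what lets the sends cascade:

```java
public class Car {
    private final int year;
    private final String make, model, color;

    private Car(Builder b) {
        this.year = b.year;
        this.make = b.make;
        this.model = b.model;
        this.color = b.color;
    }

    public static Builder builder() { return new Builder(); }

    @Override
    public String toString() {
        return year + " " + color + " " + make + " " + model;
    }

    public static class Builder {
        private int year;
        private String make, model, color;

        // Returning `this` from each setter is the whole trick.
        public Builder year(int y)     { this.year = y;   return this; }
        public Builder make(String m)  { this.make = m;   return this; }
        public Builder model(String m) { this.model = m;  return this; }
        public Builder color(String c) { this.color = c;  return this; }

        public Car build() { return new Car(this); }
    }
}
```

A screenful of code to fake four named parameters -- which is exactly the snark's point.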

As the author points out in a comment, Ruby gives us a more convenient way to fake named parameters: passing a hash of name/value pairs to the constructor. This is a much cleaner hack for programmers, because we don't have to do anything special; hashes are primitives. From the perspective of teaching Programming Languages this semester, what I like most about the Ruby example is that it implements the named parameters in data, not code. The duality of data and program is one of those Big Ideas that all CS students should grok before they leave us, and now I have a way to talk about the trade-off using Java, Scheme, and an idiomatic construction in Ruby, a language gaining steam in industry.

Of course, we know that Scheme programmers don't need patterns... This topic came up in a recent thread on the PLT Scheme mailing list. Actually, the Scheme guys gave a reasonably balanced answer, in the context of a question that implied an unnecessary insertion of pattern-talk into Scheme programming. How would a Scheme programmer solve the problem that gives rise to fluent builders? Likely, write a macro: extend the language with new syntax that permits named parameters. This is the "pattern as language construct" mentality that extensible syntax allows. (But this leaves other questions unanswered, including: When is it worth the effort to use named parameters in this way? What trade-offs do we face among various ways to implement the macro?)

Finally, thinking ahead to next semester's compilers class, I can't help but think of ways to use this example to illustrate ideas we'll discuss there. A compiler can look for opportunities to optimize the cascaded message send shown above into a single function call. A code generator could produce a fluent builder for any given class. The latter would allow a programmer to use a fluent builder without the tedium of writing boilerplate code, and the former would produce efficient run-time code while allowing the programmer to write code in a clear and convenient way. See a problem; fix it. Sometimes that means creating a new tool.

Sometimes I wonder whether it is worth blogging ideas as simple as these. What's the value? I have a new piece of evidence in favor. Back in May 2007, I wrote several entries about a paper on the psychology of security. It was so salient to me for a while that I ended up suggesting to a colleague that he might use the paper in his capstone course. Sixteen months later, it is the very same colleague's capstone course that I find myself covering temporarily, and it just so happens that this week the students are discussing... Schneier's paper. Re-reading my own blog entries has proven invaluable in reconnecting with the ideas that were fresh back then. (But did I re-read Schneier's paper?)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

August 27, 2008 12:25 PM

What Grades Mean

My younger daughter entered seventh grade this year, and at the orientation session last week the teachers made a point of saying that timeliness matters. If a student turns in late work, they will be "docked". My mind ended up wandering away as I thought about what this means for her grade. If she ends up with a B, what does that say about her mastery of the material? About her timeliness?

Perhaps I was primed for this daydream by a conversation I had had recently with a colleague who teaches one of our CS1 sections. Traditionally, he has had a very lax policy on late work: get it done, even late, and he would grade it straight up. His thinking was that this would encourage students to stick with assignments and get the practice they need. In past years, this policy has worked all right for him, but in the last year or so he has noticed more students putting off more assignments, with many students turning in several or all of their assignments at the end of the semester. Not surprisingly, these students do poorly on the exams for lack of practice and so do poorly in the course overall.

He and I contrasted his policy with mine, which is that late work is not accepted for grading. I'm always willing to look at a student program after the deadline, but it will not count for credit. This is one of the few ways in which I draw a hard line with students, but I find that it encourages students to take assignments seriously and to get practice regularly throughout the semester.

Until I heard my daughter's teachers talk about their policy, I'm not sure I had realized quite so clearly: My late work policy conflates mastery of content with professional work habits. A student can learn everything I want him to learn and more, yet earn a low grade by not submitting assignments on time.

To be honest, that's probably not a problem. In our current system, it is not entirely clear what a grade means anyway. Across universities, across departments at the same university, and even across faculty within the same department, grades can signify very different results. Conflating the evaluations of knowledge and behavior is only one source of variation, and almost certainly not the most significant.

Employers who hire our graduates want employees who know their discipline and who deliver results in a professional manner. Still, I can't help but think what it would be like to offer two grades for a course, one for content and one for all that other stuff: timeliness, teamwork, neatness, etc. Instructor: "Johnny, you get a B for your understanding of operating systems, and a D for behavior, because you don't color within the lines." Employer: "We really need someone with the right professional skills for this position; let's teach him what he needs to know after he gets here."

Increasingly, I am drawn to a competency-based scheme for grading what students know. West and Rostal have been advocating this idea for a while, as part of a larger overhaul of CS education. It takes some work to do right, but the effect on what we expect of our students might be worth it. Unfortunately, within the broader university culture of grades and effort and time-delimited courses carved out of a discipline's body of knowledge, moving in this direction creates logistic costs that may be larger than the pedagogical ones.

In any case, I've been thinking of ways I might change my grading scheme. I'm not likely to change the "no late work" policy, at least not for upper-division courses, and to be honest I find that very few students have a problem getting their work in on time in face of the policy. (Whether the work is complete is another matter...) Still, I might consider changing how the homework grade figures into the overall grade. Perhaps instead of counting homework as 30% of the grade, I could count it for "up to 30%" and let the student select the percentage. Students who would rather not bother with the falderol of assignment requirements could stake more or all of their grade on exams; students who worry about exams could stick with 30%. Perhaps having that be their choice and not mine would motivate them even more to make a good faith effort at completing the entire assignment on time.

I suppose that my real concern in all this thinking is with my seventh-grader. She, my wife, and I already pay close attention to her work behavior, trying to help her develop good habits. She's already a conscientious student who just needs to learn how to manage her own time. We also pay close attention to her understanding of the content in her classes, but her assignment and test grades are a big part of how we track that progress. As the grades she receives begin to include both elements, we'll want to pay closer attention to her understanding of the material in other ways. I guess I'm in the same position as the employers who hire my students now!


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

August 26, 2008 3:58 PM

The Start of the Semester

I taught my first session of Programming Languages today, so the semester is officially underway. Right now, my mind is a swirl of Scheme, closures, continuations, MapReduce, and lazy evaluation. I've been teaching this course for a dozen years based on functional programming (a style new to our students at this point) and writing interpreters in Scheme. This makes me very comfortable with the material. Over the years I have watched ideas work their way from the niche of PL courses into mainstream languages. The resurgence of scripting languages has been both a result of this change and a trigger. The discussion of true closures in languages such as Ruby and Java is one example.

This evolution is fun to watch, even if it moves haltingly and perhaps slower than I'd prefer. In order to keep my course current, I need to incorporate some of these changes into my course. This time around, I find myself thinking about what ideas beyond the "edge" of practical languages I should highlight in my course. I'd like for my students to learn about some of the coolest ideas that will be appearing in their professional practice in the near future. For some reason, lazy evaluation seems ripe for deeper consideration. Working it into my class more significantly will be a project for me this semester.

Delving headlong into a new semester's teaching makes Jorge Cham's recent cartoon seem all the more true:

How Professors Spend Their Time -- Jorge Cham

For faculty at a "teaching university", the numbers are often skewed even further. Of course, I am an administrator now, so I teach but one course a semester, not three. Yet the feeling is the same, and the desire to spend more time on real CS -- teaching and research -- is just as strong. Maybe I can add a few hours to each day?


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

August 20, 2008 2:19 PM

Stalking the Wily Misconception

Recently, someone sent me a link to Clifford Stoll's TED talk from February 2006, and yesterday I finally watched. Actually, I listened more than I watched, for two reasons. First, because I was multitasking in several other windows, as I always am at the machine. Second, because Stoll's manic style of jumping around the stage isn't much to my liking.

As a university professor and a parent, I enjoyed the talk for its message about science and education. It's worth listening to simply for the epigram he gives in the first minute or so, about science, engineering, and technology, and for the quote he recites to close the talk. (Academic buildings have some of the coolest quotes engraved right on their walls.) But the real meat of the talk doesn't start until midway through.

Prodded by schoolteachers to whom he was talking about science in the schools, Stoll decided that he should put his money where his mouth is: he became a science teacher. Not just giving a guest lecture at a high school, but teaching a real junior-high science class four days a week. He doesn't do the "turn to Chapter 7 and do all the odd problems" kind of teaching either, but real physics. For example, his students measure the speed of light. They may be off by 25%, but they measured the speed of light, using experiments they helped design and real tools. This isn't the baking soda volcano, folks. Good stuff. And I'll bet that junior-high kids love his style; he's much better suited for that audience than I!

One remark irked me, even if he didn't mean it the way I heard it. At about 1:38, he makes a short little riff on his belief that computers don't belong in schools. "No! Keep them out of schools", he says.

In one sense, he is right. Educators, school administrators, and school boards have made "integrating technology" so big a deal that computers are put into classrooms for their own sake. They become devices for delivering lame games and ineffective simulations. We teach Apple Keynote, and students think they have learned "computers" -- and so do most teachers and parents. When we consider what "computers in schools" means to most people, we probably should keep kids away from them, or at least cut back their use.

At first, I thought I was irked at Stoll for saying this, but now I realize that I should be irked at my profession for not having done a better job both educating everyone about what computers really mean for education and producing the tools that capitalize on this opportunity.

Once again I am shamed by Alan Kay's vision. The teachers working with Alan have their students do real experiments, too, such as measuring the speed of gravity. Then they use computers to build executable models that help students to formalize the mathematics for describing the phenomenon. Programming is one of their tools.

Imagine saying that we should keep pencils and paper out of our schools, returning to the days of chalk slates. People would laugh, scoff, and revolt. Saying we should keep computers out of schools should elicit the same kind of response. And not because kids wouldn't have access to e-mail, the web, and GarageBand.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

August 15, 2008 2:35 PM

Less, Sooner

Fall semester is just around the corner. Students will begin to arrive on campus next week, and classes start a week from Monday. I haven't been able to spend much time on my class yet and am looking forward to next week, when I can.

What I have been doing is clearing a backlog of to-dos from the summer and handling standing tasks that come with the start of a new semester and especially a new academic year. This means managing several different to-do lists, juggling priorities, and generally trying to get things done.

As I look at this mound of things to do I can't help being reminded of something Jeff Patton blogged a month or so ago: two secrets of success in software development, courtesy of agile methods pioneer Jim Highsmith: start sooner, and do less.

Time ain't some magical quantity that I can conjure out of the air. It is finite, fixed, and flowing relentlessly by. If I can't seem to get done on time, I need to start sooner. If I can't seem to get it all done, I need to do less. Nifty procedures and good tools can help only so much.

I need to keep this in mind every day of the year.

Oh, and to you students out there: You may not be able to do less work in my class, but you can start sooner. You may have said so yourself at the end of last semester. Heck, you may even want to do more, like read the book...


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

August 08, 2008 4:11 PM

SIGCSE Day 2 -- This and That

[A transcript of the SIGCSE 2008 conference: Table of Contents]

(Okay, so I am over four months behind posting my last couple of entries from SIGCSE. Two things I've read in the last week or so jolted my memory about one of these items. I'll risk that they are no longer of much interest and try to finish off my SIGCSE reports before classes start.)

A Discipline, Not Just a Job

During his talk, I think, Owen Astrachan said:

Don't talk about where the jobs are. We do not need to kowtow to the market. CS is ideas, a discipline.

We do, of course, need to keep in mind that the first motivation for many of our students is to get a job. But Owen is right. To the extent that we "sell" anything, let's sell that CS is a beautiful and powerful set of ideas. We can broaden the minds of our job-seeking students -- and also attract thinking students who are looking for powerful ideas.

When Good Students Are Too Good

Rich Pattis tossed out an apparently old saw I had never heard: Don't give your spec to the best programmer in the room. She will make it work, even if the spec isn't what you want and doesn't make sense. Give it to a mediocre programmer. If the spec is bad, he will fail and come back with questions.

This applies to homework assignments, too. Good students can make anything work, and most will. That good students solved your problem is not evidence of a well-written spec.

Context Complicates

I've talked a lot here about giving students problems in context, whether in the context of large projects or in the context of "real" problems. As I was listening to Marissa Mayer's talk and lunchtable conversation, I was reminded that context complicates matters, for both teacher and students. We have to be careful when designing instruction to be sure that students are able to attend to what we want them to learn, and not be constantly distracted by details in the backstory. Otherwise, a task being in context hurts more than it helps.

The solution: Start with problems in context, then simplify to a model that captures the essence of the context and eliminates unnecessary complexity and distraction. Joe Bergin has probably already written a pedagogical pattern for this, but I don't see it after a quick glance at some of his papers. I've heard teachers like Owen, Nick Parlante, and Julie Zelensky talk about this problem in a variety of settings, and they have some neat approaches to solving it.

Overshooting Your Mark in the Classroom

It is easy for teachers to dream bigger than they can deliver when they lose touch with the reality of teaching a course. I see this all the time when people talk about first-year CS courses -- including myself. In my piece on the Nifty Assignments session, I expressed disappointment that one of the assignments had a write-up of four pages and suggested that I might be able to get away with giving students only the motivating story and a five-line assignment statement. Right. It is more likely that the assignment's creator knows what he is doing from the experience of actually using the assignment in class. From the easy chairs of the Oregon Convention Center, everything looks easier. (I call this the Jeopardy! Effect.)

The risk of overshooting is even bigger when the instructor has not been in the trenches, ever or even for a long while. Mark Guzdial recently told the story of Richard Feynman's freshman physics course, which is a classic example of this phenomenon. Feynman wrote a great set of lectures, but they don't really work as a freshman text, except perhaps with the most elite students.

I recently ran across a link to a new CS1 textbook for C++ straight from Bjarne Stroustrup himself. Stroustrup has moved from industry to academia and has had the opportunity to develop a new course for freshmen. "We need to improve the education of our software developers," he says. When one of my more acerbic colleagues saw this, his response was sharp and fast: "Gee, that quick! Seems those of us in 'academia' don't catch on as well as the newbies."

For all I know, Stroustrup's text will be just what every school that wants to teach C++ in CS1 needs, but I am also skeptical. A lot of smart guys with extensive teaching experience -- several of them my friends -- have been working on this problem for a long time, and it's hard. I look forward to seeing a copy of the book and to hearing how it works for the early adopters.

Joe, is there a pedagogical pattern called "In the Trenches"? If not, there should be. Let's write it.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

July 31, 2008 12:12 PM

Small Programs and Limited Language

A recent entry mentioned that one advantage of short source code for beginners is a smaller space for errors. If a student writes only three lines of code, then any error in the program is probably on one of those three lines. That's better than looking for errors in a 100-line program, at least when the programmer is learning.

This assertion may seem like an oversimplification. What if the student writes a bunch of three-line procedures that call one another? Couldn't an error arise out of the interaction of multiple procedures, and thus lie far from the point at which it is discovered? Sure, but that is usually only a problem if the student doesn't know that each three-line procedure works. If we develop the habit of testing each small piece of code well, or even reasoning formally about its behavior, then we can have confidence in the individual pieces, which focuses the search for an error in the short piece of code that calls them.
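A minimal sketch of this habit in Python (my illustration, not code from any course): each tiny function is checked on its own before anything composes them, so a later failure must lie in the composing code.

```python
# Two hypothetical three-line helpers, each tested in isolation.
def average(numbers):
    """Return the mean of a non-empty list of numbers."""
    return sum(numbers) / len(numbers)

def spread(numbers):
    """Return the range (max - min) of a non-empty list."""
    return max(numbers) - min(numbers)

# Quick checks on each small piece...
assert average([2, 4, 6]) == 4
assert spread([2, 4, 6]) == 4

# ...so if this composition misbehaves, we search these two lines,
# not the helpers.
def summarize(numbers):
    return (average(numbers), spread(numbers))

assert summarize([2, 4, 6]) == (4, 4)
```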

This is, of course, one of the motivations behind the agile practices of taking small steps and creating tests for each piece of code as we go along. It is also why programming in a scripting language can help novices. The language provides powerful constructs, which allow the novice programmer to say a lot in a small amount of code. We can trust the language constructs to work correctly, and so focus our search for errors in the small bit of code.

Even still, it's not always as simple as it sounds. I am reminded of an article on a new course proposed by Matthias Felleisen, in which he argues for the use of a limited proof language. Even when we think that a 'real' language is small enough to limit the scope of errors students can make, we are usually surprised. Felleisen comments on the TeachScheme! experience:

... we used to give students the "simple language of first-order Lisp" and the code we saw was brutal. Students come up with the worst possible solution that you can imagine, even if you take this sentence into account in your predictions.

This led the TeachScheme! team to create a sequence of language levels that expose students to increasingly richer sets of ideas and primitives, culminating in the complete language. This idea has also been applied in the Java world, via DrJava. Another benefit of using limited teaching languages is that the interpreter or compiler can provide much more specific feedback to students at each level because it, too, can take advantage of the smaller space of possible errors.

Felleisen does not limit the idea of limited language to the programming language. He writes of carefully introducing students to the vocabulary we use to talk about programming:

Freshmen are extremely limited in their vocabulary and "big words" (such as 'specification' and 'implementation') seem to intimidate them. We introduce them slowly and back off often.

When I read this a couple of weeks ago, it troubled me a bit. Not because I disagree with what Felleisen says, but because it seems to conflict with something else I believe and blogged about a couple of weeks ago: speak to students in real language, and help the students grow into the language. I have had good experience with children, including my own, when talking about the world in natural language. What makes the experience of our students different?

As I write this, I am less concerned that these conflict. First, Felleisen mentions one feature of the CS1 experience that distinguishes it from my kids' experience growing up: fear. Children don't spend a lot of their time afraid of the world; they are curious and want to know more. They are knowledge sponges. CS1 students come out of a school system that tends to inculcate fear and dampen curiosity, and they tend to think computer science is a little scary -- despite wanting to major in it.

Second, when I speak to children in my usual vocabulary, I take the time to explain what words mean. Sometimes they ask, and sometimes I notice a quizzical, curious look on their faces. Elaboration of ideas and words gives us more to talk about (a good thing) and connects to other parts of their knowledge (also good). And I'm sure that I don't speak to kids using only thirteen-letter words; that's not the nature of regular life, at least in my house. In computing, jargon words of excessive length are the norm.

So I don't think there's a contradiction in these two ideas. Felleisen is reminding us to speak to students as if they are learners, which they are, and to use language carefully, not simplistically.

Even if there is a contradiction, I don't mind. It would not be the only contradiction I bear. Strange as it may sound, I try to be true to both of these ideas in my teaching. I try not to talk down to my students, instead talking to them about real problems and real solutions and cool ideas. My goal is to help students reach up to the vocabulary and ideas as they need, offering scaffolding in language and tools when they are helpful.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

July 30, 2008 12:40 PM

Scripting, CS1, and Language Theory

Yesterday, I wrote a bit about scripting languages. It seems odd to have to talk about the value of scripting languages in 2008, as Ronald Loui does in his recent IEEE Computer article, but despite their omnipresence in industry, the academic world largely continues to prefer traditional systems languages. Some of us would like to see this change. First, let's consider the case of novice programmers.

Most scripting languages lack some of the features of systems languages that are considered important for learners, such as static typing. Yet these "safer" languages also get in the way of learning, as Loui writes, by imposing "enterprise-sized correctness" on the beginner.

Early programmers must learn to be creative and inventive, and they need programming tools that support exploration rather than production.

This kind of claim has been made for years by advocates of languages such as Scheme for CS1, but those languages were always dismissed by "practical" academics as toy languages or niche languages. Those people can't dismiss scripting languages so easily. You can call Python and Perl toy languages, but they are used widely in industry for significant tasks. The new ploy of these skeptics is to speak of the "scripting language du jour" and to dismiss them as fads that will disappear while real languages (read: C) remain.

What scripting language would be the best vehicle for CS1? Python has had the buzz in the CS ed community for a while. After having taught a little PHP last semester, I would deem it too haphazard for CS1. Sure, students should be able to do powerful things, but the pocket-protected academic in me prefers a language that at least pretends to embody good design principles, and the pragmatist in me prefers a language that offers a smoother transition into languages beyond scripting. JavaScript is an idea I've seen proposed more frequently of late, and it is a choice with some surprising positives. I don't have enough experience with it to say much, but I am a little concerned about the model that programming in a browser creates for beginning students.

Python and Ruby do seem like the best choices among the scripting languages with the widest and deepest reach. As Loui notes, few people dislike either, and most people respect both, to some level. Both have been designed carefully enough to be learned by beginners and to support a reasonable transition as students move to the next level of the curriculum. Having used both, I prefer Ruby, not only for its OO-ness but also for how free I feel when coding in it. But I certainly respect the attraction many people have to Python, especially for its better developed graphics support.

Some faculty ask whether scripting languages scale to enterprise-level software. My first reaction is: For teaching CS1, why should we care? Really? Students don't write enterprise-level software in CS1; they learn to program. Enabling creativity and supporting exploration are more important than the speed of the interpreter. If students are motivated, they will write code -- a lot of it. Practice makes perfect, not optimized loop unrolling and type hygiene.

My second reaction is that these languages scale quite nicely to real problems in industry. That is why they have been adopted so widely. If you need to process a large web access log, you really don't want to use Java, C, or Ada. You want Perl, Python, or Ruby. This level of scale gives us access to real problems in CS1, and for these tasks scripting languages do more than well enough. Add to that their simplicity and the ability to do a lot with a little code, and student learning is enhanced.
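To make the log-processing example concrete, here is a hedged sketch in Python (the file format is hypothetical: the first whitespace-separated field of each line is the client address). The whole job fits in a dozen lines, which is exactly the point.

```python
# Count requests per client in an access-log-like stream of lines.
from collections import Counter
from io import StringIO

def hits_by_client(lines):
    counts = Counter()
    for line in lines:
        fields = line.split()
        if fields:                      # skip blank lines
            counts[fields[0]] += 1
    return counts

# A stand-in for a real log file.
sample = StringIO(
    "10.0.0.1 GET /index.html\n"
    "10.0.0.2 GET /about.html\n"
    "10.0.0.1 GET /style.css\n"
)
print(hits_by_client(sample).most_common(1))  # → [('10.0.0.1', 2)]
```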

Loui writes, "Indeed, scripting languages are not the answer for long-lasting, CPU-intensive nested loops." But then, Java and C++ and Ada aren't the answer for all the code we write, either. Many of the daily tasks that programmers perform lie in the space better covered by scripting languages. After learning a simpler language that is useful for these daily tasks, students can move on to larger-scale problems and learn the role of a larger-scale language in solving them. That seems more natural to me than going in the other direction.

Now let's consider the case of academic programming languages research. A lot of interesting work is being done in industry on the design and implementation of scripting languages, but Loui laments that academic PL research still focuses on syntactic and semantic issues of more traditional languages.

Actually, I see a lot of academic work on DSLs -- domain-specific languages -- that is of value. One problem is that this research is so theoretical that it is beyond the interest of programmers in the trenches. Then again, it's beyond the mathematical ability and interest of many CS academics, too. (I recently had to comfort a tech entrepreneur friend of mine who was distraught that he couldn't understand even the titles of some PL theory papers on the resume of a programmer he was thinking of hiring. I told him that the lambda calculus does that to people!)

Loui suggests that PL research might profitably move in a direction taken by linguistics and consider pragmatics rather than syntax and semantics. Instead of proving something more about type systems, perhaps a languages researcher might consider "the disruptive influence that Ruby on Rails might have on web programming". Studying how well "convention over configuration" works in practice might be of as much use as incrementally extending a compiler optimization technique. The effect of pragmatics research would further blur the line between programming languages and software engineering, a line we have seen crossed by some academics from the PLT Scheme community. This has turned out to be practical for PL academics who are interested in tools that support the programming process.

Loui's discussion of programming pragmatics reminds me of my time studying knowledge-based systems. Our work was pragmatic, in the sense that we sought to model the algorithms and data organization that expert problem solvers used, which we found to be tailored to specific problem types. Other researchers working on such task-specific architectures arrived at models consistent with ours. One particular group went beyond modeling cognitive structures to the sociology of problem solving, John McDermott's lab at Carnegie Mellon. I was impressed by McDermott's focus on understanding problem solvers in an almost anthropological way, but at the time I was too hopelessly in love with the algorithm and language side of things to incorporate this kind of observation into my own work. Now, I recognize it as the pragmatics side of knowledge-based systems.

(McDermott was well known in the expert systems community for his work on the pioneering programs R1 and XCON. I googled him to find out what he was up to these days but didn't find much, but through some publications, I infer that he must now be with the Center for High Assurance Computer Systems at the Naval Research Laboratory. I guess that accounts for the sparse web presence.)

Reading Loui's article was an enjoyable repast, though even he admits that much of the piece reflects old arguments from proponents of dynamic languages. It did have, I think, at least one fact off track. He asserts that Java displaced Scheme as the primary language used in CS1. If that is true, it is so only for a slender subset of more elite schools, or perhaps Scheme made inroads during a brief interregnum between Java and ... Pascal, a traditional procedural language that was small and simple enough to mostly stay out of the way of programmers and learners.

As with so many current papers, one of the best results of reading it is a reminder of a piece of classic literature, in this case Ousterhout's 1998 essay. I usually read this paper again each time I teach programming languages, and with my next offering of that course to begin in three weeks, the timing is perfect to read it again.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

July 29, 2008 2:03 PM

Scripting Languages, Software Development, and Novice Programmers

Colleague and reader Michael Berman pointed me to the July 2008 issue of IEEE Computer, which includes an article on the virtues of scripting languages, Ronald Loui's In Praise of Scripting: Real Programming Pragmatism. Loui's inspiration is an even more important article in praise of scripting, John Ousterhout's classic Scripting: Higher Level Programming for the 21st Century. Both papers tell us that scripting deserves more respect in the hierarchy of programming and that scripting languages deserve more consideration in the programming language and CS education communities.

New programming languages come from many sources, but most are created to fill some niche. Sometimes the niche is theoretical, but more often the creators want to be able to do something more easily than they can with existing languages. Scripting languages in particular tend to originate in practice, to fill a niche in the trenches, and grow from there. Sometimes, they come to be used just like a so-called general-purpose programming language.

When programmers have a problem that they need to solve repeatedly, they want a language that gives them tools that are "ready at hand". For these programming tasks, power comes from the level of abstraction provided by built-in tools. Usually these tools are chosen to fill the needs of a specific niche, but they almost always include the ability to process text conveniently, quickly, and succinctly.

Succinctness is a special virtue of scripting languages. Loui mentions the virtue of short source code, and I'm surprised that more people don't talk about the value of small programs. Loui suggests one advantage that I rarely see discussed: languages that allow and even encourage short programs enable programmers to get done with a task before losing motivation or concentration. I don't know how important this advantage is for professional programmers; perhaps some of my readers who work in the real world can tell me what they think. I can say, though, that, when working with university students, and especially novice programmers, motivation and concentration are huge factors. I sometimes hear colleagues say that students who can't stay motivated and concentrate long enough to solve an assignment in C++, Ada, or Java probably should not be CS majors. This seems to ignore reality, both of human psychology and of past experience with students. Not to mention the fact that we teach non-majors, too.

Another advantage of succinctness Loui proposes relates to programmer error. System-level languages include features intended to help programmers make fewer errors, such as static typing, naming schemes, and verbosity. But they also require programmers to spend more time writing code and to write more code, and in that time programmers find other ways to err. This, too, is an interesting claim if applied to professional software development. One standard answer is that software development is not "just" programming and that such errors would disappear if we simply spent more time up-front in analysis, modeling, and design. Of course, these activities add even more time and more product to the lifecycle, and create more space for error. They also put farther in the future the developers' opportunity to get feedback from customers and users, which in many domains is the best way to eliminate the most important errors that can arise when making software.

Again, my experience is that students, especially CS1 students, find ways to make mistakes, regardless of how safe their language is.

One way to minimize errors and their effects is to shrink the universe of possible errors. Smaller programs -- less code -- are one way to do that. It's harder to make as many or even the same kind of errors in a small piece of code. It's also easier to find and fix errors in a small piece of code. There are exceptions to both of these assertions, but I think that they hold in most circumstances.

Students also have to be able to understand the problem they are trying to solve and the tools they are using to solve it. This places an upper bound on the abstraction level we can allow in the languages we give our novice students and the techniques we teach them. (This has long been an argument made by people who think we should not teach OO techniques in the first year, that they are too abstract for the minds of our typical first-year students.) All other things equal, concrete is good for beginning programmers -- and for learners of all kinds. The fact that scripting languages were designed for concrete tasks means that we are often able to make the connection for students between the language's abstractions and tasks they can appreciate, such as manipulating images, sound, and text.

My biases resonate with this claim in favor of scripting languages:

Students should learn to love their own possibilities before they learn to loathe other people's restrictions.

I've always applied this sentiment to languages such as Smalltalk and Scheme which, while not generally considered scripting languages, share many of the features that make scripting languages attractive.

In this regard, Java and Ada are the poster children in my department's early courses. Students in the C++ track don't suffer from this particular failing as much because they tend not to learn C++ anyway, but a more hygienic C. These students are more likely to lose motivation and concentration while drowning in an ocean of machine details.

When we consider the problem of teaching programming to beginners, this statement by Loui stands out as well:

Students who learn to script early are empowered throughout their college years, especially in the crucial Unix and Web environments.

Non-majors who want to learn a little programming to become more productive in their disciplines of choice don't get much value at all from one semester learning Java, Ada, or C++. (The one exception might be the physics majors, who do use C/C++ later.) But even majors benefit from learning a language that they might use sooner, say, in a summer job. A language like PHP, JavaScript, or even Perl is probably the most valuable in this regard. Java is the one "enterprise" language that many of our students can use in the summer jobs they tend to find, but unfortunately one or two semesters are not enough for most of them to master enough of the language to be able to contribute much in a professional environment.

Over the years, I have come to think that even more important than usefulness for summer jobs is the usefulness a language brings to students in their daily lives, and the mindset it fosters. I want CS students to customize their environments. I want them to automate the tasks they do every day when compiling programs and managing their files. I want them to automate their software testing.

When students learn a big, verbose, picky language, they come to think of writing a program as a major production, one that may well cause more pain in the short term than it relieves in the long term. Even if that is not true, they look at the near-term pain and may think, "No, thanks." When students learn a scripting language, they can see that writing a program should be as easy as having a good idea -- "I don't need to keep typing these same three commands over and over", or "A program can reorganize this data file for me." -- and writing it down. A program is an idea, made manifest in an executable form. Programs can make our lives better. Of all people, computer scientists should be able to harness their power -- even CS students.
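The "reorganize this data file for me" idea really is only a few lines in a scripting language. Here is an illustrative Python sketch, assuming a hypothetical tab-separated file of name-and-score records that we want rewritten highest score first:

```python
# Reorganize tab-separated "name<TAB>score" records, highest score first.
def sort_by_score(lines):
    records = [line.strip().split("\t") for line in lines]
    records.sort(key=lambda rec: int(rec[1]), reverse=True)
    return records

# A stand-in for a real data file's contents.
lines = ["carol\t82", "alice\t95", "bob\t78"]

for name, score in sort_by_score(lines):
    print(f"{name}\t{score}")
```

That is the whole program: have the idea, write it down, run it.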

This post has grown to cover much more than I had originally planned, and taken more time to write. I'll stop here for now and pick up this thread of thought in my next entry.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

July 11, 2008 11:12 AM

Wadler on Abelson and Sussman

After reading Lockhart, I read Matthias Felleisen's response to Lockhart, and from there I read Matthias's design for a second course to follow How to Design Programs. From an unlinked reference there, I finally found A Critique of Abelson and Sussman, also known as "Why Calculating Is Better Than Scheming" (and also available from the ACM Digital Library). I'm not sure why I'd never run into this old paper before; it appeared in a 1987 issue of the SIGPLAN Notices. In any case, I am glad I did now, because it offers some neat insights on teaching introductory programming. Some of you may recall its author, Philip Wadler, from his appearance in this blog as Lambda Man a couple of OOPSLAs ago.

In this paper, Wadler argues that Structure and Interpretation of Computer Programs, which I have lauded as one of the great CS books, could be improved as a vehicle for teaching introductory programming by using a language other than Scheme. In particular, he thinks that four particular language features are helpful, if not essential:

  • pattern matching
  • a more mathematics-like syntax
  • types, both static and user-defined
  • lazy evaluation

Read the paper for an excellent discussion of each, but I will summarize. Pattern matching pulls the syntax of many decisions out of a single function and creates separate expressions for each. This is similar to writing separate functions for each case, and in some ways resembles function overloading in languages such as Java and C++. A syntax more like traditional math notation is handy when teaching students to derive expressions and to reason about values and correctness. Static typing requires code to state clearly the kinds of objects it manipulates, which eliminates a source of confusion for students. Finally, lazy evaluation allows programs to express meaningful ideas in a natural way without having the language enforce conclusions that are not strictly necessary. This can also be useful when doing derivation and proof, but it also opens the door to some cool applications, such as infinite streams.
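The infinite-stream idea can be sketched even outside a lazy language. Here is an illustrative stand-in using Python generators (my example, not Wadler's): the stream never terminates on its own, and `take` forces only as many elements as we ask for.

```python
# Lazy, "infinite" streams via generators.
import itertools

def ints_from(n):
    """Yield n, n+1, n+2, ... forever; nothing is computed until asked."""
    while True:
        yield n
        n += 1

def take(k, stream):
    """Force exactly the first k elements of a (possibly infinite) stream."""
    return list(itertools.islice(stream, k))

squares = (n * n for n in ints_from(1))   # an infinite stream of squares
print(take(5, squares))                   # → [1, 4, 9, 16, 25]
```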

We teach functional programming and use some of these concepts in a junior-/senior-level programming languages course, where many of Wadler's concerns are less of an issue. (They do come into play with a few students, though; Wadler might say we wouldn't have these problems if we taught our intro course differently!) But for freshmen, the smallest possibilities of confusion become major confusions. Wadler offers a convincing argument for his points, so much so that Felleisen, a Scheme guy throughout, has applied many of these suggestions in the TeachScheme! project. Rather than switching to a different language, the TeachScheme! team chose to simplify Scheme through a series of "teaching languages" that expose concepts and syntax just-in-time.

If you want evidence that Wadler is describing a very different way to teach introductory programming, consider this from the end of Section 4.1:

I would argue that the value of lazy evaluation outweighs the value of being able to teach assignment in the first course. Indeed, I believe there is great value in delaying the introduction of assignment until after the first course.

The assignment statement is like mom and apple pie in most university CS1 courses! The typical CS faculty could hardly conceive of an intro course without assignment. Abelson and Sussman recognized that assignment need not be introduced so early by waiting until the middle of SICP to use set!. But for most computer scientists and CS faculty, postponing assignment would require a Kuhn-like paradigm shift.

Advocates of OOP in CS1 encountered this problem when they tried to do real OOP in the first course. Consider the excellent Object-Oriented Programming in Pascal: A Graphical Approach, which waited until the middle of the first course to introduce if-statements. From the reaction of most faculty I know, you would have thought that Conner, Niguidula, and van Dam were asking people to throw away The Ten Commandments. Few universities adopted the text despite its being a wonderful and clear introduction to programming in an object-oriented style. As my last post noted, OOP causes us to think differently, and if the faculty can't make the jump in CS1 then students won't -- even if the students could.

(There is an interesting connection between the Conner, Niguidula, and van Dam approach and Wadler's ideas. The former postpones explicit decision structures in code by distributing them across objects with different behavior. The latter postpones explicit decision structures by distributing them across separate cases in the code, which look like overloaded function definitions. I wonder if CS faculty would be more open to waiting on if-statements through pattern matching than they were through the dynamic polymorphism of OOP?)

Wadler indicates early on that his suggestions do not presuppose functional programming, except perhaps for lazy evaluation. Yet they are not likely to have a wide effect on CS1 in the United States any time soon: even though they could be implemented in a course using an imperative language, most schools simply don't teach CS1 in a way compatible with these ideas. Still, we would be wise to take them to heart, as Felleisen did, and use them where possible to help us make our courses better.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

July 10, 2008 1:54 PM

Object-Oriented Algorithm Flashback

Most papers presented at SIGCSE and the OOPSLA Educators' Symposium are about teaching methods, not computational methods. When the papers do contain new technical content, it's usually content that isn't really new, just new to the audience or to mainstream use in the classroom. The most prominent example of the latter that comes to mind immediately is the series of papers by Zung Nguyen and Stephen Wong at SIGCSE on design patterns for data structures. Those papers were valuable in principle because they showed that how one conceives of containers changes when one is working with objects. In practice, they sometimes missed their mark because they were so complex that many teachers in the audience said, "Cool! But I can't do that in class."

However, the OOPSLA Educators' Symposium this year received a submission with a cool object-oriented implementation of a common introductory programming topic. Unfortunately, it may not have made the cut for inclusion based on some technical concerns of the committee. Even so, I was so happy to see this paper and to play with the implementation a little on the side! It reminded me of one of the first efforts I saw in a mainstream CS book to show how we think differently about a problem we all know and love when working with objects. That was Tim Budd's implementation of the venerable eight queens problem in An Introduction to Object-Oriented Programming.

Rather than implement the typical procedural algorithm in an object-oriented language, Budd created a solution that allowed each queen to solve the problem for herself by doing some local computation and communicating with the queen to her right. I remember first studying his code to understand how it worked and then showing it to colleagues. Most of them just said, "Huh?" Changing how we think is hard, especially when we already have a perfectly satisfactory solution for the problem in mind. You have to want to get it, and then work until you do.
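
From memory, the shape of Budd's solution looks something like the following Python sketch. The names and details are my own reconstruction, not his code: each queen owns one column, keeps her current row, and talks only to the neighboring queen.

```python
class Queen:
    """One queen per column; each solves her own subproblem by
    negotiating with the neighboring queen (after Budd's scheme)."""

    def __init__(self, column, neighbor=None):
        self.column = column      # fixed: which column this queen owns
        self.neighbor = neighbor  # the queen in the previous column, if any
        self.row = 1

    def find_solution(self):
        # Move until no queen to the left can attack this position.
        while self.neighbor and self.neighbor.can_attack(self.row, self.column):
            if not self.advance():
                return False
        return True

    def advance(self):
        if self.row < 8:
            self.row += 1
            return self.find_solution()
        # Out of rows: ask the neighbor to move, then start over at row 1.
        if self.neighbor and self.neighbor.advance():
            self.row = 1
            return self.find_solution()
        return False

    def can_attack(self, row, column):
        # Same row, or same diagonal (column offset == row offset)?
        if self.row == row or abs(self.row - row) == column - self.column:
            return True
        return bool(self.neighbor) and self.neighbor.can_attack(row, column)

# Build the queens left to right; each resolves conflicts as she arrives.
queen = None
for column in range(1, 9):
    queen = Queen(column, queen)
    queen.find_solution()

# Walk the chain to read off the placement.
solution = []
q = queen
while q:
    solution.append((q.column, q.row))
    q = q.neighbor
print(sorted(solution))
```

There is no central board and no loop over all queens when testing a position; the "algorithm" emerges from local computation and messages down the chain, which is what made Budd's version such an eye-opener.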

You can still find Budd's code from the "download area" link on the textbook's page, though you might find a more palatable version in the download area for the book's second edition. I just spent a few minutes creating a Ruby version, which you are welcome to. It is slightly Ruby-ized but mostly follows Budd's solution for now. (Note to self: have fun this weekend refactoring that code!)

Another thing I liked about "An Introduction to Object-Oriented Programming" was its linguistic ecumenism. All examples were given in four languages: Object Pascal, C++, Objective C, and Smalltalk. The reader could learn OOP without tying it to a single language, and Budd could point out subtle differences in how the languages worked. I was already a Smalltalk programmer and used this book as a way to learn some Objective C, a skill which has been useful again this decade.

(Budd's second edition was a step forward in one respect, by adding Java to the roster of languages. But it was also the beginning of the end. Java soon became so popular that the next version of his book used Java only. It was still a good book for its time, but it lost some of its value when it became monolingual.)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

July 07, 2008 12:48 PM

More on Problems and Art in Computer Science

Last week I wrote about an essay by Paul Lockhart from a few years ago that has been making the rounds this year. Lockhart lamented that math is so badly misrepresented in our schools that students grow up missing out on its beauty, while still not learning to perform the skills in whose name we have killed scholastic math. I've long claimed that we would produce more skilled students if we allowed them to approach these skills from the angle of engaging problems. For Lockhart, such problems come from the minds of students themselves and may have no connection to the "real world".

In computer science, I think letting students create their own problems is also quite valuable. It's one of the reasons that open-ended project courses and independent undergraduate research so often lead to an amazing level of learning. When a group of students wants to train a checkers-playing program to learn from scratch, they'll figure out ways to do it. Along the way, they learn a ton -- some of it material I would have selected for them, some beyond what I would have guessed.

The problems CS students create for themselves often do come straight out of their real world, and that's okay, too. Many of us in CS love abstract problems such as the why of Y, but most of us -- even the academics who make a living in the abstractions -- came to computing from concrete problems. I think I was this way, starting when I learned Basic in high school and wanted to make things, like crosstables for chess tournaments and ratings for the players in our club. From there, it wasn't that far a journey into Gödel, Escher, Bach and grad school! Along the way, I had professors and friends who introduced me to a world much larger than the one in which I wrote programs to print pages on which to record chess games.

This is one reason that I tout Owen Astrachan's problem-based learning project for CS. Owen is interested in problems that come from the real world, outside the minds of the crazy but harmless computer scientists he and I know, love, and are. These are the problems that matter to other people, which is good for the long-term prospects of our discipline and great for hooking the minds of kids on the beauty and power of computing. For computer science students, I am a proponent of courses built around projects, because they are big enough to matter to CS students and big enough to teach them lessons they can't learn working on smaller pieces of code.

With an orientation toward the ground, discussions of functional programming versus object-oriented programming seem almost not to matter. Students can solve any problem in either style, right? So who cares? Well, those of us who teach CS care, and our students should, too, but it's important to remember that this is an inward-looking discussion that won't mean much to people outside of CS. It also won't matter much to our students as they first begin to study computer science, so we can't turn our first-year courses into battlegrounds of ideology. We need to be sure that, whatever style we choose to teach first, we teach it in a way that helps students solve problems -- and create the problems that interest them. The style needs to feel right for the kind of problems we expose them to, so that the students can begin to think naturally about computational solutions.

In my department we have for more than a decade introduced functional programming as a style in our programming languages course, after students have seen OOP and procedural programming. I see a lot of benefit in teaching FP sooner, but that would not fit our faculty all that well. (The students would probably be fine!) Functional programming has a natural home in our languages course, where we teach it as an especially handy way of thinking about how programming languages work. This is a set of topics we want students to learn anyway, so we are able to introduce and practice a new style in the context of essential content, such as how local variables work and how to write a parser. If a few students pick up on some of the beautiful ideas and go do something crazy, like fire up a Haskell interpreter and try to grok monads, well, that's just fine.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

July 04, 2008 8:37 AM

Science, Education, and Independent Thought

I wrote about a recent CS curricular discussion, which started with a blog posting by Mark Guzdial. Reading the comments to Guzdial's post is worth the time, as you'll find a couple of lengthy remarks by Alan Kay. As always, Kay challenges even computer science faculty to think beyond the boundaries of our discipline, to the role that what our students learn from us plays in a democratic world.

One of Kay's comments caught my attention for connections to a couple of things I've written about in recent years. First, consider this:

I posit that this is still the main issue in America. "Skilled children" is too low a threshold for our system of government: we need "educated adults". ... I think the principle is clear and simple: there are thresholds that have to be achieved before one can enter various conversations and processes. "Air guitar and attitude" won't do.

Science is a pretty good model (and it was used by the framers of the US). It is a two level system. The first level has to admit any and all ideas for consideration (to avoid dogma and becoming just another belief system). But the dues for "free and open" are that science has built the strongest system of critical thinking in human history to make the next level threshold for "worthy ideas" as high as possible. This really works.

This echoes the split mind of a scientist: willing to experiment with the widest set of ideas we can imagine, then setting the highest standard we can imagine for accepting the idea as true. As Kay goes on to say, this approach is embedded in the fabric of the American mentality for free society and government. This is yet another good reason for all students to learn and appreciate modern science; it's not just about science.

Next, consider this passage that follows soon after:

"Air guitar" is a metaphor for choosing too tiny a subset of a process and fooling oneself that it is the whole thing. ... You say "needs" and I agree, but you are using it to mean the same as "wants", and it is simply not the case that education should necessarily adapt to the "wants" of students. This is where the confusion of education and marketing enters. The marketeers are trying to understand "wants" (and even inject more) and cater to them for a price; real educators are interested in "needs" and are trying to fulfill these needs. Marketeers are not trying to change but to achieve fit; educators are trying to change those they work with. Introducing marketing ideas into educational processes is a disaster in the making.

I've written occasionally about ideas from marketing, from the value of telling the right story to the creation of new programs. I believe those things and think that we in academia can learn a lot from marketers with the right ideas. Further, I don't think that any of this is in conflict with what Kay says here. He and I agree that we should not change our curriculum to cater solely to the perceptions and base desires of our clientele, whether students, industry, or even government. My appeal to marketing for inspiration lies in finding better ways to communicate what we do and offer and in making sure that what we do and offer are in alignment with the long-term viability of the culture. The best companies are in business for the long haul and must stay aligned with the changing needs of the world.

Further, as I am certain Kay will agree based on many of the things he has said about Apple of the 1980s, the very best companies create and sell products that their customers didn't even know they wanted. We in academia might learn something from the Apples of our world about how to provide the liberal and professional education that our students need but don't realize they need. The same goes for convincing state legislatures and industry when they view too short a horizon for what we do.

Like Kay, I want to give my students "real computing" and "real education".

I think it is fitting and proper to talk about these issues on Independence Day in the United States. We depend on education to preserve the democratic system in which we live and the values by which we live. But there's more. Education -- including, perhaps especially, science -- creates freedom in the student. The mind becomes free to think greater thoughts and accomplish greater deeds when it has been empowered with our best ideas. Science is one.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

July 02, 2008 10:23 AM

Math and Computing as Art

So no, I'm not complaining about the presence
of facts and formulas in our mathematics classes,
I'm complaining about the lack of mathematics
in our mathematics classes.
-- Paul Lockhart

A week or so ago I mentioned reading a paper called A Mathematician's Lament by Paul Lockhart and said I'd write more on it later. Yesterday's post, which touched on the topic of teaching what is useful reminded me of Lockhart, a mathematician who stakes out a position that is at once diametrically opposed to the notion of teaching what is useful about math and yet grounded in a way that our K-12 math curriculum is not. This topic is especially salient for me right now because our state is these days devoting some effort to the reform of math and science education, and my university and college are playing a leading role in the initiative.

Lockhart's lament is not that we teach mathematics poorly in our K-12 schools, but rather that we don't teach mathematics at all. We teach definitions, rules, and formal systems that have been distilled away from any interesting context, in the name of teaching students skills that will be useful later. What students do in school is not what mathematicians do, and that's a shame, because mathematics is fun, creative, beautiful -- art.

As Lockhart described his nightmare of music students not being allowed to create or even play music, having to copy and transpose sheet music, I cringed, because I recognized how much of our introductory CS courses work. As he talked about how elementary and HS students never get to "hear the music" in mathematics, I thought of Brian Greene's Put a Little Science in Your Life, which laments the same problem in science education. How have we managed to kill all that is beautiful in these wonderful ideas -- these powerful and even useful ideas -- in the name of teaching useful skills? So sad.

Lockhart sets out an extreme stance. Make math optional. Don't worry about any particular content, or the order of topics, or any particular skills.

Mathematics is the music of reason. To do mathematics is to engage in an act of discovery and conjecture, intuition and inspiration; to be in a state of confusion--not because it makes no sense to you, but because you gave it sense and you still don't understand what your creation is up to; to have a breakthrough idea; to be frustrated as an artist; to be awed and overwhelmed by an almost painful beauty; to be alive, damn it.

I teach computer science, and this poetic sense resonates with me. I feel these emotions about programs all the time!

In the end, Lockhart admits that his position is extreme: the pendulum has swung so far to the "useful skills" side of the continuum that he feels a need to shout out for the "math is beautiful" side. Throughout the paper he tries to address objections, most of which involve our students not learning what they need to know to be citizens or scientists. (Hint: Does anyone really think that most students learn that now? How much worse off could we be to treat math as art? Maybe then at least a few more students would appreciate math and be willing to learn more.)

This paper is long-ish -- 25 pages -- but it is a fun read. His screed on high school geometry is unrestrained. He calls geometry class "Instrument of the Devil" because it so thoroughly and ruthlessly kills the beauty of proof:

Other math courses may hide the beautiful bird, or put it in a cage, but in geometry class it is openly and cruelly tortured.

His discussion of proof as a natural product of a student's curiosity and desire to explain an idea is as well written as any I've read. It extends another idea from earlier in the paper that fits quite nicely with something I have written about computer science: Mathematics is the art of explanation.

By concentrating on what, and leaving out why, mathematics is reduced to an empty shell. The art is not in the "truth" but in the explanation, the argument. It is the argument itself which gives the truth its context, and determines what is really being said and meant. Mathematics is the art of explanation. If you deny students the opportunity to engage in this activity--to pose their own problems, make their own conjectures and discoveries, to be wrong, to be creatively frustrated, to have an inspiration, and to cobble together their own explanations and proofs--you deny them mathematics itself.

I am also quite sympathetic to one of the other themes that runs deeply in this paper:

Mathematics is about problems, and problems must be made the focus of a student's mathematical life.

(Ditto for computer science.)

... you don't start with definitions, you start with problems. Nobody ever had an idea of a number being "irrational" until Pythagoras attempted to measure the diagonal of a square and discovered that it could not be represented as a fraction.

Problems can motivate students, especially when students create their own problems. That is one of the beautiful things about math: almost anything you see in the world can become a problem to work on. It's also true of computer science. Students who want to write a program to do something -- play a game, predict a sports score, track their workouts -- will go out of their way to learn what they need to know. I'm guessing anyone who has taught computer science for any amount of time has experienced this first hand.

As I've mentioned here a few times, my colleague Owen Astrachan is working on a big project to explore the idea of problem-based learning in CS. (I'm wearing the project's official T-shirt as I type this!) This idea is also right in line with Alan Kay's proposal for an "exploratorium" of problems for students who want to learn to communicate via computation, which I describe in this entry.

I love this passage from one of Lockhart's little dialogues:

SALVIATI:     ... people learn better when the product comes out of the process. A real appreciation for poetry does not come from memorizing a bunch of poems, it comes from writing your own.

SIMPLICIO:     Yes, but before you can write your own poems you need to learn the alphabet. The process has to begin somewhere. You have to walk before you can run.

SALVIATI:     ... No, you have to have something you want to run toward.

You just have to have something you want to run toward. For teenaged boys, that something is often a girl, and suddenly the desire to write a poem becomes a powerful motivator. We should let students find goals to run toward in math and science and computer science, and then teach them how.

It's interesting that I end with a running metaphor, and not just because I run. My daughter is a sprinter and now hurdler on her school track team. She sprints because she likes to run short distances and hates to run anything long (where, I think, "long" is defined as anything longer than her race distance!). The local runners' club leads a summer running program for high school students, and some people thought my daughter would benefit. One benefit of the program is camaraderie; one drawback is that it involves serious workouts. Each week the group does a longer run, a day of interval training, and a day of hill work.

I suggested that she might benefit more from simply running more -- not doing workouts that kill her, just building up a base of mileage and getting stronger while enjoying some longer runs. My experience is that it's possible to get over the hump and go from disliking long runs to enjoying them. Then you can move on to workouts that make you faster. So she and I are going to run together a couple of times a week this summer, taking it easy, enjoying the scenery, chatting and otherwise not stressing about "long runs".

There is an element of beauty versus duty in learning most things. When the task is all duty, you may do it, but you may never like it. Indeed, you may come to hate it and stop altogether when the external forces that keep you on task (your teammates, your sense of belonging) disappear. When you enjoy the beauty of what you are doing, everything else changes. So it is with math, I think, and computer science, too.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Running, Teaching and Learning

July 01, 2008 4:21 PM

A Small Curricular Tempest

A couple of weeks ago I linked to Shriram Krishnamurthi, who mentioned a recent SIGPLAN-sponsored workshop that has proposed a change to ACM's curriculum guidelines. The change is quite simple, shifting ten hours of instruction in programming languages from small topics into a single ten-hour category called "functional programming". Among the small topics that would be affected, coverage of recursion and event-driven programming would be halved, and coverage of virtual machines and language translation would no longer be mandated separately, nor would an "overview" of programming languages.

In practice, the proposal to eliminate coverage of some areas has less effect than you might think. Recursion is a natural topic in functional programming, and event-driven programming is a natural topic in object-oriented programming. The current recommendation of three hours total to cover virtual machines and language translation hardly does them justice anyway; students can't possibly learn any of the valuable ideas in depth in that amount of time. If schools adopt this change, they would spend that time more productively, helping students to understand functional programming well. Many schools will probably continue to teach those topics as part of their principles of programming languages course anyway.

I didn't comment on the proposal in detail earlier because it seemed more like the shuffling of deck chairs than a major change in stance. I do approve of the message the proposal sends, namely that functional programming is important enough to be a core topic in computer science. Readers of this blog already know where I stand on that.

Earlier this week, though, Mark Guzdial blogged Prediction and Invention: Object-oriented vs. functional, which has created some discussion in several circles. He starts with "The goal of any curriculum is to prepare the students for their future." Here is my take.

Mark seems to be saying that functional programming is not sufficiently useful to our students to make it a core programming topic. Mandating that schools teach ten hours each of functional and object-oriented programming, he thinks, tells our students that we faculty believe functional programming is -- or will be -- as important as object-oriented programming to their professional careers. Our students get jobs in companies that primarily use OO languages and frameworks, and our curricula should reflect that.

This piece has a vocational tone that I find surprising coming from Mark, and that is perhaps what most people are reacting to when they read it. When he speaks of making sure the curriculum teaches what is "real" to students, or how entry-level programmers often find themselves modifying existing code with an OO framework, it's easy to draw a vocational theme from his article. A lot of academics, especially computer scientists, are sensitive to such positions, because the needs of industry and the perceptions of our students already exert enough pressure on CS curriculum. In practical terms, we have to find the right balance between practical skills for students and the ideas that underlie those skills and the rest of computing practice. We already know that, and "esoteric" topics such as functional programming and computing theory are already part of that conversation.

Whether Mark is willing to stand behind the vocational argument or not, I think there is another theme in his piece that also requires a balance he doesn't promote. It comes back to the role of curriculum guidelines in shaping what schools teach and expressing what we think students should learn. Early on, he says,

I completely disagree that we should try to mandate that much functional programming through manipulation of the curriculum standards.

And later:

Then, when teaching more functional programming becomes a recognized best practice, it will be obvious that it should be part of the curriculum standards.

The question is whether curriculum standards should be prescriptive or descriptive. Mark views the current SIGPLAN proposal as prescribing an approach that contradicts both current best practice and the needs of industry, rather than describing best practice in schools around the country. And he thinks curriculum standards should be descriptive.

I am sensitive to this sort of claim myself, because -- like Mark! -- I have been contending for many years with faculty who think OOP is a fad and has no place in a CS curriculum, or at least in our first-year courses. These faculty, both at my university and throughout the country, argue that our courses should be about what students "really do" in the world, not about esoteric design patterns and programming techniques. In the end, these people end up claiming that people like me are trying to prescribe a paradigm for how our students should think.

The ironic thing, of course, is that over the last fifteen years OOP and Java have gone from being something new to the predominant tools in industry. It's a good thing that some schools started teaching more OOP, even in the first year, and developing the texts and teaching materials that other schools could use to join in later.

(The people arguing against OOP in the first year have not given up the case; they've now shifted to claiming that we should teach even Java "fundamentals first", going "back to basics" before diving into all that complicated stuff about data and procedures bearing some relation to one another. I've written about that debate before and have tremendous respect for many of the people on the front line of "basics" argument. I still disagree.)

As in the case of vocational versus theoretical content, I think we need to find the right balance between prescriptive and descriptive curriculum standards. These two dimensions are not wholly independent of each other, but they are different and so call for different balances. I agree with Mark that at least part of our curriculum standard should be descriptive of current practice, both in universities and in industry. Standard curricular practice is important in helping to create some consistency across universities and helping to keep schools who are out of the know on a solid and steady path. And the simple fact is that our students do graduate into professional careers and need to be prepared to participate in an economy that increasingly depends on information technology. For those of us at state-supported universities, this is a reasonable expectation of the people who pay our bills.

However, I think that we also need some prescriptive elements to our curricula. As Alan Kay says in a comment on Mark's blog, universities have a responsibility not only to produce graduates capable in participating in the economy but also to help students become competent, informed citizens in a democracy. This is perhaps even more important at state-supported universities, which serve the citizenry of the state. This may sound too far from the ground when talking about computer science curriculum, but it's not. The same ideas apply -- to growing informed citizens, and to growing informed technical professionals.

The notion that curriculum standards are partly prescriptive is not all that strange, because it's not that different from how curriculum standards have worked in the past, really. Personally, I like having experts in areas such as programming languages and operating systems helping us keep our curricular standards up to date. I certainly value their input for what they know to be current in the field. I also value their input because they know what is coming, what is likely to have an effect on practice in the near future, and what might help students understand better the more standard content we teach.

At first I had a hard time figuring out Mark's position, because I know him to grok functional programming. Why was he taking this position? What were his goals? His first paragraph seems to lay out his goal for the CS curriculum:

The goal of any curriculum is to prepare the students for their future. In just a handful of years, teachers aim to give the students the background to be successful for several decades.

He then recognizes that "the challenge of creating a curriculum is the challenge of predicting the future."

These concerns seem to sync quite nicely with the notion of encouraging all students to learn a modicum about functional programming! I don't have studies to cite, but I've often heard and long believed that the more different programming styles and languages a person learns, the better a programmer she will be. Mark points to studies showing little direct transfer from skills learned in one language to skills learned in another, and I do not doubt their truth. But I'm not even talking about direct transfer of knowledge from functional programming to OOP; I'm thinking of the sort of expansion of the mind that happens when we learn different ways to think about problems and implement solutions. A lot of the common OO design patterns borrow ideas from other domains, including functional programming. How can we borrow interesting ideas if we don't know about them?

It is right and good that our curriculum standards push a little beyond current technical and curricular practice, because then we are able to teach ideas that can help computing evolve. This evolution is just as important in the trenches of a web services group at an insurance company as it is to researchers doing basic science. In the particular case of functional programming, students learn not only beautiful ideas but also powerful ideas, ideas that are germinating now in the development of programming languages in practice, from Ruby and Python to .NET. Our students need those ideas for their careers.

As I mentioned, Alan Kay chimed in with a few ideas. I think he disagrees that we can't predict the future by inventing it through curriculum. His idealism on these issues seems to frustrate some people, but I find it refreshing. We can set our sights higher and work to make something better. When I used the allusion to "shuffling the deck chairs" above, I was thinking of Kay, who is on record as saying that how we teach CS is broken. He has also talked to CS educators and exhorted us to set our sights higher. Kay supports the idea of prescriptive curricula for a number of reasons, the most relevant of which to this conversation is that we don't want to hard-code accidental or misguided practice, even if it's the "best" we have right now. Guzdial rightly points out that we don't want to prescribe new accidental or misguided practices, either. That's where the idea of striking a balance comes in for me. We have to do our best to describe what is good now and prescribe at least a little of what is good for the future.

I see no reason that we can't invent good futures through judiciously defined curricula, just as we invent futures in other arenas. Sure, we face social, societal, and political pressures, but how many arenas don't?

So, what about the particular curriculum proposal under discussion? Unlike Guzdial, I like the message it sends, that functional programming is an important topic for all CS grads to learn about. But in the end I don't think it will cause any dramatic changes in how CS departments work. I used the word "encourage" above rather than Guzdial's more ominous "mandate", because even ACM's curriculum standards have no force of law. Under the proposed plan, a few schools might try to present a coherent treatment of functional programming where now they don't, at the expense of covering a few good ideas at a shallow level. There will continue to be plenty of diversity, one of the values that guides Guzdial's vision. On this, he and I agree strongly. Diversity in curricula is good, both for the vocational reasons he asserts but also because we learn even better how to teach CS well from the labors and explorations of others.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

June 30, 2008 8:50 AM

The Other OOPSLA Submission

In March I talked about a couple of OOPSLA submissions written by our merged ChiliPLoP groups. In May I wrote about the verdict on one but forgot to mention the other. Maybe because the rejection was so much more interesting!

Anyway, our second submission was accepted into the Educators' Symposium. It is not a paper, really, but an extended abstract for an activity we will run: a code review recast as it might happen in a software studio. We hope to give participants a snapshot of what a studio-based course looks, feels, and works like. This is something instructors can try on a small scale in class and, if it works for them, expand throughout their course. Even if code reviews are as far as they go, we co-authors think that this will be a useful step for many instructors. The activity draws on our experiences in the writers' workshops of PLoP and helps students to think about the many design choices they make when writing software, and to make them reflectively rather than subconsciously.

The real trick to this activity will be the homework we give before the symposium:

Before coming to OOPSLA, Educators' Symposium participants will be asked to submit a program, in a language of their choice (though using only standard libraries), which implements the core of a program to generate Tag Clouds from a data set. ...

My experience with many workshops in the past and especially with the Educators' Symposium is that participants never do this kind of homework. Some are well-intentioned but never make time for it, while others figure they can skate by without having written the code. (Sounds as if professors are a lot like their students, huh?) Without code to review, a code review doesn't get very far. We hope that we can find a way to encourage symposium attendees to overcome history and come with some code to workshop.
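For readers who would like to try the homework themselves, the core of a tag-cloud generator is small: count word frequencies, then map each count onto a font size. Here is a minimal Python sketch under my own assumptions (a simple word tokenizer and a linear size scale); the function name `tag_weights` and its parameters are my illustration, not part of the actual assignment:

```python
import re
from collections import Counter

def tag_weights(text, max_tags=20, min_size=10, max_size=48):
    """Count word frequencies and map each of the top tags to a
    font size, scaling linearly between min_size and max_size."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words).most_common(max_tags)
    if not counts:
        return {}
    hi = counts[0][1]            # highest frequency
    lo = counts[-1][1]           # lowest frequency among the top tags
    span = (hi - lo) or 1        # avoid dividing by zero when all tie
    return {word: min_size + (count - lo) * (max_size - min_size) // span
            for word, count in counts}

weights = tag_weights("the quick fox and the lazy dog and the fox")
# 'the' is most frequent, so it receives the largest size (48)
```

A real solution would also filter stop words and emit HTML or some other renderable form, but a kernel like this is the kind of code participants would bring to the review.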


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

June 27, 2008 3:15 PM

Notes to Your Future Self

I received my copy of the student evaluations from my spring course, which was really three courses that felt like one to me and a few common students. One of the open-ended questions on the form my university uses asks the student to complete this blank:

I could have improved my learning in this course by _____.

After teaching for a while now, I can predict what students will say in this section. This semester was a perfect example.

The second most common answer is a variation of ...

reading the textbook

It doesn't matter what book I assign; how big it is; whether it is mostly exposition or mostly code; or whether it's written in an academic style or in the vernacular. A few students read the book, and a few of those will typically praise it, but most don't read much if any of it.

The most common answer is a variation of (drumroll, please) ...

starting the homework assignments sooner

Sooner, so they had more time to work on them. Sooner, so that they could ask questions when they were stumped. Sooner for all sorts of reasons, but always sooner.

Neither of these bits of wisdom are new to them. If nothing else, they have heard me exhort them to start sooner and to read more. I suppose I could do more to encourage these behaviors, such as giving quick quizzes every day over the reading, but most strategies feel wrong. Do either students or I really want to turn the class into a treadmill of points for their grade? Do I want to have to grade all that stuff? Aren't they adults?

For homework assignments, one thing we instructors can do to encourage beginning sooner is to have short iterations with frequent submission. In my spring course, I had small programming exercises due weekly. I could have tried biweekly submissions, or a problem every other day. But that feels like micromanagement to me. Students have other courses and work schedules to consider, and I like to give them at least a few days to navigate through their various duties. And besides, aren't they adults?

I don't think that students want to be micromanaged with quizzes and deadlines, and I do think they genuinely mean well. Maybe the problem is that they view the comments they make on student evaluations as backward-looking, when instead they should think of them as forward-looking. Perhaps we should phrase the question as

I will improve my learning in future courses by _____.

The student's evaluation of the instructor would then be more like a retrospective that benefits the student as much as the instructor, because it asks both parties to consider how they can improve their results in the "next iteration" -- the next courses they take. These comments would be about the future, not the past.

(Ooh, it just occurred to me: perhaps I could try a brief something after one of the assignments early in the course that plays the role of retrospective. Maybe if students consciously consider the value of starting sooner and reading the text during the course they can change their behavior soon enough to affect their performance. This would create more frequent futures!)

What did I learn from these evaluations that will help me in the future? In every course, there is a set of common answers to the item "My learning in this course would have improved if the instructor had _____.", but for this item there are also usually an answer or two that stand out as particularly salient or which teach me something new.

From this course, students reminded me of the value in reviewing solutions to homework assignments soon after they submit their solutions. This is always a valuable tactic, and especially in a course where students are learning new languages and styles of programming. The best way to learn is to see different solutions, including ones that demonstrate idiomatic usage. It gives students a way to compare their solutions and to learn from differences. I let the fact that most of the students this semester were juniors and seniors with a fair amount of programming experience convince me that maybe we could get by without this sort of follow-up. But that was probably just a juicy rationalization that allowed me to "squeeze in more material", to the detriment of my students' learning.

I have written about that before in my blog. This entry is now an example of how I am sometimes no better than my students at practicing what I recommend in a retrospective! Let's see if I can do better at this in the fall.

The rest of my evaluation data was pretty good -- a nice ego boost in a week where I needed one. Consider this an open "thank you" to the students who take time to write both suggestions for improvement and positive comments that let me know that, on the whole, they like my courses. Those positive comments help keep us instructors motivated just as much as positive feedback helps students feel better about hanging in there.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 10, 2008 4:09 PM

Fall Semester Seems Far Away Right Now

I'm not yet thinking about my Programming Languages course this fall, but I wish I were. The first temptation came via my newsreader and Martin Fowler's recent piece ParserFear. Martin is writing a book on domain-specific languages and blogging occasionally on his ideas. This article is about just what the title says, programmers' often irrational fears of rolling a parser, which deter them from implementing their own DSLs. He speculates:

So why is there an unreasonable fear of writing parsers for DSLs? I think it boils down to two main reasons.

  • You didn't do the compiler class at university and therefore think parsers are scary.
  • You did do the compiler class at university and are therefore convinced that parsers are scary.

The first is easy to understand, people are naturally nervous of things they don't know about. The second reason is the one that's interesting. What this boils down to is how people come across parsing in universities.

CS students tend to learn about parsing for the first time in a compiler class, working on a large language with a full range of constructs. Parsing such languages is hard. In a few compiler courses, including my own, students still learn to build a parser by hand and so don't even have the chance to use parser generators as a labor- and pain-saving device. That's okay in a compiler course, which students take in large part to learn how their tools really work.

Martin doesn't suggest that we change the compiler course, and I don't either (though I'm open to possibilities). He does seem to think it's a shame that students are turned off to parsing and language design by first, and perhaps only, seeing them at their most complex. I agree and think that we should do more to introduce these ideas to students earlier.

I introduce students to the idea of parsing in my Programming Languages course, and ask students to write a few very small parsers to handle simple languages. I've been thinking for a while that I should do more in this area, and reading Martin's article has me itching to redesign the latter part of my course to allow more work with parsing and parsers. Another possibility is to use parsing as the content of some of our early programming exercises, when students are nominally learning to program in a functional style. Students can certainly apply the programming techniques they are learning to translate simple data formats into more abstract forms. This might help them to begin to see that parsing is an idea broader than just general-purpose programming languages. It might ease their transition a couple of weeks later to the idea of syntax-as-data structure and allow us to do some simple work with DSLs.

I have tried the idea of parsing at its simplest with relative novices as early as CS 1. When I taught a media computation CS1, one of the last programming assignments was to write a program to read a file of simple graphics commands and produce the desired graphical image. The idea was to bypass Java's verbose syntax for doing simple AWT/Swing graphics and allow non-programmers (artists) to make images. I asked students to implement a couple of simple commands, such as "draw line", and create at least one command of their own. I expected all of their graphics languages to be "straight-line", with no control or data structures, but that didn't mean that the resulting DSLs were not useful. They were just simple. A couple of students did really interesting work, creating very high-level commands for swirls and splashes. These students wrote methods to interpret those command using control and data structures that their "programmers" didn't have to know about.
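To make the idea concrete, here is a hedged Python sketch of such a straight-line graphics interpreter. The commands and names are my own illustration, not the actual assignment; the interpreter returns abstract line segments so that any back end (AWT/Swing, turtle graphics, SVG) could draw them:

```python
import math

def interpret(script):
    """Interpret a tiny straight-line graphics DSL, returning a list
    of line segments (x1, y1, x2, y2) for a back end to draw."""
    segments = []

    def line(x1, y1, x2, y2):
        segments.append((float(x1), float(y1), float(x2), float(y2)))

    def polygon(cx, cy, r, n):
        # A higher-level command built on the 'line' primitive,
        # hiding loops and data structures the DSL itself lacks.
        cx, cy, r, n = float(cx), float(cy), float(r), int(n)
        pts = [(cx + r * math.cos(2 * math.pi * i / n),
                cy + r * math.sin(2 * math.pi * i / n)) for i in range(n)]
        for (ax, ay), (bx, by) in zip(pts, pts[1:] + pts[:1]):
            line(ax, ay, bx, by)

    commands = {"line": line, "polygon": polygon}
    for raw in script.splitlines():
        tokens = raw.split()
        if tokens:                      # skip blank lines
            commands[tokens[0]](*tokens[1:])
    return segments

segs = interpret("line 0 0 100 100\npolygon 50 50 10 4")
```

The `polygon` command plays the role of the students' high-level swirl-and-splash commands: the interpreter uses control and data structures that the DSL's "programmers" never have to know about.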

Any change to my Programming Languages course needs to be "footprint-neutral", to use Shriram Krishnamurthi's phrase. Anything I add has to fit in my fifteen-week course and either displace existing material or work in parallel. Shriram used this phrase in the broader context of rejiggering the teaching of programming languages within the core ACM curriculum, which a recent SIGPLAN-sponsored workshop tried to do. After having just read Martin's article on parsing, I was eager to refresh my memory on where parsing fits into the core curriculum proposal. Parsing falls under knowledge unit PL3, Language Translation, which has two hours allotted to it in the core. (An optional knowledge unit on Language Translation Systems includes more.) Interestingly, the working group Shriram reports on recommends cutting those two hours to zero, on the grounds that the current coverage is too superficial, and using those hours to build up a 10-hour unit on Functional Programming. That's a worthy goal, though I haven't had a chance to think deeply about the proposal yet.

Working with constrained resources sometimes requires making tough choices. I know that Shriram and the people with whom he worked think that parsing is a worthwhile topic for CS students to know, so perhaps they have in mind something like what I suggested above: piggybacking some coverage of parsing on top of the coverage of functional programming. In any case, I think I'll work on finding more ways for my Programming Languages students to engage parsing and domain-specific languages.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

May 28, 2008 1:14 PM

Off to Visit Google

I'm preparing for a quick visit to the Google campus tomorrow and Friday. This is my first trip to the Google campus, and I have to admit that I'm looking forward to it. To this wide-eyed Midwestern computer scientist, it feels as if I am visiting Camelot.

The occasion of my trip is a "roadshow summit" co-sponsored by the Computer Science Teachers Association and SIGCSE, and hosted by Google. The CSTA is a unit of the ACM "that supports and promotes the teaching of computer science and other computing disciplines" in K-12 schools. The goal of the workshop is:

... to bring together faculty and students who are currently offering, or planning to develop, outreach "road shows" to local K-12 schools. Our goal is to improve the quality and number of college and university-supported careers and equity outreach programs by helping to develop a community that will share research, expertise, and best practices, and create shared resources.

My initial, selfish goal in wanting to attend the workshop was to steal lots of good ideas from people with more experience and creativity than I have. My contribution will be to share what we have done in our department, especially over the last semester. I asked two faculty members to develop curricula for K-12 outreach activities, in lieu of one of their usual course assignments. The curriculum materials should be useful whether we take them on the road to the schools or when we have students on campus for visits. One professor started with robotics in mind but quickly switched to some simple programming activities with the Scratch programming environment. The other worked on high-performance and parallel computing for pre-college students, an education thread he has been working in for much of this decade. I do not have a link to materials he developed specifically for our outreach efforts yet, but I can point you to LittleFe, one of his ongoing projects.

I'm curious to see what other schools have done and still plan to steal as many ideas as I can! And, while I'm looking forward to the workshop and seeing Google's campus, I am not looking forward to the fast turnaround... My flight leaves tomorrow morning; we work Thursday afternoon, Thursday evening, Friday morning, and Friday early afternoon; and then I start the sojourn back home. I'll cover a lot of miles in forty-eight hours, but I hope they prove fruitful.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

May 23, 2008 3:17 PM

The Split Mind of a Scientist

Someone pointed me toward a video of a talk given at Google by John Medina on his new book Brain Rules. I enjoyed the talk and will have to track down a copy of the book. Early on, he explains that the way we have designed our schools and workplaces produces the worst possible environments for us to learn and work in. But my favorite passage came near the end, in response to the question, "Do you believe in magic?"

Hopefully I'm a nice guy, but I'm a really grumpy scientist, and in the end, I'm a reductionist. So if you can show me, [I'll believe it]. As a scientist, I have to be grumpy about everything and be able to be willing to believe anything. ... If you care what you believe, you should never be in the investigative fields -- ever. You can't care what you believe; you just have to care what's out there. And when you do that, your bandwidth is as wide as that sounds, and the rigor ... has to be as narrow as the biggest bigot you've ever seen. Both are resident in a scientist's mind at the same time.

Yes. Unfortunately, public discourse seems to include an unusually high number of scientists who are very good at the "being grumpy about everything" part and not so good at the "being able to be willing to believe anything" part. Notice that Medina said "be able to be willing to believe", not "be willing to believe". I think that some people are less able to be willing to believe something they don't already believe, which makes them not especially good candidates to be scientists.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

May 20, 2008 12:47 PM

Cognitive Surplus and the Future of Programming

the sitcom All in the Family

I grew up on the sitcoms of the 1970s and 1980s. As kids, we watched almost everything we could find in reruns, whether from the '60s or the '70s, and I enjoyed so many of them. By the time I got to college, I had well-thought-out ideas on why The Dick Van Dyke Show remains one of the best sitcoms ever, why WKRP in Cincinnati was underrated for its quality, and why All in the Family was _the_ best sitcom ever. I still hold all these biases in my heart. Of course, I didn't limit myself to sitcoms; I also loved light-action dramas, especially The Rockford Files.

Little did I know then that my TV viewing was soaking up a cognitive surplus in a time of social transition, or that it had anything in common with gin pushcarts in the streets of London at the onset of the Industrial Revolution.

Clay Shirky has published a wonderful little essay, Gin, Television, and Social Surplus, that taught me these things and put much of what we see happening on the web into the context of a changing social, cultural, and economic order. Shirky contends that, as our economy and technology evolve, a "cognitive surplus" is created. Energy that used to be spent on activities required in the old way is now freed for other purposes. But society doesn't know what to do with this surplus immediately, and so there is a transition period where the surplus is dissipated in (we hope) harmless ways.

My generation, and perhaps my parents', was part of this transition. We consumed media content produced by others. Some denigrate that era as one of mindless consumption, but I think we should not be so harsh. Shows like All in the Family and, yes, WKRP in Cincinnati often tackled issues on the fault lines of our culture and gave people a different way to be exposed to new ideas. Even more frivolous shows such as The Dick Van Dyke Show and The Rockford Files helped people relax and enjoy, and this was especially useful for those who were unprepared for the expectations of a new world.

We are now seeing the advent of the new order in which people are not relegated to consuming from the media channels of others but are empowered to create and share their own content. Much attention is given by Shirky and many, many others to the traditional media such as audio and video, and these are surely where the new generation has had its first great opportunities to shape its world. As Shirky says:

Here's something four-year-olds know: A screen that ships without a mouse ships broken. Here's something four-year-olds know: Media that's targeted at you but doesn't include you may not be worth sitting still for.

But as I've been writing about here, let's not forget the next step: the power to create and shape the media themselves via programming. When people can write programs, they are not relegated even to using the media they have been given but are empowered to create new media, and thus to express and share ideas that may otherwise have been limited to the abstraction of words. Flickr and YouTube didn't drop from the sky; people with ideas created new channels of dissemination. The same is true of tools like Photoshop and technologies such as wikis: they are ideas turned into reality through code.

Do read Shirky's article, if you haven't already. It has me thinking about the challenge we academics face in reaching this new generation and engaging them in the power that is now available to them. Until we understand this world better, I think that we will do well to offer young people lots of options -- different ways to connect, and different paths to follow into futures that they are creating.

One thing we can learn from the democratized landscape of the web, I think, is that we are not offering one audience many choices; we are offering many audiences the one or two choices each that they need to get on board. We can do this through programming courses aimed at different audiences and through interdisciplinary major and minor programs that embed the power of computing in the context of problems and issues that matter to our students.

Let's keep around the good old CS majors as well, for those students who want to go deep creating the technology that others are using to create media and content -- just as we can use the new technologies and media channels to keep great old sitcoms available for geezers like me.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

May 19, 2008 3:55 PM

"Rebooting Computing" Summit

I still have a couple of entries to make on SIGCSE 2008 happenings. If I don't hurry up, it will be time for reports from PLoP or OOPSLA! But I do have a bit of related good news to post...

I've received an invitation to Peter Denning's "Rebooting Computing" summit, which I first mentioned when covering Denning's talk at SIGCSE. The summit is scheduled for January 2009 and is part of Denning's NSF-funded Resparking Innovation in Computing Education project. This will be a chance to spend a few days with others thinking about this issue, to outline concrete steps that we all might take to make change. I've written about this issue frequently here, most recently in the form of studio-based computing, project-based and problem-based learning, and programming for non-CS folks who fold computing into their own work, like scientists and artists. I'm excited about this chance.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

May 15, 2008 9:15 AM

Being There

the film Being There

I sometimes talk about lecture (say, here) as not being the optimal way for students to learn. That doesn't mean that I think lecture has no value at all. I still lecture, though I prefer to punctuate my disquisition with occasional problem breaks, during which students try out some idea. Without those breaks, the active learning they afford, and the feedback they can give students about where they stand, I sometimes wonder how much value being in class with me for seventy-five minutes has.

It turns out that there is probably value even in "just listening". Mark Guzdial recently described work by a psychology grad student that explains the relationship between learning and reading text, hearing narration, and viewing images. Most people learn more efficiently when they hear an explanation while looking at text, code, or diagrams. If they read the same explanation while looking at the text, code, or diagrams, it will take them longer to learn the material to the same depth.

So, coming to class and hearing a good lecture can be a good investment of time. It jump-starts the brain. Of course, the student still needs to read and work through problems at home later, too. Reading and solving problems give the mind an opportunity to rehearse and to process material more deeply. The result of listening to lecture followed by intense study can be a powerful form of learning.

I encourage students to take advantage of all their modalities. Augmenting lecture with opportunities for practice and feedback gives them a strong combination of learning styles in class. Then, as often as I can, I provide students with written notes that contain both my explanations and the in-class exercises we did. This allows them to recall their in-class experience as much as possible. On the occasions when students really must miss class, they can get a flavor of what happened, but reading the notes is a poor substitute for experiencing the class live. Then, I assign readings from a text or other sources that supplement the material we cover in class with a different presentation. Finally, I ask students to do a significant amount of project work, which gives them the chance to learn by doing while exercising their knowledge of the material in ways that make connections in their minds. I hope that this multi-faceted approach maximizes student opportunities to learn deeply and come to appreciate what they learn.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 13, 2008 9:15 AM

Solid and Relevant

I notice a common rhetorical device in many academic arguments. It goes like this. One person makes a claim and offers some evidence. Often, the claim involves doing something new or seeing something in a new way. The next person rebuts the argument with a claim that the old way of doing or seeing things is more "fundamental" -- it is the foundation on which other ways of doing and seeing are built. Oftentimes, the rebuttal comes with no particular supporting evidence, with the claimant relying on many in the discussion to accept the claim prima facie. We might call this The Fundamental Imperative.

This device is standard issue in the CS curriculum discussions about object-oriented programming and structured programming in first-year courses. I recently noticed its use on the SIGCSE mailing list, in a discussion of what mathematics courses should be required as part of a CS major. After several folks observed that calculus was being de-emphasized in some CS majors, in favor of more discrete mathematics, one frequent poster declared:

(In a word, computer science is no longer to be considered a hard science.)

If we know [the applicants'] school well we may decide to treat them as having solid and relevant math backgrounds, but we will no longer automatically make that assumption.

Often, the conversation ends there; folks don't want to argue against what is accepted as basic, fundamental, good, and true. But someone in this thread had the courage to call out the emperor:

If you want good physicists, then hire people who have calculus. If you want good computer scientists, then hire people who have discrete structures, theory of computation, and program verification.

I don't believe that people who are doing computer science are not doing "hard science" just because it is not physics. The world is bigger than that.

...

You say "solid and relevant" when you really should be saying "relevant". The math that CS majors take is solid. It may not be immediately relevant to problems [at your company]. That doesn't mean it is not "solid" or "hard science".

I sent this poster a private "thank you". For some reason, people who drop the The Fundamental Imperative into an argument seem to think that it is true absolutely, regardless of context. Sure, there may be students who would benefit from learning to program using a "back to the basics" approach, and there may be CS students for whom calculus will be an essential skill in their professional toolkits. But that's probably not true of all students, and it may well be that the world has changed enough that most students would benefit from different preparation.

"The Fundamental Imperative" is a nice formal name for this technique, but I tend to think of it as "if it was good enough for me...", because so often it comes down to old fogies like me projecting our experience onto the future. Both parties in such discussions would do well not to fall victim to their own storytelling.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

May 09, 2008 8:03 AM

Verdict Is In On One OOPSLA Submission

The verdict is in on the paper we wrote at ChiliPLoP and submitted to Onward!: rejected. (We are still waiting to hear back on our Educators' Symposium submission.) The reviews of our Onward! paper were mostly on the mark, both on surface features (e.g., our list of references was weak) and on the deeper ideas we offer (e.g., questions about the history of studio approaches, and questions about how the costs will scale). We knew that this submission was risky; our time was simply too short to afford enough iterations and legwork to produce a good enough paper for Onward!.

I found it interesting that the most negative reviewer recommended the paper for acceptance. This reviewer was clearly engaged by the idea of our paper and ended up writing the most thorough, thoughtful review, challenging many of our assumptions along the way. I'd love to have the chance to engage this person in conversation at the conference. For now, I'll have to settle for pointing out some of the more colorful and interesting bits of the review.

In at least one regard, this reviewer holds the traditional view about university education. When it comes to the "significant body of knowledge that is more or less standard and that everyone in the field should acquire at some point in time", "the current lecture plus problem sets approach is a substantially more efficient and thorough way to do this."

Agreed. But isn't it more efficient to give the students a book to read? A full prof or even a TA standing in a big room is an expensive way to demonstrate standard bodies of knowledge. Lecture made more sense when books and other written material were scarce and expensive. Most evidence on learning is that lecture is actually much less effective than we professors (and the students who do well in lecture courses) tend to think.

The reviewer does offer one alternative to lecture: "setting up a competition based on mastery of these skills". Actually, this approach is consistent with the spirit of our paper's studio-based, apprenticeship-based, and project-based approach. Small teams working to improve their skills in order to win a competition could well inhabit the studio. Our paper tended to overemphasize the softer collaboration of an idyllic large-scale team.

This comment fascinated me:

Another issue is that this approach, in comparison with standard approaches, emphasizes work over thinking. In comparison with doing, for example, graph theory or computational complexity proofs, software development has a much lower ratio of thought to work. An undergraduate education should maximize this ratio.

Because I write a blog called Knowing and Doing, you might imagine that I think highly of the interplay between working and thinking. The reviewer has a point: an education centered on projects in a studio must be certain to engage students with the deep theoretical material of the discipline, because it is that material which provides the foundation for everything we do and which enables us to do and create new things. I am skeptical of the notion that an undergrad education should maximize the ratio of thinking to doing, because thinking unfettered by doing tends to drift off into an ether of unreality. However, I do agree that we must try to achieve an appropriate balance between thinking and doing, and that a project-based approach will tend to list toward doing.

One comment by the reviewer reveals that he or she is a researcher, not a practitioner:

In my undergraduate education I tried to avoid any course that involved significant software development (once I had obtained a basic mastery of programming). I believe this is generally appropriate for undergraduates.

Imagine the product of an English department saying, "In my undergraduate education I tried to avoid any course that involved significant composition (once I had obtained a basic mastery of grammar and syntax). I believe this is generally appropriate for undergraduates." I doubt this person would make much of a writer. He or she might be well prepared, though, to teach lit-crit theory at a university.

Most of my students go into industry, and I encourage them to take as many courses as they can in which they will build serious pieces of software with intellectual content. The mixture of thinking and doing stretches them and keeps them honest.

An education system that produces both practitioners and theoreticians must walk a strange line. One of the goals of our paper was to argue that a studio approach could do a better job of producing both researchers and practitioners than our current system, which often seems to do only a middling job by trying to cater to both audiences.

I agree wholeheartedly, though, with this observation:

A great strength of the American system is that it keeps people's options open until very late, maximizing the ability of society to recognize and obtain the benefits of placing able people in positions where they can be maximally productive. In my view this is worth the lack of focus.

My colleagues and I need to sharpen our focus so that we can communicate more effectively the notion that a system based on apprenticeship and projects in a studio can, in fact, help learners develop as researchers and as practitioners better than a traditional classroom approach.

The reviewer's closing comment expresses rather starkly the challenge we face in advocating a new approach to undergraduate education:

In summary, the paper advocates a return to an archaic system that was abandoned in the sciences for good reason, namely the inefficiency and ineffectiveness of the advocated system in transmitting the required basic foundational information to people entering the field. The write-up itself reflects naive assumptions about the group and individual dynamics that are required to make the approach succeed. I would support some of the proposed activities as part of an undergraduate education, but not as the primary approach.

The fact that so many university educators and graduates believe our current system exists in its current form because it is more efficient and effective than the alternatives -- and that it was designed intentionally for these reasons -- is a substantial cultural obstacle to any reform. Such is the challenge. We owe this reviewer our gratitude for laying out the issues so well.

In closing, I can't resist quoting one last passage from this review, for my friends in the other sciences:

The problem with putting students with no mastery of the basics into an apprenticeship position is that, at least in computer science, they are largely useless. (This is less true in sciences such as biology and chemistry, which involve shallower ideas and more menial activities. But even in these sciences, it is more efficient to teach students the basics outside of an apprenticeship situation.)

The serious truth behind this comment is the one that explains why building an effective computer science research program around undergraduates can be so difficult. The jocular truth behind it is that, well, CS is just plain deeper and harder! (I'll duck now.)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

May 08, 2008 6:42 PM

Old Teaching Wisdom

Not from me, but from The Common School Journal, an early-1800s primer on teaching:

Make no effort to simplify language.

Also: Treat all students as equals, with the expectation that all participate and learn. Teach where each student lacks.

I think I have a more detailed essay on this topic in me, in terms of how we teach programming and software development. But it will have to wait for another day. In any case, my agedness seems to be growing asymptotically faster than my wisdom.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 06, 2008 4:40 PM

Optimizing Education

Brian Marick lamented recently that his daughter's homework probably wasn't affecting her future in the same way that some of his school experiences affected his. I've had that feeling, too, but sometimes wonder whether (1) my memory is good enough to draw such conclusions and (2) my daughters will remember key experiences from their school days anyway. After teaching for all these years I am sometimes surprised by what former students remember from their time in my courses, and how those memories affect them.

Brian's mention of New Math elicited some interesting comments. Kevin Lawrence hit on a point that has been on my mind in two contexts lately:

A big decision point in education is whether you are optimizing for people who will go on to be very good at a subject or for people who find it difficult.

In the context of university CS curricula, I often field complaints from colleagues here and everywhere about how the use of (graphics | games | anything post-1980 | non-scientific applications) in CS courses is a dumbing down of our curriculum. These folks claim that we are spending too much time catering to folks who won't succeed in the discipline, or at least excel, and that at the same time we drive away the folks who would be good at CS but dislike the "softness" of the new approach.

In the context of reaching out to pre-university students, to show folks cool and glitzy things that they might do in computer science, I sometimes hear the same sort of thing. Be careful, folks say, not to popularize the science too much. We might mislead students into thinking that CS is not serious, or that it is easy.

I fully agree that we don't want to mislead middle schoolers or CS majors about the content or rigor of our discipline, or to give the impression that we cannot do serious and important work. But physics students and math geeks are not the only folks who can or should use computing. They are most definitely not the only folks who can make vital contributions to the discipline. (We can even learn from people who quote "King Lear".)

By not reaching out to students with different views and interests, we do computer science a disservice. Once they are attracted to the discipline and excited to learn, we can teach them all about rigor and science and math. Some of those folks won't succeed in CS, but then again neither do some of the folks who come in with the more traditional "geeky" interests.

If this topic interests you, follow the trail from Brian's blog to two blog entries by Kevin Lawrence, one old and one new. Both are worth a read. (I always knew there was a really good reason to enable comments on my blog -- Alan Kay might drop by!)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

May 03, 2008 10:10 PM

Another Thesis Defense

I may not be a web guy, but some of my students are -- and very good ones. Back in December, I wrote about one of my students, Sergei Golitsinski, defending an MA thesis in Communications, which used computing to elucidate a problem in that discipline. For that study, he wrote tools that allowed him to trace the threads of influence in a prominent blog-driven controversy.

Sergei finally defended his MS thesis in computer science yesterday. Its title -- "Specification and Automatic Code Generation of the Data Layer for Data-Intensive Web-Based Applications" -- sounds like the usual thesis title, but as is often the case the idea behind it is really quite accessible. This thesis shows how you can use knowledge about your web application to generate much of the code you need for your site.

I like this work for several reasons. First, it was all about finding patterns in real applications and using them to inform software development. Second, it focused on how to use domain knowledge to get leverage from the patterns. Third, it used standard language-processing ideas to create a modeling language and then use models written in it to generate code. This thesis demonstrates how several areas of computer science -- database, information storage and retrieval, and programming languages among them -- can work together to help us write programs to do work for us. I also like it because Sergei applied his ideas to his own professional work and took a critical look at what the outcome means for his own practice.
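
The flavor of the idea, not Sergei's actual system, can be sketched in a few lines of Ruby: a small model of an entity (here a hypothetical hash of a class name and its fields) drives generation of data-layer code.

```ruby
# A minimal sketch of model-driven code generation, assuming a toy
# model format: { name: "ClassName", fields: [:field, ...] }.
# This is illustrative only, not the thesis's modeling language.
def generate_class(model)
  name   = model[:name]
  fields = model[:fields]
  code  = "class #{name}\n"
  code << "  attr_accessor #{fields.map { |f| ":#{f}" }.join(', ')}\n"
  code << "  def initialize(#{fields.join(', ')})\n"
  fields.each { |f| code << "    @#{f} = #{f}\n" }
  code << "  end\nend\n"
end
```

Feeding the generated source to `eval` (or writing it to a file) yields a working class, which is the leverage such a tool provides: write the model once, and the repetitive data-layer code writes itself.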

Listening to the defense, I had two favorite phrases. The first was recursive weakness. He used this term in reference to weak entities in a database that are themselves parents to weak entities. But it brought to mind so many images for the functional programmer in me. (I'm almost certainly recursively weak myself, but where is the base case?) The second arose while discussing alternative approaches to a particular problem. Referring to one, he said trivial approach; non-trivial implementation. It occurred to me that so many ideas fall into this category, and part of understanding your domain well is recognizing them. Sometimes we need to avoid their black holes; other times, we need their challenges. Another big part of becoming a master is knowing which path to choose once you have recognized them.

Sergei is a master, and soon he will have a CS degree that says so. But like all masters, he has much to learn. When I wrote about his previous defense, his plan was up in the air but pointing toward applying CS in the world of communications. Since then, he has accepted admission to a Ph.D. program in communications at the University of Maryland, where he hopes to be in the vanguard of a new discipline he calls computational communications. I look forward to watching his progress.

You can read his CS thesis on-line, and soon all of the code used in his study will be, too.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

May 01, 2008 7:11 PM

Some Lessons from the Ruby Iteration

I am not a web guy. If I intend to teach languages (PHP) or frameworks (Rails) with the web as their natural home, I need to do a lot more practice myself. It's too easy to know how to do something and still not know it well enough to teach it well. Unexpected results under time pressure create too much trouble.

MySQL, no. PostgreSQL, yes. This, twenty years after cutting my database teeth on Ingres.

Ruby's dynamic features are so, so nice. Still, I occasionally find myself wishing for Smalltalk. First love never dies.

Fifteen hours of instruction -- 5 weeks at 3 hours per week -- is plenty of time to teach most or all of the ideas in a language like bash, PHP, or Ruby. But the instructor still needs to select specific examples and parts of the class library carefully. It's too easy to start down a path of "Now look at this [class, method, primitive]..."

When I succeeded in selecting carefully, I suffered from persistent omitter's remorse: "But I wish I could have covered that..." Sometimes that is what students wanted to see. But most of the time they can figure that out. What they want is some insight. What insight could I have shared had I covered that instead of this?

If you want to know what students want, ask them. Easy to say, hard to do unless I slow down occasionally to reflect.

Practice, practice, practice. That's where students learn. It's also where students who don't learn don't learn.

Oh, and professor: That "Practice, practice, practice" thing -- it applies to you, too. You'll remember just how much fun programming can be.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 11, 2008 4:44 PM

SIGCSE Day 3 -- CS Past, Present, and Future

[A transcript of the SIGCSE 2008 conference: Table of Contents]

Of the 20 greatest engineering achievements of the 20th century, two lie within computing: computers (#8) and the Internet (#13). Those are broad categories defined around general-purpose tools that have affected the lives of almost every person and the practice of almost every job.

In 2004, human beings harvested 10 quintillion grains of rice. In 2004, human beings fabricated 10 quintillion transistors. 10 quintillion is a big number: 10,000,000,000,000,000,000.

Ed Lazowska

Ed Lazowska of the University of Washington opened his Saturday luncheon talk at SIGCSE with these facts, as a way to illustrate the broad effect that our discipline has had on the world and the magnitude of the discipline today. He followed by putting the computational power available today into historical context. The amount of computational power that was available in the mainstream in the 1950s is roughly equivalent to an electronic greeting card today. Jump forward to the Apollo mission to the moon, and that computational power is now available in a Furby. Lazowska didn't give sources for these claims or data to substantiate them, but they sound reasonable to me within an order of magnitude.

The title of Lazowska's talk was "Computer Science: Past, Present, and Future", and it was intended to send conference attendees home energized about our discipline. He energized folks with cool facts about computer science's growth and effect. Then he looked to the future, at some of the challenges and some of the steps being taken to address them.

One of the active steps being taken within computing is the Computing Community Consortium, a joint venture of the National Science Foundation and the Computing Research Association, whose mission is to "support the computing research community in creating compelling research visions and the mechanisms to realize these visions". According to Lazowska, the CCC hopes to inspire "audacious and inspiring research" while at the same time articulating visions of the discipline to the rest of the world. Lazowska is one of the leaders of the group. The group's twin goals are both worth the attention of our discipline's biggest thinkers.

As I listened to Lazowska describe the CCC's initiatives, I was reminded of our discipline's revolutionary effect on other disciplines and industries. Lazowska reported that two or two and a half of the 20th century's greatest engineering results were computing, but take a look at the rest of the list. Over the last half century, computers and the Internet have played an increasingly important role in many of these greatest achievements, from embedded computers in automobiles, airplanes, and spacecraft to the software that has opened new horizons in radio and television, telephones, health technologies, and most of the top 20.

Now take a look at the Grand Challenges for Engineering in the 21st Century, which Lazowska pointed us to. Many of these challenges depend crucially upon our discipline. Here are seven:

  • secure cyberspace
  • enhance virtual reality
  • advance personalized learning
  • engineer the tools of scientific discovery
  • advance health informatics
  • reverse-engineer the brain
  • engineer better medicines

But imagine doing any of the other seven without involving computing in an intimate way!

I've written a few times about how science has come to be a computational endeavor. Lazowska gave an example that I have reported on before as part of the next generation of science: databases. A database makes it possible to answer questions that you think of next year, not just the ones you thought of five years ago, when you wrote your proposal to NSF and when you later defined the format of your flat text file. He illustrated his idea with examples of projects at the Ocean Observatories Initiative, and the Quality of Life Technology Center. He also mentioned the idea of prosthetics as the "future of interfaces", which is a natural research and entrepreneurial opportunity for CS students. You may recall having read about this entrepreneurial connection in this blog way back!

For his part, Lazowska suggested advancing personalized learning as an area in which computing could have an immeasurable effect. Adaptive one-on-one tutoring is something that could reach an enormous unserved population and help develop the human capital that could revolutionize the world. This is actually the area into which I was evolving back when I was doing AI research, intelligent tutoring systems. I remain immensely interested in the area and what it could mean for the world. Many folks are uncomfortable with the idea of "computers teaching our children", but I think it's simply a part of the evolution of communication that computer science embodies. The book is a means of educating, communicating, and sharing information, but it is a one-track medium. The computer is a multiple-track medium, a way to deliver interactive and dynamic content to a wide audience. A "dynabook"... I wonder if anyone has been promoting this idea for, say, oh, thirty years?

Fear of computers playing a human-like role in human interaction is nothing new. It reminds me of another story Lazowska told, from Time Magazine's article on the computer as the 1982 Machine of the Year. The article mentions CADUCEUS, one of the medical expert systems that was at the forefront of AI's focus on intelligent systems in the '70s and '80s. Here's the best passage:

... while it is possible that a family doctor would recognize 4,000 different symptoms, CADUCEUS is more likely to see patterns in what patients report and can then suggest a diagnosis. The process may sound dehumanized, but in one hospital where the computer specializes in peptic ulcers, a survey of patients showed that they found the machine "more friendly, polite, relaxing and comprehensible" than the average physician.

There are days when I am certain that we can create an adaptive tutoring system that is more relaxing and comprehensible than I am as a teacher, and probably friendlier and politer to boot.

Lazowska closed with an exhortation that computer scientists adopt the stance of the myth buster in trying to educate the general population, whether myths about programming (e.g., "Programming is a solitary activity."), employment ("Computing jobs will all go overseas."), or intrinsic joy ("There are no challenges left."). He certainly gave his audience plenty of raw material for busting one of the myths about the discipline not being interesting: "Computer science lacks opportunities to change the world." Not only do we change the world directly in the form of things like the Internet; these days, when almost anyone changes the world, they do so by using computing!

Lazowska's talk was perhaps too long, trying to pack more information into an hour than we could comfortably digest. But it was a good way to close out SIGCSE, given that one of its explicit themes seemed to be engaging the world and that the buzz everywhere I went at the conference was about how we need to reach out more and communicate more effectively.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 07, 2008 8:41 PM

When Having No Class Is Okay

Recently, both Lance Fortnow and Michael Mitzenmacher wrote entries on how often a prof can miss a class during the semester. This is an issue for any instructor who has a professional life. Between conferences to attend and professional service duties, there will always be a conflict some time.

I have a standing solution for myself, developed through many years of teaching, going to conferences, reviewing for NSF, and serving on program committees. I teach a Tuesday/Thursday schedule in a fifteen-week semester, so a full set of class meetings is thirty. Every semester, I just plan for twenty-eight.

In the fall, I have standing plans to attend OOPSLA and, until a couple of years ago, PLoP. In the spring come SIGCSE, ChiliPLoP, or both. I can usually cover 28 sessions both semesters with a little calendar help. Until the 2007-2008 academic year, we always had two days during Thanksgiving week, which gave my courses a 29th meeting day. The spring has Spring Break. When my conference schedule falls just wrong and leaves me a day short, I will ask someone to guest lecture.

My students don't seem to mind. I usually leave them with a good project to work on while I'm gone, sometimes larger than the usual project given that they have more time to spend on it. In the end, what students learn is less about what I do in class than about what they do with the course material, and a good-sized project is usually well worth the time allotted. When I get back, we can debrief the project and, when appropriate, discuss what I learned while I was away. Later, I can fold what I learned into future courses, which makes the two class days missed an investment in the experience I can offer.

This semester I faced an unusual choice. Instead of one 15-week course, I am teaching three 5-week courses. My away time, for SIGCSE, all fell during one of the 5-week sessions. 28 out of 30 seems reasonable, but 8 out of 10 did not. So I arranged to meet my students for a couple of "make-up sessions". We held one the day before I left for Portland. After we had completed sessions 8 and 9 after break, we decided that 9 out of 10 had been enough, and we called it a wrap. I was willing to do a tenth session if students were interested, but they seemed ready to move on, so we did.

The choices we face at a primarily undergraduate "teaching university" are probably different from those faced at bigger research schools. First, I suspect that some if not all of Lance's and Michael's teaching is done in graduate classes. Grad students are a different audience, one perhaps better able to use time away from class productively while still learning new material. Second, at the bigger schools, teaching a class for undergrads often means having one or more graduate TAs to help. These folks are often more than capable of pinch-hitting for an extra absence or two during the semester, with no apparent loss in quality to the students. (If you believe some of the stereotypes about research-oriented faculty, then you might think that the students could be better off with a TA filling in. But I think that stereotype is overblown and often just wrong.)

Another option available to us these days is videocasting. One of my colleagues who travels a lot in-semester sometimes records a lecture for his students in a classroom that supports showing the professor and the projected image in the video. This takes time, if only because there is a tendency for an instructor to want not to leave blemishes in a videocast recorded for posterity -- even little glitches that are normal in any in-person presentation. I've not tried this yet, but I might one day soon when the conditions are right. Done well, this could be better than even a well-prepared set of lecture notes and questions.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

April 05, 2008 11:40 AM

One Observation from Short Iterations

A while back, I wondered out loud how I might be able to improve my "topics in languages" courses over the course of the semester, with them being three 5-week iterations instead of the usual 15-week course. The languages in the three courses have been different -- bash, PHP, and now Ruby -- which influences how and when I teach what, but all of the courses are about scripting, so there is a common mindset. This has made it possible, for example, to reuse several in-class examples and homework exercises, as a way for students participating in more than one of the iterations to compare and contrast how the languages work.

I have noticed one unexpected phenomenon from the vantage point of closer-than-usual iterations: How I teach a language I know is different from how I teach a language I don't know. Or perhaps I should say, how I want to teach a language I know is different from how I want to teach a language I don't know.

Going into the bash section, I knew a fair amount of shell scripting already and felt a desire to delve into the areas that I didn't know as well. But the shell was new enough to most of the students, and different enough from the languages they knew, that I was able comfortably in my mind to organize the course around the basic principles of the Unix Way, especially pipes. Using a good secondary text, Classic Shell Scripting, helped, too.

Then came PHP, about which I knew relatively little. I found myself paying close attention to low-level syntactic issues as I myself learned the language well. "Look at this cool thing I just learned about variables..." was a typical expression in class. Some of the examples I used were rather un-PHP-like as I explored the boundaries of the language.

I am now teaching Ruby, a language I know and like pretty well. I find myself wanting to jump past the low-level stuff like variables and classes right to the application level, where students can see Ruby in action. This makes sense for me, since I know most of that low-level stuff cold, but perhaps not so much for a student seeing Ruby for the first time. Fortunately, many of the ideas in Ruby are similar enough to the ones they have seen in the previous scripting languages and in other programming languages. Ruby also is pretty easy to read, as was PHP, which makes code approachable. Still, I am pushing on myself to be sure that the applications I show them progress in a reasonable way from simpler language features to more complex, so that students can grow in their understanding smoothly. This week, I evolved a simple diff script from Everyday Scripting with Ruby as our first example and wrote a script for finding popular pages in a server log based on Tim Bray's chapter in Beautiful Code.
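
The heart of that second example can be sketched in a few lines (this is my own minimal version, not Bray's code): count requests per page in an Apache-style log, where the requested path is the seventh whitespace-separated field.

```ruby
# Find the most-requested pages in a web server access log.
# Assumes common log format lines such as:
#   1.2.3.4 - - [01/Apr/2008:12:00:00 -0500] "GET /index.html HTTP/1.1" 200 1234
def popular_pages(lines, top_n = 3)
  counts = Hash.new(0)            # default count of 0 for unseen paths
  lines.each do |line|
    path = line.split[6]          # the requested path in "GET /path HTTP/1.1"
    counts[path] += 1 if path
  end
  counts.sort_by { |_path, n| -n }.first(top_n)
end
```

The idiomatic ingredients -- `Hash.new(0)` for counting, `sort_by` for ranking -- are exactly the kind of thing I want students to absorb as they read application-level code.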

I'll have to ask the students who have been in all three sections if they noticed differences in how I approached the languages and what, if any, difference it made to them.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

April 03, 2008 4:39 PM

Astrachan's Law for a New Generation?

Owen is reputed to have said something like "Don't give as a programming assignment something the student could just as easily do by hand." (I am still doing penance, even though Lent ended two weeks ago.) This has been dubbed Astrachan's Law, perhaps by Nick Parlante. In the linked paper, Parlante says that showmanship is the key to the Law, that

A trivial bit of code is fine for the introductory in-lecture example, but such simplicity can take the fun out of an assignment. As jaded old programmers, it's too easy to forget the magical quality that software can have, especially when it's churning out an unimaginable result. Astrachan's Law reminds us to do a little showing off with our computation. A program with impressive output is more fun to work on.

I think of this Astrachan's Law in a particular way. First, I think that it reaches beyond showmanship: Not only do students have less fun working on trivial programs, they don't think that trivial programs are worth doing at all -- which means they may not practice enough or at all. Second, I most often think of Astrachan's Law as talking about data. When we ask students to convert Fahrenheit to Celsius, or to sum ten numbers entered at the keyboard, we waste the value of a program on something that can be done faster with a calculator or -- gasp! -- a pencil and paper. Even if students want to know the answer to our trivial assignment, they won't see a need to master Java syntax to find it. You don't have to go all the way to data-intensive computing, but we really should use data sets that matter.
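
To make the contrast concrete, here is a sketch in Ruby (the stopword list is a tiny illustrative one of my own, not a standard list): instead of summing ten numbers typed at the keyboard, ask students to find the most frequent words in a text far too long to tally by hand.

```ruby
# An assignment where the data matters: word frequencies in a real
# text. No calculator or pencil-and-paper shortcut will do the job.
STOPWORDS = %w[the a an and of to in is it that]

def word_frequencies(text)
  counts = Hash.new(0)
  text.downcase.scan(/[a-z']+/) do |word|
    counts[word] += 1 unless STOPWORDS.include?(word)
  end
  counts
end

def top_words(text, n = 10)
  word_frequencies(text).sort_by { |_word, count| -count }.first(n)
end
```

Point the same dozen lines at the full text of "Moby-Dick" and the program suddenly answers a question the student could not answer any other way.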

Yesterday, I encountered what might be a variant or extension of Astrachan's Law.

John Zelle of Wartburg College gave a seminar for our department on how to do virtual reality "on a shoestring" -- for $2000 or less. He demonstrated some of his equipment, some of the software he and his students have written, and some of the programs written by students in his classes. His presentation impressed me immensely. The quality of the experience produced by a couple of standard projectors, a couple of polarizing filters, and a dollar pair of paper 3D glasses was remarkable. On top of that, John and his students wrote much of the code driving the VR, including the VR-savvy presentation software.

Toward the end of his talk, John was saying something about the quality of the VR and student motivation. He commented that it was hard to motivate many students when it came to 3D animation and filmmaking these days because (I paraphrase) "they grow up accustomed to Pixar, and nothing we do can approach that quality". In response to another question, he said that a particular something they had done in class had been quite successful, at least in part because it was something students could not have done with off-the-shelf software.

These comments made me think about how, in the typical media computation programming course, students spend a lot of time writing code to imitate what programs such as Photoshop and Audacity do. To me, this seems empowering: the idea that a freshman can write code for a common Photoshop filter in a few lines of Java or Python, at passable quality, tells me how powerful being able to write programs makes us.

But maybe to my students, Photoshop filters have been done, so that problem is solved and not worthy of being done again. Like so much of computing, such programs are so much a part of the background noise of their lives that learning how to make them work is as appealing to them as making a ball-point pen is to people of my age. I'd hope that some CS-leaning students do want to learn such trivialities, on the way to learning more and pushing the boundaries, but there may not be enough folks of that bent any more.

On only one day's thought, this is merely a conjecture in search of supporting evidence. I'd love to hear what you think, whether pro, con, or other.

I do have some anecdotal experience that is consistent in part with my conjecture, in the world of 2D graphics. When we first started teaching Java in a third-semester object-oriented programming course, some of the faculty were excited by what we could do graphically in that course. It was so much more interesting than some of our other courses! But many students yawned. Even back in 1997 or 1998, college students came to us having experienced graphics much cooler than what they could do in a first Java course. Over time, fewer and fewer students found the examples knock-out compelling; the graphics became just another example.

If this holds, I suppose that we might view it as a new law, but it seems to me a natural extension of Astrachan's Law, a corollary, if you will, that applies the basic idea to the realm of applications, rather than data.

My working title for this conjecture is the Pixar Effect, from the Zelle comment that crystallized it in my mind. However, I am open to someone else dubbing it the Wallingford Conjecture or the Wallingford Corollary. My humility is at battle with my ego.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

April 01, 2008 5:45 PM

Ruby Tuesday

Is anybody home? After a flurry of writing from SIGCSE, I returned home to family time and plenty of work at the office. The result has been one entry in ten days. I look forward to finishing up my SIGCSE reports, but they appear to lie a bit further ahead, as the next week or so is busy. I have a few new topics in the hopper waiting for a few minutes to write as well.

One bit of good news is that part of my busy-ness this week and next is launching the third iteration of my language topics course. We've done bash and PHP and are now moving on to Ruby, one of my favorite languages. Shell scripting is great, but its tools are too limited to make writing bigger programs fun. PHP was better than I expected, but in the end it is really about building web sites, not writing more general programs. (And after a few weeks of using the language, PHP's warts started to grate on me.)

Ruby is... sublime. It isn't perfect, of course, but even its idiosyncrasies seem to get out of my way when I am deep in code. I looked up the definition of 'sublime', as I sometimes do when I use a word which is outside my daily working vocabulary or is misused enough in conversation that I worry about misusing it myself. The first set of definitions has a subtlety reminiscent of Ruby. To "vaporize and then condense right back again" sounds just like Ruby getting out of my way, only for me to find that I've just written a substantial program in a few lines. (My favorite, though, is "well-meaning ineptitude that rises to empyreal absurdity"!)

This is my first time to teach Ruby formally in a course. I hope to use this new course beginning as a prompt to write a few entries on Ruby and what teaching it is like.

There are many wonderful resources for learning about and programming in Ruby. I've suggested that my students use the pickaxe book as a primary reference, even if they use the first edition, a complete version of which is available on-line. In today's class, though, I used a simple evolutionary example from Brian Marick's book Everyday Scripting with Ruby. I hesitated to use this book as the students' primary source because it was originally written for testers without any programming background, and my course is for upper-division CS majors with several languages under their belts. But Brian works through several examples in a way that I find compelling, and I think I can base a few solid sessions on one or two of them.

This book makes me wonder how easy it would be to re-target a book from an audience like non-programming testers to an audience of scripting-savvy programmers who want to learn Ruby's particular yumminess. I know that in the course of writing the book Brian generalized his target audience from testers to the union of three different audiences (testers, business analysts, and programmers). Maybe after I've lived with the book and an audience of student programmers I'll have a better sense of how well the re-targeting worked. If it works for my class, then I'll be inclined to adopt it for the next offering of this course.

Anyway, today we evolved a script for diffing two directories of files for a tester. I liked the flow of development and the simple script that resulted. Now we will move on to explore language features and their use in greater depth. One example I hope to work through soon, perhaps in conjunction with Ruby's regular expressions, is "Finding Things", Tim Bray's chapter in Beautiful Code.
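
For readers who weren't in the room, the kind of script we ended up with looks roughly like this. This is my own after-the-fact sketch, not Brian's code or our exact in-class version; the method name and result format are mine:

```ruby
# Report the files unique to each of two directories, and the common files
# whose contents differ -- the tester's directory-diff task from class.
def dir_diff(dir1, dir2)
  files1 = Dir.entries(dir1).reject { |f| File.directory?(File.join(dir1, f)) }
  files2 = Dir.entries(dir2).reject { |f| File.directory?(File.join(dir2, f)) }

  { :only_in_first  => files1 - files2,
    :only_in_second => files2 - files1,
    :different      => (files1 & files2).select { |f|
      File.read(File.join(dir1, f)) != File.read(File.join(dir2, f))
    } }
end
```

Even as a sketch, it shows why Ruby is pleasant for this sort of work: set difference on arrays, blocks for filtering, and a hash literal to package the answer, with no ceremony in between.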

Oh, and I must say that this is the first time that one of my courses has a theme song -- and a fine theme song, indeed. Now, if only someone would create a new programming language called "Angie", I would be in heaven.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

March 26, 2008 7:27 AM

Data-Intensive Computing and CS Education

An article in the March 2008 issue of Computing Research News describes a relatively new partnership among NSF, Google, and IBM to help the academic computing community "explore innovative research and education ideas in data-intensive computing". They define data-intensive computing as a new paradigm in which the size of the data dominates all other performance features. Google's database of the web is one example, but so are terabytes and petabytes of scientific data collected from satellites and earth-bound sensors. On the hardware side of the equation, we need to understand better how to assemble clusters of computers to operate on the data and how to network them effectively. Just as important is the need to develop programming abstractions, languages, and tools that are powerful enough so that we mortals can grasp and solve problems at this massive scale. Google's Map-Reduce algorithm (an idea adapted from the functional programming world) is just a start in this direction.
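
The functional roots of Map-Reduce are easy to see in miniature: map a function over a collection of documents to produce key-value pairs, group by key, then fold (Ruby's inject) each group into a result. Here is a toy word count in that style -- my own sketch of the idea, not Google's API, and with none of the distribution that makes the real thing hard:

```ruby
# Toy word count in the map/reduce style: map each document to (word, 1)
# pairs, group the pairs by word, then reduce each group by summing counts.
def map_phase(docs)
  docs.flat_map { |doc| doc.downcase.scan(/\w+/).map { |w| [w, 1] } }
end

def reduce_phase(pairs)
  pairs.group_by { |word, _| word }
       .map { |word, group| [word, group.map { |_, n| n }.inject(0, :+)] }
       .to_h
end

counts = reduce_phase(map_phase(["to be or not to be", "be true"]))
# counts["be"] => 3, counts["to"] => 2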

This notion of data-intensive computing came up in two of the plenary addresses at the recent SIGCSE conference. Not surprisingly, one was the talk by Google's Marissa Mayer, who encouraged CS educators to think about how we can help our students prepare to work within this paradigm. The second was the banquet address by Ed Lazowska, the chair of Washington's Computer Science department. Lazowska's focus was more on the need for research into the hardware and software issues that undergird computing on massive data sets. (My notes on Lazowska's talk are still in the works.)

This recurring theme is one of the reasons that our Hot Topic group at ChiliPLoP began its work on the assembly and preparation of large data sets for use in early programming courses. What counts as "large" for a freshman surely differs from what counts as "large" for Google, but we can certainly begin to develop a sense of scale in our students' minds as they write code and see the consequences of their algorithms and implementations. Students already experience large data in their lives, with 160 GB video iPods in their pockets. Having them compute on such large sets should be a natural step.

The Computing Research News also has an announcement of a meeting of the Big-Data Computing Study Group, which is holding a one-day Data-Intensive Computing Symposium today in Sunnyvale. I don't know how much of this symposium will report new research results and how much will share background among the players, in order to forge working relationships. I hope that someone writes up the results of the symposium for the rest of us...

Though our ChiliPLoP group ended up working on a different project this year, I expect that several of us will continue with the idea, and it may even be a theme for us at a future ChiliPLoP. The project that we worked on instead -- designing a radically different undergraduate computer science degree program -- has some currency, though, too. In this same issue of the CRN, CRA board chair Dan Reed talks about the importance of innovation in computing and computing education:

As we debate the possible effects of an economic downturn, it is even more important that we articulate -- clearly and forcefully -- the importance of computing innovation and education as economic engines.

[... T]he CRA has created a new computing education committee ... whose charge is to think broadly about the future of computing education. We cannot continue the infinite addition of layers to the computing curriculum onion that was defined in the 1970s. I believe we need to rethink some of our fundamental assumptions about computing education approaches and content.

Rethinking fundamental assumptions and starting from a fresh point of view is just what we proposed. We'll have to share our work with Reed and the CRA.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 20, 2008 4:21 PM

SIGCSE Day 2 -- Plenary Address by Marissa Mayer

[A transcript of the SIGCSE 2008 conference: Table of Contents]

Marissa Mayer

The second day of the conference opened with the keynote address by Google's VP of Search Products and User Experience, Marissa Mayer. She was one of the early hires as the company expanded beyond the founders, and from her talk it's clear that she has been involved with a lot of different products in her time there. She is also something the discipline of computer science could use more of, a young woman in highly-visible technical and leadership roles. Mayer is a great ambassador for CS as it seeks to expand the number of female high-school and college students.

This talk was called Innovation, Design, and Simplicity at Google and illustrated some of the ways that Google encourages creativity in its employees and gets leverage from their ideas and energy. I'll drop some of her themes into this report, though I imagine that the stories I relate in between may not always sync up. Such is the price of a fast-moving talk and five days of receding memory.

Creativity loves constraint.

I have written on this topic a few times, notably in the context of patterns, and it is a mantra for Google, whose home page remains among the least adorned on the web. Mayer said that she likes to protect its minimalist feel even when others would like to jazz it up. The constraints of a simple page force the company to be more creative in how it presents results. I suspect it also played a role in Google developing its cute practice of customizing the company logo in honor of holidays and other special events. Mayer said that minimalism may be a guide now, but it was not necessarily a reason for simplicity in the beginning. Co-founder Sergey Brin created the first Google home page, and he famously said, "I don't do HTML."

Mayer has a strong background both in CS and in CS education, having worked with the undergrad education folks at Stanford as a TA while an undergrad. (She said that it was Eric Roberts who first recommended Google to her, though at the time he could not remember the company's name!) One of her first acts as an employee was to run a user study on doing search from the Google homepage. She said that when users first sat down and brought up the page, they just sat there. And sat there. They were "waiting for the rest of it"! Users of the web were already accustomed to fancy pages and lots of graphics and text. She said Google added its copyright tag line at the bottom of the page to serve as punctuation, to tell the user that that's all there was.

Search is a fixed focus at Google, not a fancy user interface. Having a simple UI helps to harness the company's creativity.

Work on big problems, things users do every day.

Work on things that are easy to explain and understand.

Mayer described in some detail the path that a user's query follows from her browser to Google and back again with search results. Much of that story was as expected, though I was surprised by the fact that there are load balancers to balance the load on the load balancers that hand off queries to processors! Though I might have thought that another level of indirection would slow the process down, it is in fact necessary to ensure that the system doesn't slow down. Even with the web server and the ad server and the mixers, users generally see their results in about 0.2 seconds. How is that for a real-time constraint to encourage technical creativity?

Search is the #2 task performed on the web. E-mail is (still) #1. Though some talking heads have begun to say that search is a mature business in need of consolidation, Google believes that search is just getting started. We know so little about how to do it well, how to meet the user's needs, and how to uncover untapped needs. Mayer mentioned a problem familiar to this old AI guy: determining the meaning of the words used in a query so that they can serve pages that match the user's conceptual intent. She used a nice example that I'll probably borrow the next time I teach AI. When a user asks for an "overhead view", he almost always wants to see a picture!

This points in the direction of another open research area, universal search. The age in which users want to search for text pages, images, videos, etc., as separate entities has likely passed. That partitioning is a technical issue, not a user's issue. The search companies will have to find a way to mix in links to all kinds of media when they serve results. For Google, this also means figuring out how to maintain or boost ad revenue when doing so.

Ideas come from everywhere.

Mayer gave a few examples. One possible source is office hours, which are usually thought of as an academic concept but which she has found useful in the corporate world. She said that the idea for Froogle walked through her office door one day with the scientist who had it.

Another source is experiments. Mayer told a longer story about Gmail. The company was testing it in-house and began to discuss how they could make money from it. She suggested the industry-standard model of giving a small amount of storage for free and then charging for more. This might well have worked, because Google's cost structure would allow it to offer much larger amounts at both pricing levels. But a guy named Paul -- he may be famous, but I don't know his last name -- suggested advertising. Mayer pulled back much as she expected users to do; do people really want Google to read their e-mail and serve ads? Won't that creep them out?

She left the office late that night believing that the discussion was on hold. She came back to work the next morning to find that Paul had implemented an experimental version in a few hours. She was skeptical, but the idea won her over when the system began to serve ads that were perfectly on-spot. Some folks still prefer to read e-mail without ads, but the history of Gmail has shown just how successful the ad model can be.

The insight here goes beyond e-mail. The search ad database can be used on any page on the web. This is huge... Search pages account for about 5% of the pages served on the web. Now Google knew that they could reach the other 95%. How's that for a business model?

To me, the intellectual lesson is this:

If you have an idea, try it out.

This is a power that computer programmers have. It is one of the reasons that I want everyone to be able to program, if only a little bit. If you have an idea, you ought to be able to try it out.

Not every idea will lead to a Google, but you never know which ones will.

Google Books started as a simple idea, too. A person, a scanner, and a book. Oh, and a metronome -- Mayer said that when she was scanning pages she would get out of rhythm with the scanner and end up photocopying her thumbs. Adding a metronome to the system smoothed the process out.

... "You're smart. We're hiring." worked remarkably well at attracting job candidates. We programmers have big egos! Google is one of the companies that has made it okay again to talk about hiring smart people, not just an army of competent folks armed with a software development process, and giving them the resources they need to do big things.

Innovation, not instant perfection.

Google is also famous for not buying into the hype of agile software development. But that doesn't mean that Google doesn't encourage a lot of agile practices. For example, at the product level, it has long practiced a "start simple, then grow" philosophy.

Mayer contrasted two kinds of programmers, castle builders and nightly builders. Companies are like that, too. Apple -- at least to outside appearances -- is a castle-building company. Every once in a while, Steve Jobs et al. go off for a few years, and then come back with an iPod or an iPhone. This is great if you can do it, but only a few companies can make it work. Google is more of a nightly builder. Mayer offered Google News as a prime example -- it went through 64 iterations before it reached its current state. Building nightly and learning from each iteration is often a safer approach, and even companies that are "only good" can make it work. Sometimes, great companies are the result.

Data is a-political.

Mayer didn't mean Republican versus Democrat here, rather that well-collected data provide a more objective basis for making decisions than the preferences of a manager or the guesses of a designer or programmer. Designing an experiment that will distinguish the characteristics you are interested in, running it, and analyzing the data dispassionately are a reliable way to make good decisions. Especially when a leader's intuition is wrong, as Mayer's was on Gmail advertising.

She gave a small plug for using Google Trends as a way to observe patterns in search behavior when they might give an idea about a question of interest. Query volume may not change much, but the content of the queries does.

Users, users, users.

What if some users want more than the minimalist white front page offered by Google? In response to requests from a relatively small minority of users -- and the insistent voices of a few Google designers -- iGoogle is an experiment in offering a more feature-filled portal experience. How well will it play? As is often the case, the data will tell the story.

Give license to dream.

Mayer spent some time talking about the fruits of Google's well-known policy of 20% Time, whereby every employee is expected to spend 1/5 of his or her time working on projects of personal interest. While Google is most famous for this policy these days, like most other ideas it isn't new. At ChiliPLoP this week, Richard Gabriel reported that Eric Schmidt took this idea to Google with him when he left Sun, and Pam Rostal reported that 3M had a similar policy many years ago.

But Google has rightly earned its reputation for the fruits of 20% Time. Google News. Google Scholar. Google Alerts. Orkut. Froogle Wireless. Much of Google Labs. Mayer said that 50% of Google's new products come from these projects, which sounds like a big gain in productivity, not the loss of productivity that skeptics expect.

I have to think that the success Google has had with this policy is tied pretty strongly with the quality of its employees, though. This is not meant to diss the rest of us regular guys, but you have to have good ideas and the talent to carry them out in order for this to work well. That said, these projects all resulted from the passions of individual developers, and we all have passions. We just need the confidence to believe in our passions, and a willingness to do the work necessary to implement them.

Most of the rest of Mayer's talk was a litany of these projects, which one wag in the audience called a long ad for the goodness of Google. I wasn't so cynical, but I did eventually tire of the list. One fact that stuck with me was the description of just how physical the bits of Google Earth are. She described how each image of the Earth's surface needs to be photographed at three or four different elevations, which requires three or four planes passing over every region. Then there are the cars driving around taking surface-level shots, and cameras mounted to take fixed-location shots. A lot of physical equipment is at work -- and a lot of money.

Share the data.

This was the last thematic slogan Mayer showed, though based on the rest of the talk I might rephrase it as the less pithy "Share everything you can, especially the data." Much of Google's success seems based in a pervasive corporate culture of sharing. This extends beyond data to ideas. It also extends beyond Google campus walls to include users.

The data-sharing talk led Mayer to an Easter Egg she could leave us. If you check Google's Language Tools page, you will see Bork, Bork, Bork, a language spoken (only?) by the Swedish chef on the Muppets. Nonetheless, the Bork, Bork, Bork page gets a million hits a year (or was it a day?). Google programmers aren't the only ones having fun, I guess.

Mayer closed with suggestions for computer science educators. How might we prepare students better to work in the current world of computing? Most of her recommendations are things we have heard before: build and use applications, work on large projects and in teams, work with legacy code, understand and test at a large scale, and finally pay more attention to reliability and robustness. Two of her suggestions, though, are ones we don't hear as often and link back to key points in her talk: work with and understand messy data, and understand how to use statistics to analyze the data you collect.

After the talk, folks in the audience asked a few questions. One asked how Google conducts user studies. Mayer described how they can analyze data from live users by modding the key in user cookies to select 1/10 or 1/1000 of the user population, giving those users a different experience, and then looking at characteristics such as click rate and time spent on page compared to the control group.
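
The bucketing trick she described can be sketched in a few lines: hash each user's cookie ID to a stable integer, take it mod the population divisor, and give one residue class the experimental treatment. This is a sketch of the general technique only -- Google's actual implementation is not public, and the names here are mine:

```ruby
require 'digest'

# Deterministically assign a user to an experiment: hash the cookie ID,
# take it mod the number of buckets, and treat bucket 0 as the test group.
# With buckets = 1000, roughly 1/1000 of users see the new treatment.
def in_experiment?(cookie_id, buckets = 1000)
  Digest::MD5.hexdigest(cookie_id).to_i(16) % buckets == 0
end
```

Because the assignment is a pure function of the cookie, the same user lands in the same group on every visit, so their experience stays consistent for the duration of the study.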

The best question was in fact a suggestion for Google. Throughout her talk, Mayer referred repeatedly to "Google engineers", the folks who come up with neat ideas, implement them in code, test them, and then play a big role in selling them to the public. The questioner pointed out that most of those "engineers" are graduates of computer science programs, including herself, Sergey Brin, and Larry Page. He then offered that Google could do a lot to improve the public perception of our discipline if it referred to its employees as computer scientists.

I think this suggestion caught Mayer a little off-guard, which surprised me. But I hope that she and the rest of Google's leadership will take it to heart. In a time when it is true both that we need more computer science students and that public perception of CS as a discipline is down, we should be trumpeting the very cool stuff that computer scientists are doing at places like Google.

All in all, I enjoyed Mayer's talk quite a bit. We should try to create a similarly creativity-friendly environment for our students and ourselves. (And maybe work at a place like Google every so often!)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

March 19, 2008 12:40 AM

A Change in Direction at ChiliPLoP

As I mentioned in my last SIGCSE entry, I have moved from carefree Portland to Carefree, Arizona, for ChiliPLoP 2008. The elementary patterns group spent yesterday, its first day, working on the idea of integrating large datasets into the CS curriculum. After a few years of working on specific examples, both stand-alone and running, we started this year thinking about how CS students can work on real problems from many different domains. In the sciences, that often means larger data sets, but more important it means authentic data sets, and data sets that inspire students to go deeper. On the pedagogical side of the ledger, much of the challenge lies in finding and configuring data sets so that they can be used reliably and without unnecessary overhead placed on the adopting instructor.

This morning, we volunteered to listen to a presentation by the other hot topic group on its work from yesterday: a "green field" thought experiment designing an undergrad CS program outside of any constraints from the existing university structure. This group consists of Dave West and Pam Rostal, who presented an earlier version of this work at the OOPSLA 2005 Educators' Symposium, and Richard Gabriel, who brings to the discussion not only an academic background in CS and a career in computer science research and industry but also an MFA in poetry. Perhaps the key motivation for their hot topic is that most CS grads go on to be professional software developers or CS researchers, and that our current way of educating them doesn't do an ideal job of preparing grads for either career path.

Their proposal is much bigger than I can report here. They started by describing a three-dimensional characterization of different kinds of CS professionals, including such provocative and non-traditional labels as "creative builder", "imaginative researcher", and "ordinologist". The core of the proposal is the sort of competency-based curriculum that West and Rostal talked about at OOPSLA, but I might also describe it as studio-based, apprenticeship-based, and project-based. One of their more novel ideas is that students would learn everything they need for a liberal arts, undergraduate computer science education through their software projects -- including history, English, writing, math, and social science. For example, students might study the mathematics underlying a theorem prover while building an inference engine, study a period of history in order to build a zoomable timeline on the web for an instructional web site, or build a whole Second Life world set in ancient Rome.

In the course of our discussion, the devil's advocates in the room raised several challenging issues, most of which the presenters had anticipated. For example, how do the instructors (or mentors, as they called them) balance the busy work involved in, say, the students implementing some Second Life chunk with the content the students need to learn? Or how does the instructional environment ensure that students learn the intellectual process of, say, history, and not just impose a computer scientist's worldview on history? Anticipating these concerns does not mean that they have answers, only that they know the issues exist and will have to be addressed at some point. But this isn't the time for self-censorship... When trying to create something unlike anything we see around us, the bigger challenge is trying to let the mind imagine the new thing without prior restraint from the imperfect implementations we already know.

We all thought that this thought experiment was worth carrying forward, which is where the change of direction comes in. While our group will continue to work on the dataset idea from yesterday, we decided in the short term to throw our energies into the wild idea for reinventing CS education. The result will be two proposals to OOPSLA 2008: one an activity at the Educators' Symposium, and the other an Onward! paper. This will be my first time as part of a proposal to the Onward! track, which is both a cool feeling and an intimidating prospect. We'll see what happens.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

March 18, 2008 1:08 PM

SIGCSE Day 3 -- Expectation and Outcome

[A transcript of the SIGCSE 2008 conference: Table of Contents]

This was a tale of two sessions, two different expectations, and two results.

First came a session on Nifty Objects. After seeing this one, my sarcastic side envisioned a one-word report: Not. But then I came to my senses and realized that I wasn't being fair. A couple of the presentations were okay. The problem was in using the word "nifty" in the title of the session. As I alluded to in my post on this year's Nifty Assignments session, the word Nifty creates a brand expectation that even the originators of the panel have had a hard time living up to. I tried to recreate the nifty assignments magic at the OOPSLA'04 Educators' Symposium, but the assignments weren't quite nifty enough or the presentations quite dynamic enough to succeed wildly.

So to be fair, all I can really say about this panel is that expectations for its niftiness surpassed what it delivered. At this point, I'd say that using the word "nifty" in a session title is a recipe for overextended expectations.

Then came a session called "It Seemed Like a Good Idea at the Time". I commented positively on last year's panel of the same name, but I wasn't sure what to expect this year. First, I honestly don't remember many details from last year's, other than the guy whose intro students crashed the university network with a flood of Java-generated e-mail. Second, this year's panel didn't have a bunch of "big names" with a reputation for making a session a hit. So my expectations were not all that high. But, as I wrote last year, I like the idea of seeing an idea fail, so I gave it a chance. The session was a lot of fun, better than expected. Every presenter told a good story. After Dan Garcia went, I worried about the folks to follow, because he killed. The folks who followed more than held their own.

So this panel exceeded my expectations. Unfortunately, I can't tell you much about the content of the presentations. What made the session enjoyable was the storytelling, and I can't do the stories justice by retelling them here. I suppose that I should be able to give you a list of the lessons learned -- red flags to watch for in some big ideas, but I can't even do that. I haven't forgotten all the details yet (though ChiliPLoP is starting to fill up my limited memory)! I recall Dan Garcia talking about a bad experience with giving an exam with no time limits and Caroline Eastman, I think, talking about just how hard it turned out to be to alphabetize a list of names in the face of international standards.

The individual stories were about very specific instances. The one general lesson I draw from the panel these two years is that any idea, however well thought out, can go awry in the most unexpected ways. Be prepared for that to happen, be prepared to adapt in real-time, and be prepared to take advantage of the experience to help students learn what they can. And loosen up -- if your assignment or exam goes unexpectedly wrong, you probably haven't scarred your students for life or harmed them irreparably. They may even have learned something valuable they wouldn't otherwise have learned.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

March 17, 2008 4:18 PM

SIGCSE Day 1 -- Innovating our Image

[A transcript of the SIGCSE 2008 conference: Table of Contents]

Owen Astrachan and Peter Denning

I ended the first day by attending a two-man "panel" of NSF's inaugural CISE Distinguished Educational Fellows, Owen Astrachan and Peter Denning. The CDEF program gives these two an opportunity to work on wild ideas having the potential "to transform undergraduate computing education on a national scale, to meet the challenges and opportunities of a world where computing is essential to U.S. leadership and economic competitiveness across all sectors of society." A second-order effect of these fellowships is to have a soapbox (Denning) or bully pulpit (Astrachan) to push their ideas out into the world and, more important, to encourage the rest of the community to think about how to transform computing education, whether on the CDEF projects or on their own.

Owen went first. He opened by saying that the CDEF call for proposals had asked for "untested ideas", and if he couldn't propose one of those, well... I've mentioned his Problem-Based Learning project here before, as part of an ongoing discussion of teaching that is built around real problems, projects, and meaningful context. Owen's talk described some of the motivation for his project and the perceived benefits. I'll try to report his talk faithfully, but I suspect that some of my own thoughts will creep in.

When we talk to the world as we talk among ourselves -- about choices of programming language, IDE, operating systems, ... -- we tell the world that tools are the coin of our realm. When an engineer comes to us and asks for a course using MATLAB, we may be 100% technically correct to respond "Python is better than MATLAB. Here's why..." -- but we send the wrong message, a damaging message, which makes it the wrong response.

We can engage those folks better if we talk to them about the problems they need help solving. In many ways, that is how computing got its start, and it is time for us again to look outside our discipline for problems to solve, ideas to explore, and motivation to advance the discipline. Increasingly, outside our discipline may well be the place for us to look for learners, as fewer folks express interest in computing for computing's sake and as more non-computing folks look for ways to integrate computing into their own work. (Like scientists and even artists.)

That is one motivation for the PBL project. Another was the school Owen's children attend, where all learning is project-based. Students work on "real" problems that interest them, accumulating knowledge, skill, and experience as they work on increasingly challenging and open-ended projects. Problems have also driven certain kinds of scientific professional education at the university level for many decades.

For the first time in any of his talks I've seen, Owen took some time to distinguish problem-based learning from project-based learning. I didn't catch most of the specific differences, but Owen pointed us all to the book Facilitating Problem-Based Learning by Maggi Savin-Baden for its discussion of the differences. This is, of course, of great interest to me, as I have been thinking a lot in the last year or more about the role project-based courses can play in undergraduate CS. Now I know where to look for more to think about.

As a part of eating his own dog food, Owen is trying to ratchet up the level of dialogue in his own courses this year by developing assignments that are based on problems, not implementation techniques. One concrete manifestation of this change is shorter write-ups for assignments, which lay out only the problem to be solved and not so much of his own thinking about how students should think about the problem. He likened this to giving his students a life jacket, not a straitjacket. I struggle occasionally with a desire to tie my students' thinking up with my own, and so wish him well.

Where do we find problems for our CS majors to work on? Drawing explicitly from other disciplines is one fruitful way, and it helps CS students see how computing matters in the world. We can also draw from applications that young people see and use everyday, which has the potential to reach an even broader audience and requires less "backstory". This is something the elementary patterns folks have worked on at ChiliPLoP in recent years, for example in 2005. (Ironically, I am typing this in my room at the Spirit in the Desert Retreat Center, as I prepare for ChiliPLoP 2008 to begin in the morning. Outside my window is no longer rainy Portland but a remarkably cold Arizona desert.)

Owen said we only need to be alert to the possibilities. Take a look at TinyURL -- there are several projects and days of lecture there. Google the phrase dodger ball; why do we get those results? You can talk about a lot of computer science just by trying to reach an answer.

After telling us more about his NSF-funded project, Owen closed with some uplifting words. He hopes to build a community around the ideas of problem-based learning that will benefit from the energy and efforts of us all. Optimism is essential. Revolutionizing how we teach computing, and how others see computing, is a daunting task, but we can only solve problems if we try.

Denning took the stage next. He has long been driven by an interest in the great principles of computing, both as a way to understand our discipline better and as a way to communicate our discipline to others more effectively. His CDEF project focuses on the different "voices" of computing, the different ways that the world hears people in our discipline speak. In many ways, they correspond to the different hats that computing people wear in their professional lives -- or perhaps our different personalities in a collective dissociative identity disorder.

Denning identifies seven voices of computing: the programmer, the computational thinker, the user, the mathematician, the engineer, the scientist, and the "catalog". That last one was a mystery to us all until he explained it, when it became our greatest fear... The catalog voice speaks to students and outsiders in terms of the typical university course descriptions. These descriptions partition our discipline into artificial little chunks of wholly uninteresting text.

What makes these voices authentic? Denning answered the question in terms of concepts and practices. To set up his answer, he discussed three levels in the development of a person's understanding of a technology, from mystical ("it works") to operational (concrete models) to formal (abstractions). Our courses often present formal abstractions of an idea before students have had a chance to develop solid concrete models. We often speak in terms of formal abstraction to our colleagues from other disciplines. We would be more effective if instead we worked on their problems with them and helped them create concrete results that they can see and appreciate.

One advantage of this is that the computing folks are speaking the language of the problem, rather than the abstract language of algorithms and theory. Another is that it grounds the conversation in practices, rather than concepts. Academics like concepts, because they are clean and orderly, more than practices, which are messy and often admit no clean description. Denning asserts that voices are more authentic when grounded in practices, and that computing hurts itself whenever it grounds its speech in concepts.

His project also aims to create a community of people around his ideas. He mentioned something like a "Rebooting Computing" summit that will bring together folks interested in his CDEF vision and, more broadly, in inspiring a return to the magic and beauty of computing. Let's see what happens.

I heard several common threads running through Astrachan's and Denning's presentations. One is that we need to be more careful about how we talk about our discipline. Early on, Denning said that we should talk about what computing is and how we do it, and not how we think about things. We academics may care about that, but no one else does. Later, Owen said that we should talk about computational doing, not computational thinking. These both relate to the intersection of their projects, where solving real problems in practice is the focus.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 15, 2008 1:26 PM

Not an Example of Problem-Based Learning

Professor: Students, your next assignment is to implement an operating system in Scheme.

Student: But professor, I don't know Scheme.

Professor: That's your problem.

(At least Owen laughed.)


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

March 15, 2008 1:06 PM

SIGCSE Day 1 -- Nifty Assignments

[A transcript of the SIGCSE 2008 conference: Table of Contents]

Last year's Nifty Assignments at SIGCSE merited only a retro-chic reference in my blog. I thought that was unusual until I looked back and noticed that I mentioned it not at all in 2006 or 2005. I guess that's not too surprising. The first few years of the now-annual panel were full of excitement, as many of the best-known folks in the SIGCSE community shared some of their best assignments. These folks are also among the better teachers you'll find, and even when their assignments were only okay their presentations alone were worth seeing. As the years have passed, the excitement of the assignments and presentations alike has waned, even as the panel remains a must-see for many at the conference.

Still, each year I seem to find some nugget to take home with me. This year's panel actually offered a couple of neat little assignments, and one good idea.

The good idea came late in Cay Horstmann's presentation: have students build a small application using an open-source framework, and then critique the design of the framework. In courses where we want students to discuss design, too often we only read code and talk in the abstract. The best way to "get" a design is to live with it for a while. One small app may not be enough, but it's a start.

One of the niftier assignments was a twist on an old standard, Huffman coding, that made it accessible to first-semester students. Creating a Huffman code is usually reserved for a course in data structures and algorithms because the technique involves growing a tree of character sets, and the idea of trees -- let alone implementing them -- is considered by most people to be beyond CS1 students. McGuire and Murtagh take advantage of a nifty fact: If you are interested only in determining how many bits you need to encode a sequence, not in doing the encoding itself, then all you need to do is execute the loop that replaces the two smallest values in the collection with their sum. A simple linear structure suffices, and the code comes to resemble the selection part of a selection sort.
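The trick is easy to sketch. Here is a minimal version in Java -- my own sketch, not McGuire and Murtagh's code: each time we replace the two smallest counts with their sum, that sum is exactly the number of bits the merge contributes, because every symbol in the merged set gains one bit of code length.

```java
import java.util.ArrayList;
import java.util.List;

public class HuffmanBits {
    // Compute the number of bits a Huffman code needs for the given symbol
    // frequencies, without ever building the tree.
    static int huffmanBitCount(List<Integer> counts) {
        List<Integer> work = new ArrayList<>(counts);
        int totalBits = 0;
        while (work.size() > 1) {
            // linear scans for the two smallest -- like one pass of selection sort
            int a = work.remove(indexOfMin(work));
            int b = work.remove(indexOfMin(work));
            totalBits += a + b;   // every symbol in the merged set gains one bit
            work.add(a + b);
        }
        return totalBits;
    }

    static int indexOfMin(List<Integer> xs) {
        int best = 0;
        for (int i = 1; i < xs.size(); i++)
            if (xs.get(i) < xs.get(best)) best = i;
        return best;
    }

    public static void main(String[] args) {
        // frequencies 3, 2, 1 -> code lengths 1, 2, 2 -> 3*1 + 2*2 + 1*2 = 9 bits
        System.out.println(huffmanBitCount(List.of(3, 2, 1)));  // prints 9
    }
}
```

No trees anywhere -- just a list, a min-search, and a running total, which is what puts the idea within reach of CS1 students.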

This assignment gives you a way to talk to students about data compression and how different techniques give different compression rates. The other twist in this assignment is that McGuire and Murtagh apply the coding to images, not text strings. This is something I tried in my media computation CS1 in the fall of 2006, with both sounds and images. I liked the result and will probably try something like it the next time I teach CS1.

Catching Plagiarists image

The other assignment that grabbed my attention was Baker Franke's Catching Plagiarists. This assignment isn't all that much different from many of the common text processing tasks folks use in CS1, but it is framed in a way that students find irresistible: detecting plagiarism. I used to tell my intro students, "Copy and paste is your life", which always drew a few laughs. They knew just how true it was. With universal web access and the growth of Wikipedia, I think my claim is more true now than it was then, to the point that students think nothing of liberal re-use of others' words. This assignment gets students to think about just how easy it is to detect certain kinds of copying.

So, putting the task in a context that is meaningful and way relevant ups the niftiness factor by a notch or two. The fact that it can use data sets both small and large means that the students can run head-first into the idea that some algorithms use much more time or space than others, and that their program may not have enough of one or both of these resources to finish the job on an input that they consider tiny -- a megabyte or two. (Heck, that's not even enough space for a decent song.)

Another thing I liked about this assignment is that it is, by most standards, underspecified. You could tell students this little: "Write a program to find the common word sequences among a set of documents. Your input will be a set of plain-text documents and a number n, and your output will be a display showing the number of n-word sequences each document has in common with every other document in the set." Franke presented his assignment as requiring little prep, with a simple problem statement of this sort, so I was a little disappointed to see that his assignment write-up is four pages with a lot more verbiage. I think it would be neat to simply tell the students the motivating story and then give them the five-line assignment. After students have had a chance to think about their approach, I'd like to talk to them about possibilities and help them think through the design.

Then again, I have to cut Franke some slack. He is a high school instructor, so his students are even younger than mine. I'm encouraged to think that high school students anywhere are working on such a cool problem.
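The five-line version really is enough to get started. Here is one way a student might approach it in Java -- my own sketch, not Franke's solution: collect each document's n-word sequences into a set and intersect the sets pairwise.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class CatchingPlagiarists {
    // Return the set of n-word sequences (n-grams) in a document.
    static Set<String> nGrams(String document, int n) {
        String[] words = document.toLowerCase().split("\\s+");
        Set<String> grams = new HashSet<>();
        for (int i = 0; i + n <= words.length; i++)
            grams.add(String.join(" ", Arrays.copyOfRange(words, i, i + n)));
        return grams;
    }

    // Count the n-grams two documents have in common.
    static int sharedNGrams(String doc1, String doc2, int n) {
        Set<String> common = nGrams(doc1, n);
        common.retainAll(nGrams(doc2, n));   // set intersection
        return common.size();
    }

    public static void main(String[] args) {
        String a = "the quick brown fox jumps over the lazy dog";
        String b = "a quick brown fox jumps into the river";
        // shares "quick brown fox" and "brown fox jumps"
        System.out.println(sharedNGrams(a, b, 3));  // prints 2
    }
}
```

The interesting design conversation starts when the document set grows: a naive all-pairs intersection over large sets is exactly where students collide with the time and space limits mentioned above.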


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 14, 2008 3:36 PM

SIGCSE Day 1 -- The Mystery Problem

[A transcript of the SIGCSE 2008 conference: Table of Contents]

During the second session of the day, I bounced among several sessions, but the highlight was Stuart Reges talking about an interesting little problem -- not a problem he had, but a problem that appeared on the advanced placement exam in CS taken by high-school students. This problem stood out in an analysis of the data drawn from student performance on the 1988 exam. Yes, that is twenty years ago!

Donald Knuth is famous for saying that perhaps only 2% of people have the sort of mind needed to be a computer scientist. He actually wrote, "I conclude that roughly 2% of all people 'think algorithmically', in the sense that they can reason rapidly about algorithmic processes." But what is it that those 2% have that the rest do not? In his talk, Stuart read another passage from Knuth that speculates about one possible separator:

The other missing concept that seems to separate mathematicians from computer scientists is related to the 'assignment operation' :=, which changes values of quantities. More precisely, I would say that the missing concept is the dynamic notion of the state of a process. 'How did I get here? What is true now? What should happen next if I'm going to get to the end?' Changing states of affairs, or snapshots of a computation, seem to be intimately related to algorithms and algorithmic thinking.

In studying student performance on the 1988 AP exam, Reges found that performance on a small set of "powerhouse questions" was inordinately predictive of success on the exam as a whole, and of those five one stood out as most predictive. This question offers evidence in support of Knuth's speculation about "getting" assignment. Here it is:

If b is a Boolean variable, then the statement b := (b = false) has what effect?
  1. It causes a compile-time error message.
  2. It causes a run-time error message.
  3. It causes b to have value false regardless of its value just before the statement was executed.
  4. It always changes the value of b.
  5. It changes the value of b if and only if b had value true just before the statement was executed.

What a fun little question -- so simple, but with layers. It involves assignment, but also sequencing of operations, because the temporary result of (b = false) must be computed before assigning to b. (Do you know the answer?)
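If you want to check yourself, translating the statement into Java and tracing both possible starting values settles it (spoiler ahead):

```java
public class ToggleDemo {
    public static void main(String[] args) {
        // The Pascal statement b := (b = false), in Java syntax:
        boolean b = true;
        b = (b == false);       // right-hand side first: true == false -> false
        System.out.println(b);  // prints false

        b = (b == false);       // false == false -> true
        System.out.println(b);  // prints true
    }
}
```

Whichever value b starts with, the statement flips it -- the snapshot-of-state reasoning Knuth describes, in one line.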

You can read about the correlations and much more about the analysis in Stuart's paper and slides, which are available on this resource page. The full analysis may be interesting only to a subset of us, perhaps as few as 2%... I really enjoyed seeing the data and reading about how Stuart thought through the data. But here I'd like to think more about what this implies for how students reason, and how we teach.

This notion of state, the ability to take and manipulate "snapshots of a computation", does seem to be one of the necessary capabilities of students who succeed in computer science. With his speculation, Knuth is suggesting that how people think about computation matters. Stuart also quoted one of my heroes, Robert Floyd, who upon hearing an earlier version of this talk commented:

These questions seem to test whether a student has a model of computation; whether they can play computer in their head.

This is something folks in CS education think a lot about, but unfortunately we in the trenches teaching intro CS often don't apply what we know consistently or consciously. Whether we think about it or not, or whether we act on it or not, students almost certainly bring a naive computational model with them when they enter our courses. In the world of math and CS, we might refer to this as a naive operational semantics. How do variables work? What happens when an if statement executes? Or a loop? Or an assignment statement? I have read a few papers that investigate novice thinking about these issues, but I must confess to not having a very firm sense of what CS education researchers have studied and what they have learned.

I do have a sense that the physics education community has a more complete understanding of their students' naive (mis)understanding of the physical world and how to engage students there. (Unfortunately, doing that doesn't always help.)

Someone in the crowd suggested that we teach a specific operational semantics to students in CS1, as some authors do. That's a good idea complicated by the kinds of languages and programming models that we often teach. I think that we can do better just by remembering that our students have a naive computational model in their heads and trying to find out how they understand variables, selection, loops, and assignment statements.

Stuart gave a great example of how he does this. He now sometimes asks his students this question:

public static int mystery(int n) {
    int x = 0;
    while (n % 2 == 0) {
        // Point A
        n = n / 2;
        x++;
        // Point B
    }
    // Point C
    return x;
}

Is (n % 2 == 0) always true, never true, or sometimes true/sometimes false at points A, B and C?

Stuart reported that many of his students think (n % 2 == 0) is always true at Point B because it's inside the loop, and the while loop condition guarantees that the condition is always true inside the loop. One wonders what these students think about infinite loops.
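One way to confront that misconception is to let the program speak for itself. Here is an instrumented version of the method (my own sketch, not Stuart's) that reports the predicate at each point; running it on an input such as 12 makes the counterexample concrete.

```java
public class MysteryTrace {
    // The mystery method, instrumented to report (n % 2 == 0) at points A, B, C.
    static int mystery(int n) {
        int x = 0;
        while (n % 2 == 0) {
            System.out.println("A: " + (n % 2 == 0));  // always true: the guard just held
            n = n / 2;
            x++;
            System.out.println("B: " + (n % 2 == 0));  // sometimes true, sometimes false
        }
        System.out.println("C: " + (n % 2 == 0));      // always false: the guard just failed
        return x;
    }

    public static void main(String[] args) {
        // 12 -> 6 -> 3: at Point B the predicate is true once (n == 6)
        // and false once (n == 3), refuting "always true inside the loop"
        mystery(12);
    }
}
```

Seeing B print both true and false on a single run does more to repair the mental model than any amount of telling.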

If we understand what students think in such basic situations, we are much better positioned to help students debug their programs -- and to write programs in a more reliable way. One way to help students learn and enjoy more is to give them tools to succeed. And recognizing when and how students' existing thinking goes wrong is a prerequisite to that.

Plus, these are multiple-choice questions, which will make some students and professors even happier!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 14, 2008 10:55 AM

SIGCSE Day 1 -- Randy Pausch and Alice

[A transcript of the SIGCSE 2008 conference: Table of Contents]

Last September, the CS education community was abuzz with the sad news that Randy Pausch, creator of the Alice programming environment, had suffered a recurrence of his cancer and that his condition was terminal. Pausch approached his condition directly and with verve, delivering a last lecture that became an Internet phenomenon. Just google "lecture"; as of today, Pausch's is the second link returned.

Because Pausch, his team, and Alice have had such an effect on the CS education community, and not just the virtual reality community in which they started, the folks at SIGCSE gambled that his health would hold out long enough for him to accept the SIG's 2008 Award for Outstanding Contribution to Computer Science Education in person and deliver a plenary address. Alas, it did not. Pausch is relatively well but unable to travel cross country. In his stead, Dennis Cosgrove, lead project scientist on the Alice project, and Wanda Dann, one of the curriculum development leads on the project, gave a talk on the history of Pausch's work and on what's next with Alice 3.0. The theme of the talk was building bridges, from virtual reality research to cognitive psychology and then to the fine arts, and a parallel path to CS education.

I admire Cosgrove and Dann for taking on this task. It is impossible to top Pausch's now-famous last lecture, which nearly everyone has seen by now. (If you have not yet, you should. It's an inspiring 104 minutes.) I'll let the video speak for Pausch and only report some of the highlights from Cosgrove and Dann.

Pausch's work began like so many new assistant professors' work does: on the cheap. He wanted to start a virtual reality lab but didn't have the money to "do it right". So he launched a quest to do "VR on $5 a day". Alice itself began as a rapid prototyping language for VR systems and new interaction techniques. As his lab grew, Pausch realized that to do first-class VR, he needed to get into perceptual research, to learn how better to shape the user's experience.

This was the first bridge he built, to cognitive psychology. The unexpected big lesson that he learned was this: What you program is not what people see. I think the teacher in all of us recognizes this phenomenon.

Next came an internship at Disney Imagineering, a lifelong dream of his. There, he saw the power of getting artists and engineers to work together, not just in parallel on the same project. One of the big lessons he learned was that it's not easy to do. Someone has to work actively to keep artists and engineers working together, or they will separate into their own element. But the benefits of the collaboration are worth the work.

Upon his return to CMU, he designed a course called Building Virtual Worlds that became a campus phenomenon. Students came to view building their worlds as a performing art -- not from the perspective of the "user", but thinking about how an audience would respond. I think this shows that computer science students are more than just techies, and that, placed in the right conditions, they will respond with a much broader set of interests and behaviors.

In the last phase of his work, Pausch has been working more in CS education than in VR. In his notes on this talk, he wrote, "Our quest (which we did not even realize in the beginning) was to revolutionize the way computer programming is taught." So often, we start with one goal in mind and make discoveries elsewhere. Sometimes we get lost, and sometimes we just wander in an unexpected direction. I think many folks in CS education first viewed Alice as a way to teach non-majors, but increasingly folks realize that it may have a profound effect on how we teach -- and recruit -- majors. I was glad to be pointed in the direction of Pausch's student Caitlin Kelleher, whose PhD dissertation, "Motivating Programming: Using Storytelling to Make Computer Programming Attractive to Middle School Girls" is of great interest to me. (And not just as father to two girls!)

Cosgrove wrapped up his talk with a cartoon that seems to express Pausch's Think Big outlook on life. I won't try to show you the image (who needs another pre-cease-and-desist message from the same artist?), but will describe it: Two spiders have built a web across the bottom of the playground slide. One turns to the other and says, "If we pull this off, we will eat like kings." Pausch and his team have been weaving a web of Alice, and we may well reap big benefits.

Pausch's career teaches us one more thing. To accomplish big things, you need both a strong research result, in order to convince folks your idea might work, and you need strong personal connections, in order that funders will be able to trust you with their money and resources.

Thanks, Randy.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 13, 2008 7:22 PM

Notes on SIGCSE 2008: Table of Contents

This set of entries records my experiences at SIGCSE 2008, in Portland, Oregon, March 12-16. I'll update it as I post new pieces about the conference. One of those entries will explain why my posts on SIGCSE may come more slowly than they might.

Primary entries:

Ancillary entries:


Posted by Eugene Wallingford | Permalink | Categories: Computing, Running, Software Development, Teaching and Learning

February 27, 2008 5:48 PM

We Are Not Alone

In case you think me odd in my recent interest in the idea of computer science for all students, even non-majors, check out an interview with Andries van Dam in the current issue of The Chronicle of Higher Education on-line:

Q: What do you hope to bring to computer-science education?

A: We'll try to figure out "computing in the broad sense" -- not just computer-science education, but computing education in other fields as well. What should high-school students know about computation? What should college students know about computation? I think these are all questions we're going to ask.

van Dam is a CS professor at Brown University and the current chair of the Computing Research Association's education committee. I look forward to seeing what the CRA can help the discipline accomplish in this space.

Do keep in mind that when I say things like "computer science for all students", I mean this for some yet-undetermined value of "computer science". I certainly don't think that current CS curricula or even intro courses are suited for helping all university or high school students learn the power of computing. (Heck, I'm not even sure that most of our intro courses are the best way to teach our majors.)

That's one of the concerns that I have with the proposed Computing and the Arts major at Yale that I mentioned last time. It's not at all clear to me that a combination of courses from the existing CS and art majors is what is really needed to educate a new audience of intellectuals or professionals empowered to use computation in a new way. Then again, I do not know what such a new major or its courses might look like, so this experiment may be a good way to get started. But the faculty there -- and here, and on the CRA education committee, and everywhere else -- should be on the look-out for how we can best prepare an educated populace, one that is computation-savvy for a world in which computation is everywhere.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

February 26, 2008 6:00 PM

A Few Thoughts on Artists as Programmers

I've written so much about scientists as programmers, I'm a little disappointed that I haven't made time to write more about artists as programmers in response to Ira Greenberg's visit last month. The reason is probably two-pronged. First, I usually have less frequent interaction with artists, especially artists with this sort of mentality. This month, I did give a talk with an artistic connection to an audience of art students, but even that wasn't enough to prime the pump. That can be attributed to the second prong, which is teaching five-week courses in languages I have never taught before -- one of which, PHP, I've never even done much programming in. I've been busy preparing course materials and learning.

Before I lose all track of the artists-as-programmers thread for now, let me say a few things that I still have in waiting.

Processing is really just a Java IDE. I don't mean that in a dismissive sense; it's very useful and provides some neat tools to hide the details of Java -- including classes and the dread "public static void main" -- from programmers who don't care. But there is not all that much to it in a technical sense, which means that CS folks don't need to obsess about whether they are using it or not.

Keller McBride's color spray artwork

For example, you can do many of the same things in JES, the Jython environment created for Guzdial's media computation stuff. When I taught media computation using Erickson and Guzdial's Java materials, I had my students implement an interpreter for the simplest of graphics languages and then asked them to show off their program with a piece of art produced with the language. One result was the image to the right, produced by freshman Keller McBride using a program processed by his own interpreter.

During his talk, Greenberg mentioned that he had a different take on the idea of web "usability". Later I commented that I was glad he had said that, because I found that his website was a little bit funky. His response was interesting in a way that meshes with some of the things I occasionally say about computing as a new paradigm for expressing ideas. (This is not an original idea, of course; Alan Kay has been trying to help us understand this for forty years.)

Greenberg doesn't see computation only as an extension of the engineering metaphor that has defined computing in the age of electronics; he sees it as the "dawn of a new age". When we think of computation in the engineering context, issues such as usability and ergonomics become a natural focus. But in this new age, computing can mean and be something different:

Where I want my toaster to "disappear" and simply render perfectly cooked bread, I don't want that same experience when I compute--especially since I don't often have an initial goal/purpose.

He mentioned, too, that his ideas are not completely settled in this area, but I don't think that anyone has a complete handle on what the new age of computing really means. It sounds as if his ideas are as well formed as most anyone's, and I'm excited when I hear what non-CS people think in this space.

Finally, when it comes to teaching art and computer science together, some schools are already working in that direction. For example, faculty at Yale recently announced that they are putting together a major in computing and the arts. I am not sure what to think about their proposal, which aims to be "rigorous" by requiring students to take existing courses in the arts and computer science, rather than courses created especially for the major. That is probably a good idea for some audiences, but what about artists who don't want a full CS experience? Do they need the same technical depth as your average CS student? Somehow, I don't think so. A new kind of discipline may well require a new kind of major. But it's neat that someone is taking steps in this direction. We will probably learn something useful from their experience.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

February 24, 2008 12:48 PM

Getting Lost

While catching up on some work at the office yesterday -- a rare Saturday indeed -- I listened to Peter Turchi's OOPSLA 2007 keynote address, available from the conference podcast page. Turchi is a writer with whom conference chair Richard Gabriel studied while pursuing his MFA at Warren Wilson College. I would not put this talk in the same class as Robert Hass's OOPSLA 2005 keynote, but perhaps that has more to do with my listening to an audio recording of it and not being there in the moment. Still, I found it to be worth listening to as Turchi encouraged us to "get lost" when we want to create. We usually think of getting lost as something that happens to us when we are trying to get somewhere else. That makes getting lost something we wish wouldn't happen at all. But when we get lost in a new land inside our minds, we discover something new that we could not have seen before, at least not in the same way.

As I listened, I heard three ideas that captured much of the essence of Turchi's keynote. First was that we should strive to avoid preconception. This can be tough to do, because ultimately it means that we must work without knowing what is good or bad! The notions of good and bad are themselves preconceptions. They are valuable to scientists and engineers as they polish up a solution, but they often are impediments to discovering or creating a solution in the first place.

Second was the warning that a failure to get lost is a failure of imagination. When we work deeply in an area for a while, we sometimes feel as if we can't see anything new and creative because we know and understand the landscape so well. We have become "experts", which isn't always as dandy a status as it may seem. It limits what we see. In such times, we need to step off the easy path and exercise our imaginations in a new way. What must I do in order to see something new?

This leads to the third theme I pulled from Turchi's talk: getting lost takes work and preparation. When we get stuck, we have to work to imagine our way out of the rut. For the creative person, though, it's about more than getting out of a rut. The creative person needs to get lost in a new place all the time, in order to see something new. For many of us, getting lost may seem like something that just happens, but the person who wants to be lost has to prepare for it.

Turchi mentioned Robert Louis Stevenson as someone with a particular appreciation for "the happy accident that planning can produce". But artists are not the only folks who benefit from these happy accidents or who should work to produce the conditions in which they can occur. Scientific research operates on a similar plane. I am reminded again of Robert Root-Bernstein's ideas for actively engaging the unexpected. Writers can't leave getting lost to chance, and neither can scientists.

Turchi comes from the world of writing, not the world of science. Do his ideas apply to the computer scientist's form of writing, programming? I think so. A couple of years ago, I described a structured form of getting lost called air-drop programming, which adventurous programmers use to learn a legacy code base. One can use the same idea to learn a new framework or API, or even to learn a new programming language. Cut all ties to the familiar, jump right in, and see what you learn!

What about teaching? Yes. A colleague stopped by my office late last week to describe a great day of class in which he had covered almost none of what he had planned. A student had asked a question whose answer led to another, and then another, and pretty soon the class was deep in a discussion that was as valuable as, or more valuable than, the planned activities. My colleague couldn't have planned this unexpectedly good discussion, but his and the class's work put them in a position where it could happen. Of course, unexpected exploration takes time... When will they cover all the material of the course? I suspect the students will be just fine as they make adjustments downstream this semester.

What about running? Well, of course. The topic of air-drop programming came up during a conversation about a general tourist pattern for learning a new town. Running in a new town is a great way to learn the lay of the land. Sometimes I have to work not to remember landmarks along the way, so that I can see new things on my way back to the hotel. As I wrote after a glorious morning run at ChiliPLoP three years ago, sometimes you run to get from Point A to Point B; sometimes, you should just run. That applies to your hometown, too. I once read about an elite women's runner who recommended being dropped off far from your usual running routes and working your way back home through unfamiliar streets and terrain. I've done something like this myself, though not often enough, and it is a great way to revitalize my running whenever the trails start to look like the same old same old.

It seems that getting lost is a universal pattern, which made it a perfect topic for an OOPSLA keynote talk.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Patterns, Running, Software Development, Teaching and Learning

February 23, 2008 3:41 PM

Door No. 2

My dean recently distributed copies of "Behind Door No. 2: A New Paradigm for Undergraduate Science Education", from the 2006 annual report of Research Corporation. It says many of the things we have all heard about revitalizing science education, summarizing some of the challenges and ideas that people have tried. The report speaks in terms of the traditional sciences, but most of what it says applies well to computer science.

I don't think I learned all that much new from this report, but it was nice to see a relatively concise summary of these issues. What I enjoyed most were some of the examples and quotes from respected science researchers, such as physics Nobel laureate Carl Wieman. One of the challenges that universities face in re-forming how they teach CS, math, and science is that research faculty are often resistant to changing how they teach or think about their classrooms. (Remember, we have material to cover.) These faculty are often tenured full professors who wield significant power in the department over curriculum and program content.

At a comprehensive university such as mine, the problem can be accentuated by the fact that even the research faculty teach a full load of undergraduate courses! At the bigger research schools, there are often faculty and instructors who focus almost entirely on undergraduate instruction and especially the courses in the undergraduate core and for non-majors. The research faculty, who may not place too much confidence in "all that educational mumbo-jumbo", don't have as much contact with undergrads and non-majors.

I also enjoyed some of the passages that close the article. First, Bruce Alberts suggests that we in the universities worry about the mote in our own eye:

I used to blame all the K-12 people for everything, but I think we [in higher education] need to take a lot of responsibility. ... K-12 teachers who teach science learned it first from science courses in college. You really want to be able to start with school teachers who already understand good science teaching, ...

Leon Lederman points to the central role that science plays in the modern world:

Once upon a time the knowledge of Latin and Greek was essential to being educated, but that's no longer true. Everywhere you look in modern society in the 21st century, science plays a role that's crucial. It's hard to think of any policy decision on the national level that doesn't have some important scientific criteria that should weigh in on the decisions you make.

He probably wasn't thinking of computer science, but when I think such thoughts I surely am.

Finally, Dudley Herschbach reminds us that the need for better science education is grounded in more than just the need for economic development. We owe our students and citizens more:

So often the science education issue is put in terms of workforce needs and competitiveness. Of course, that's a factor. But for me it's even more fundamental. How can you have a democracy if you don't have literacy? Without scientific literacy, citizens don't know what to believe.... It is so sad that in the world's richest country, a country that prides itself on being a leader in science and technology, we have a large fraction of the population that might as well live in the 19th, 18th or 17th century. They aren't getting to live in the 21st century except in the superficial way of benefiting from all the gadgets. But they don't have any sense of the human adventure...

That is an interesting stance: much of our population doesn't live in the 21st century, because they don't understand the science that defines their world.

Yesterday, I represented our department at a recruitment open house on campus. One mom pulled her high-school senior over to the table where computer science and chemistry stood and asked him, "Have you considered one of these majors?" He said, "I don't like science." Too many students graduate high school feeling that way, and it is a tragedy. It's bad for the future of technology; it's bad for the future of our economy. And they are missing out on the world they live in. I tried to share the thrill, but I don't think I'll see him in class next fall.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

February 21, 2008 7:37 PM

Father, Forgive Me

... though I can't in good conscience say, "for I know not what I do."

1. Write a script named simple-interest.php that defines a function to compute the simple interest on an amount, given the amount, annual interest rate, and number of months. Your script should apply the function to its command-line arguments, which are those same values.

The rest of my PHP class's first homework is more reasonable, with a couple of problems repeated from the bash class's first assignment as a way for students to get a sense of trade-offs between shell programming and scripting in a more general-purpose language.
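The trade-off shows up even in this tiny problem: a general-purpose language like PHP gives you floating-point arithmetic for free, while bash has to lean on another tool. A hypothetical sketch of the shell side (the file name and function are mine, not the assignment's), delegating the math to awk:

```shell
#!/bin/sh
# Hypothetical sketch of the simple-interest exercise as a shell script.
# The shell has no floating-point arithmetic, so awk does the math:
# interest = amount * (annual_rate / 100) * (months / 12)
# Usage: ./simple-interest.sh amount annual_rate months
simple_interest() {
    awk -v p="$1" -v r="$2" -v m="$3" \
        'BEGIN { printf "%.2f\n", p * (r / 100) * (m / 12) }'
}

if [ $# -eq 3 ]; then
    simple_interest "$@"
fi
```

Running `./simple-interest.sh 1000 5 12` computes the interest on $1000 at 5% per year for 12 months. The PHP version can do the same arithmetic natively, which is exactly the kind of difference the repeated problems are meant to surface.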

Still, I had to squeeze my eyes shut tight to hit the key that published this assignment. I keep telling myself that this is just an ice-breaking assignment for students who have never written any PHP before, or learned how to access command-line arguments, or convert strings to integers. That such a simple, context-free task is a nice way for them to succeed on their first effort. That our future assignments will be engaging, challenging, worthwhile. But... Ick.

The first time I teach a course, there always seem to be clunkers like this. Starting from scratch, relying on textbooks for inspiration, and working under time pressure all work against my goal of making everything students do in the class worth their time and energy. I suppose that problems such as this one are my opportunities to improve next time out.

My despair notwithstanding, I suspect that many students are happy enough to have at least one problem that is a gift, however uninteresting it may be. Maybe I can find solace in that while I'm working on exercises for my next problem set.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 19, 2008 5:11 PM

Do We Need Folks With CS Degrees?

Are all the open jobs in computing that we keep hearing about going unfilled?

Actually -- they're not. Companies do fill those jobs. They fill them with less expensive workers, without computing degrees, and then train them to program.

Mark Guzdial is concerned that some American CEOs and legislators are unconcerned -- "So? Where's the problem?" -- and wonders how we make the case that degrees in CS matter.

I wonder if the US would be better off if we addressed a shortage of medical doctors by starting with less expensive workers, without medical degrees, and then trained them to practice medicine? We currently do face a shortage of medical professionals willing to practice in rural and underprivileged areas.

The analogy is not a perfect one, of course. A fair amount of the software we produce in the world is life-critical, but a lot is not. Still, I'm not sure we want to live in a world where our financial, commercial, communication, educational, and entertainment systems depend on software to run, and that software is written by folks with a shallow understanding of software and computing more generally.

Maybe an analogy to the law or education is more on-point. For example, would the US be better off if we addressed a shortage of lawyers or teachers by starting with less expensive workers, without degrees in those areas, and then trained them? A shortage of lawyers -- ha! But there is indeed a critical shortage of teachers in many disciplines looming in the near future, especially in math and science. This might lead to an interesting conversation, because many folks advocate loosening the restrictions on professional training for folks who teach in our K-12 classrooms.

I do not mean to say that folks who are trained "on the job" to write software necessarily have a shallow understanding of software or programming. Much valuable learning occurs on the job, and there are many folks who believe strongly in a craftsmanship approach to developing developers. My friend and colleague Ken Auer built his company on the model of software apprenticeship. I think that our university system should adopt more of a project-based and apprenticeship-based approach to educating software developers. But I wonder about the practicality of a system that develops all of its programmers on the job. Maybe my view is colored by self-preservation, but I think there is an important role for university computing education.

Speaking of practicality, perhaps the best way to speak to the CEOs and legislators who doubt the value of academic CS degrees is in their language of supply and productivity. First, decentralized apprenticeship programs are probably how people really became programmers, but they operate on a small scale. A university program is able to operate on a slightly larger scale, producing more folks who are ready for apprenticeship in industry sooner than industry can grow them from scratch. Second, the best-prepared folks coming out of university programs are much more productive than the folks being retrained, at least while the brightest trainees catch up. That lack of productivity is at best an opportunity cost, and at worst an invitation for another company to eat your lunch.

Of course, I also think that in the future more and more programmers will be scientists and engineers who have learned how to program. I'm inclined to think that these folks and the software world will be better off being educated by folks with a deep understanding of computing. Artists, too. And not only for immediately obvious economic reasons.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

February 14, 2008 8:01 PM

Agile Thoughts While Preparing My Course

That is the title of a blog post that I planned to write five or six weeks ago. Here it is over a month later, and the course just ended. Well, it ended, and it begins again on Tuesday. So now I am thinking agile thoughts as I think back over my course, and still thinking agile thoughts as I prepare for the course. Let me explain.

810:151 is a different sort of course for us. We try to expose students to several different programming languages in the course of their undergraduate study. Even so, it is common for students to graduate thinking, "I wish I'd learned X." Sometimes X is a relatively new language, such as Scala or Groovy. Sometimes it's a language that is now more mainstream but has not yet made it into one of our courses, such as Ruby. Sometimes it is even a language we do emphasize in a course, such as Scheme, but in a course they didn't have an opportunity to take. We always have told students who express this wish that they should be well-equipped to learn a new language on their own, and they are. But...

While taking a full load of courses and working part-time (or taking a part-time load of courses and working full-time), it is often hard for students to set aside time to learn something completely optional. People talk about "the real world" as if it is tougher than being in school, but students face a lot of competing demands for their time. Besides, isn't it often more fun to learn something from an expert who can help you learn the tricks of the trade faster and miss a few of the potholes that lie along the way?

I sometimes think of this course, which we call "Topics in Programming Languages", as a "make time" course for students who want to learn a new language, or perhaps a broader topic related to language, but who want or need the incentive that a regular course, assigned readings, and graded work provides. The support provided by the prof's guidance also is a good safety net for the less seasoned and less confident. For these folks, one of the desired outcomes is for them to realize, hey, I really can learn a language on my own.

We usually offer each section of 810:151 as a 1-credit course. The content reason is that the course has the relatively straightforward purpose of teaching a single language, without a lot of fluff. The practical purpose is that we can offer three 1-credit courses in place of a single 3-credit course. Rather than meet one hour per week for the entire semester, the course can meet 3 hours per week for 5 weeks. This works nicely for students who want to take all three, as they look and feel like a regular course. It also works nicely for students who choose to take only one or two of the courses, as they need not commit an entire semester's worth of attention to them.

This is my first semester assigned (by me) to teach this odd three-headed course. The topics this semester are Unix shell programming in bash, PHP, and Ruby.

I've been thinking of the three courses as three 5-week iterations. Though the topics of the three courses are different, they share a lot in terms of being focused on learning a language in five weeks. How much material can I cover in a course? How can students best use their time? How can I best evaluate their work and provide feedback? Teaching three iterations of a similar course in one semester is so much better for me when it comes to taking what I learn and trying to improve the next offering. With an ordinary course taught every semester, I would have to wait until next fall to begin implementing improvements; with an ordinary course taught on a three-semester rotation, I would have to wait until Fall 2009!

I opted to dispense with all examinations and evaluate students solely in terms of the bash scripts they wrote. The goal of the course is for students to learn how to program in bash, so that is where I wanted the students' attention to be. One side effect of this decision is that the course is not really over yet; students will work on their final problem set in the coming week, and I'll have to grade it next Friday. The problem sets have consisted mostly of small-ish scripts that exercise the features of bash as we encounter them. We did have one larger task that students solved in three parts over the course of the semester, a processor for a Markdown-like mark-up language that produces HTML. This project scratched one of my own itches, as I like to use simple text-based, e-mail-friendly mark-up, and now I have a simple bash script that does the job!
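The general shape of such a processor is easy to sketch, even if my script handles much more. Here is a hypothetical, stripped-down stand-in (the function name and the two rewrite rules are mine, not the class project's): a filter that reads marked-up text on stdin and writes HTML on stdout.

```shell
# Hypothetical mini-version of a Markdown-like mark-up processor.
# Reads marked-up text on stdin, writes HTML on stdout.
# Rule 1: a line starting with "# " becomes an <h1> element.
# Rule 2: text between asterisks becomes an <em> element.
markup_to_html() {
    sed -e 's|^# \(.*\)|<h1>\1</h1>|' \
        -e 's|\*\([^*]*\)\*|<em>\1</em>|g'
}
```

For example, `echo '# A *small* demo' | markup_to_html` produces `<h1>A <em>small</em> demo</h1>`. Because the processor is just another text filter, it composes naturally with the rest of the Unix toolbox.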

One thing I did not do this semester that I thought I might, and which perhaps I should, is to work with them through a non-trivial shell script or two. I had thought that the fifth week would be devoted to examining and extending larger scripts, but I kept uncovering more techniques and ideas that I wanted them to see. Perhaps I could use a real script as a primary source for learning the many features of bash, instead of building their skills from the bottom up. That is how many of them have come to know what little they know about shell scripting, by confronting a non-trivial script for building or configuring an open-source application that interests them. To be honest, though, I think that the bottom-up style that we used this semester may prepare them better for digging into a more complex script than starting with a large program first. This is one of the issues I hope to gain some insight into from student feedback on the course.

Making this "short iterations" experiment more interesting is the fact that some students will be in all three of the iterations, but there will be a significant turnover in the class rosters. The client base evolves, but there should be enough overlap that I can get some comparative feedback as I try to implement improvements.

I tried to follow a few other agile principles as I started teaching this new prep. I tend to start each new course with a template from my past courses, from the way I organize sessions and lecture notes to the look-and-feel of the web site. This semester, I tried to maintain a YAGNI mindset: start as simple as I can, and add new elements only as I use them -- not when I think I need them tomorrow. By and large I have succeeded in this regard. My web site is bare-bones in comparison to my past sites, and lecture notes are plain text records of in-class activities and short messages to remind me and the students of what we discussed. I saved a lot of time not trying to produce attractive and complete lecture notes in HTML. Maybe some day, but this time around I just didn't need them.

One agile practice that I didn't think to encourage soon enough was unit testing. Shame on me. Some students suffered far more than I from this oversight. Many did a substandard job of testing their scripts, in part I think because they were biting off too much of a task to start. Unix pipelines are almost perfectly suited to unit testing, as one can test the heck out of each stage in isolation, growing the pipeline one stage at a time until the task is solved. The fact that each component reads from stdin and writes to stdout means that a later stage can be tested independently of the stages that precede it, before we add it to the end of the pipeline.
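To make the idea concrete, here is a hypothetical word-frequency pipeline (the stage names are mine, not a course assignment). Each stage is a filter from stdin to stdout, so each can be fed a known input and checked on its own before being appended to the growing pipeline:

```shell
# Hypothetical pipeline, grown one testable stage at a time.
to_words()    { tr -s '[:space:]' '\n'; }      # one word per line
downcase()    { tr '[:upper:]' '[:lower:]'; }  # normalize case
count_words() { sort | uniq -c | sort -rn; }   # frequencies, descending

# Each stage can be "unit tested" in isolation with known input:
#   printf 'A b\nC\n' | to_words    # expect three lines
#   echo 'HeLLo' | downcase         # expect "hello"

# Only after each stage checks out does it join the pipeline:
#   to_words < corpus.txt | downcase | count_words
```

Even these ad hoc, stage-by-stage checks beat debugging the whole pipeline at once.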

For whatever reason, it didn't occur to me that there would exist a shUnit. It's too late for me to use it this semester, but I'll be sure to put phpUnit to good use in the next five weeks. And I've always known that I would use a unit testing framework such as this one for Ruby. Heck, we may even roll our own as we learn the language!

I've really enjoyed teaching a course with the Unix philosophy at the forefront of our minds. Simple components, the universal interface of plain text streams, and a mandate to make tools that work together -- the result is an amazingly "agile" programming environment. The best way to help students see the value of agile practices is to let them live in an environment where that is natural, and let them feel the difference from the programming environments in which they find themselves at other times. I just hope that my course did the mindset justice.

The tool-builder philosophy that pervaded this course reminded me of this passage from Jason Marshall's Something to Say:

There's an old saying, "A good craftsman never blames his tools." Many people take this to mean "Don't make excuses," or even, "Real men don't whine when their tools break." But I take it to mean, "A good craftsperson does not abide inferior tools."

A good craftsman never blames his tools, because if his tools are blameworthy, he finds better tools. I associate this idea more directly with the pragmatic programmers than with the agile community, but it seems woven into the fabric of the agile approaches. The Agile Manifesto declares that we value "individuals and interactions over processes and tools", but I do not take this to mean that tools don't matter. I think it means that we should use tools (and processes) that empower us to focus our energy on people and interactions. We should let programs do what they do best so that we programmers can do what we do best. That's why we have unit testing frameworks, refactoring tools, automatic build tools, and the like. It's also why Unix is far more human-affirming than it is sometimes given credit for.

As I told students to close the lecture notes for this course: Don't settle for inferior tools. You are a computer programmer. Make the tools that make you better.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 07, 2008 10:17 PM

Using the Writing Metaphor in Both Directions

I recently came across a SIGCSE paper from 1991 called Integrating Writing into Computer Science Courses, by Linda Hutz Pesante, who at the time was affiliated with the Software Engineering Institute at Carnegie Mellon. This paper describes both content and technique for teaching writing within a CS program, a topic that cycles back into the CS education community's radar every few years. (CS academics know that it is important even during trough periods, but their attention is on some other, also often cyclic, attention-getter.)

What caught my attention about Pesante's paper is that she tries to help software engineers apply their engineering expertise to the task of writing technical prose. One of her other publications, a video, even has the enticing title, Applying Software Engineering Skills to Writing. I so often think about applying ideas from other disciplines to programming that the thought of applying ideas from software development to another discipline sounded like a new twist.

Pesante's advice on how to teach writing reflects common practice in teaching software development:

  • Motivate students so that they know what to expect.
  • Attend to the writing process, as well as the final product.
  • Use analogies to programming, such as debugging and code reviews.
  • Have students practice, and give them feedback.

Given Pesante's affiliation with the SEI, her suggestions for what to teach about writing made a bigger impression on me. The software engineering community certainly embraces a broad range of development "methodologies" and styles but, underlying its acceptance even of iterative methods, there seems to be a strong emphasis on planning and "getting things right" the first time.

Her content advice relies on the notion that "writing and software development have something in common", from user analysis through the writing itself to evaluation. As such, a novice writer can probably learn a lot from how programmers write code. Programmers like to be creative and explore when they write, too, but they also know that thinking about the development process can add structure to a potentially unbounded activity. They use tools to help them manage their growing base of documents and to track revisions over time. That part of the engineer's mindset can come in handy for writers. For the programmer who already has that mindset, applying it to the perhaps scary task of writing prose can put the inexperienced writer at ease.

Pesante enumerates a few other key content points:

  • Writing takes place in a context.
  • The writing process is neither linear nor algorithmic.
  • The writing process is iterative.
  • Correct is not the same as effective.

The middle two of these especially feel more agile than the typical software engineering discussion. I think that the agile software community's emphasis on short iterations with frequent releases of working software to the client also matches quite nicely the last of the bullets. It's all too easy to do a good job of analysis and planning, produce a piece of software that is correct by the standards of the analysis and plan, and find that it does not meet the user's needs effectively. With user feedback every few weeks, the development team has many more opportunities to ensure that software stays on a trajectory toward effectiveness.

Most people readily accept the idea that creative writing is iterative, non-linear, and exploratory. But I have heard many technical writers and other writers of non-creative prose say that their writing also has these features -- that they often do not know what they have to say, or what their ultimate product will be, until they write it. My experience as a writer, however limited, supports the notion that almost all writing is exploratory, even when writing something so pedestrian as a help sheet for students using an IDE or other piece of software.

As a result, I am quite open to the notion that programming -- what many view as the creating of an "engineered" artifact -- is also iterative, non-linear, and exploratory. This is true, I think, not only for what we actually call "exploratory programming" but also for more ordinary programming tasks, where both the client and I think we have a pretty good handle on what the resulting program should do and look like. Often, over the course of writing an ever-larger program, our individual and shared understanding of the program evolves. Certainly my idea of the internal shape of the program -- the structure, the components, the code itself -- changes as the program grows. So I think that there is a lot to be learned going both directions in the writing:programming metaphor.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 06, 2008 2:37 PM

Passion is Job 1

This half-hour interview with Pragmatic Dave Thomas starts off interesting enough to listen to, and about halfway through it becomes even more compelling, especially for academics. Interviewer Jim Coplien asks what advice Thomas would give to CS academics. Thomas has long been an industry trainer, and a good one. (I learned some Ruby from him and fellow Prag Andy Hunt at an OOPSLA workshop in 2001.) But he has not been a CS academic since leaving graduate school a couple of decades ago. Still, his answer is marvelous:

... one thing I would say that you have to be very careful of, if you are an academic, is that you are dealing with a very delicate product in your students, and ultimately when a student gets into the industry it is not how well they can analyze a particular function or the depth of knowledge in this particular architecture, it is their passion that drives them forward. And as an academic I think you have a responsibility not to squash that passion, I think you have to find ways of nurturing it.

I can't instill passion in someone, but I can kill someone's passion. Worse, I can diminish someone's passion in small steps: in how I speak about the discipline, and in what I expect of them. When writing comments is more valuable than writing code, I dampen passion. When the form of a program matters more than the substance, I dampen passion.

Unfortunately, I think that our K-12 system kills the passion of many students. This is not a criticism of teachers, many of whom do wonderful, inspiring jobs under less than ideal conditions. The problem is more a product of the structure of our schools and our classrooms. At the university, we'd like to think that we begin to restore passion, and we do have more opportunities to do so. But we need to be honest with ourselves and stamp out the spirit-killing parts of our courses, curricula, and degree programs. I cannot instill passion, but I can stop killing passion. And I can help it grow.

Thomas didn't have much in the way of concrete advice for how to nurture passion, but he did say that teachers need to motivate what they teach and what they expect students to do. Yes; context matters. He also suggested that we encourage students to be well-rounded and try to attract well-rounded folks to the discipline. Yes; the more interesting ideas we have in our heads and in our classrooms, the better we can learn, and the better we can program.

As an aside, Thomas talks some about how he recently took up learning to play the piano, on the occasion of turning fifty. Early in this decade, I, too, began to study piano as an adult. In the year or so before I began writing this blog, I had let myself become too busy to practice and so fell away. I hope to make time to return to my study some day, for all the reasons that Thomas mentions and more.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 05, 2008 8:02 PM

What Steve Martin Can Teach Us

Steve Martin, on the beach

... about teaching, and about software development.

The February 2008 issue of Smithsonian Magazine contains an article called Being Funny, by comedian, writer, and actor Steve Martin, that has received a fair amount of discussion on-line already. When I read it this weekend, I was surprised by how similar some of the lessons Martin learned as he grew into a professional comedian are to lessons that software developers and teachers learn. I enjoyed being reminded of them.

I gave myself a rule [for dealing with the audience]. Never let them know I was bombing: this is funny, you just haven't gotten it yet.

This is about not showing doubt. Now, I think it's essential for an instructor to be honest -- something I wrote about a while back, in the context of comedy as well. So I don't mean that I as teacher should try to bluff my way through something I don't know or know but have botched. Martin is talking about the audience getting it, and the doubt that enters my mind when a classroom of students seem not to. I experience this occasionally when I teach a course like Programming Languages to a new group of students. Some classes don't warm to me or the material in quite the same way as others, but I know that the material I'm teaching and the basic approach I am using are sound. When this semester's crowd takes a while to catch on -- or if they are openly skeptical of the material or approach -- it's essential that I remain confident. Sure, I'll make adjustments to the presentation to account for my current students' needs, but I should remain steadfast: This is good stuff; they'll get it soon.

I'm not sure this applies as well for software developers. Often, when my users don't get it yet and I feel compelled to bull on through, I have gone beyond the requirements, at least as my users understand them. In those cases, it's usually better to say to myself "You aren't gonna need it" and simplify.

Everything was learned in practice, and the lonely road, with no critical eyes watching, was the place to dig up my boldest, or dumbest, ideas and put them onstage.

This is hard to do as a teacher. I don't get to go on the road to a hundred musty nightclubs and try out my new lecture on continuation-passing style, as Martin did with his bits. He was practicing on small stages in order to prepare for the big ones, such as The Tonight Show. It's next to impossible for me to try a lecture out on a representative audience that isn't my regular audience: my students. I can practice my research talks before a small local audience before taking them to a conference, but the number of repetitions available to me is rather small even in that scenario. For class sessions, I pretty much have to try them out live, see what happens, and feed the results back into my next performance.

Fortunately, I'm not often trying radically new ideas out in a lecture, so fewer, live repetitions may suffice. I have tried teaching methods that are quite different from the norm for me and for my students, such as a Software Systems course or a gen-ed capstone course with no lecture and all in-class exercises. In those scenarios, I had to follow the advice discussed above: This is going to work; they just haven't gotten it yet...

This piece of advice applies perfectly to programming. The lonely road is my office or my room at home, where I can try out every new idea that occurs to me by writing a program (or ten) and seeing how it works. No critical eyes but my eye, which I turn off. Bold ideas, dumb ideas -- I find out which are which through practice.

The consistent work enhanced my act. I learned a lesson: it was easy to be great. Every entertainer has a night when everything is clicking. These nights are accidental and statistical: like lucky cards in poker, you can count on them occurring over time. What was hard was to be good, consistently good, night after night, no matter what the circumstances.

This is so true of teaching that I have nothing more to say. Read Martin's line again.

I think this is also true of programming, and writing more generally. Every so often, a great idea comes to mind; a great turn of phrase; a cool little hack that solves the problem at hand. To be a good programmer, you need to practice, so that each time you sit down to code you can produce something of value. That kind of skill is earned by practice, and, I think, attainable by everyone.

On one of my appearances [on The Tonight Show], after [Johnny Carson] had done a solid impression of Goofy the cartoon dog, he leaned over to me during a commercial and whispered prophetically, "You'll use everything you ever knew."

When working with students, I find myself borrowing from every good teacher I've ever had, and drawing on any experience I can recall. I've borrowed standards and attitudes from one of my favorite undergrad profs, who taught me the value of meeting deadlines. I've used phrases and jokes spoken by my high school chemistry teacher, who showed us that studying a difficult subject could be fun, with the right mindset and a supportive group of classmates. Whatever works, works. Use it. Adapt it.

Likewise, this advice is true when programming. In the last few years, the notion of scrapheap programming has become quite popular. In this style, a programmer looks for old pieces of code that do part of the job at hand, adapts them, and glues them together to get the job done. But this is how all writers and programmers work, drawing on all of the old bits of code rolling around their heads. In addition to practice, you can improve as a programmer by exposing yourself to as many different programs as possible. That way, you will see the data structures, the idioms, the design patterns, and, yes, the snippets of code that you will use twenty years from now when the circumstance is right. That may seem goofy, but it's not.

Finally:

I believed it was important to be funny now, while the audience was watching, but it was also important to be funny later, when the audience was home and thinking about it.

As a teacher, I would like for what my students see, hear, and do in class today to make an impression today. That is what makes the material memorable and, besides, it's a lot more fun for both of us that way than the alternative. But perhaps more important, I would like for the ideas to make a lasting impression. When the student is home thinking about the course, or an assignment, or computer science in general, I want them to realize how much depth the idea or technique has, or breadth. Today's memorability can be ephemeral. When the idea sticks later, it can affect how the student programs forever.

The hard part is trusting when I don't see students getting it in real-time. Martin says that he learned not to worry if a gag got no response from his audience, "as long as I believed it had enough strangeness to linger". Strangeness may not be what I hope for in my class sessions, but I know what he means.

As a programmer, I think this advice applies, too. When I was an undergrad, I was once on a team building a large system as part of our senior project course. One of our teammates loved to write code that was clever, that would catch our eye on first read and show off his creativity and skill. But we soon learned that much of this code was inscrutable in a day or two, which made modifying and maintaining his modules impossible. Design and code that make a module easy to read and understand in a few weeks were what the team needed. Sometimes the cleverness of a solution shone through then, too, but it impressed us all the more when it had "staying power".

Steve Martin is a wacky comedian, not a programmer or teacher per se. Does his advice apply only in that context? I don't think so. Comedians are writers and creators, and many of the traits that make them successful apply to other tasks that require creating and communicating.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 01, 2008 8:57 AM

An Unexpected Connection

I wasn't expecting to hear John Maeda's name during the What is a Tree? talk, because I didn't know that researchers in Maeda's lab had created the language Processing. But hearing his name brought to mind something that has been in the back of my mind for a couple of months, since the close of my first theater experience. I had blogged about a few observations my mind had made about the processes of acting in and directing a play. The former were mostly introspective, and the latter were mostly external, as I watched our director coalesce what seemed like a mess into a half-way decent show. Some of these connections involved similarities I noticed between producing a play and creating software.

I made notes of a few more ideas that I hadn't mentioned yet, including:

  • "Release time" is chaos. Even with all of the practice and rehearsal, the hours before the show on opening night were a hectic time, worrisome for a few and distracting for others.
  • You hope for perfection, but there will be mistakes. Just do it.
  • No matter what happens on stage or off, when you are on stage, you must stay in character. You are not yourself playing a character; you are the character.
  • As a novice player, I struggled throughout, even to the last call of the final show, with self-consciousness on stage. I think that unself-consciousness -- detachment -- is a skill that can be developed with practice. I need more.

I'm still wondering if those last two have any useful analogue in software development...

Since the show ended, I have occasionally tried to discern the value in the analogy between producing a play and creating software -- indeed, if there is any. That's where the connection to Maeda comes in. Last summer, I read the slender Laws of Simplicity, a collection of essays from Maeda's blog of the same name. The book suggests ten ways that we can design simpler systems and products. I must not have been in the right place to read the book just then, because I didn't get as much out of it as I had hoped. But one part of the book stuck with me.

For a metaphor to engage us deeply, Maeda wrote, it is essential that it relate, translate, and surprise. As I recall now, this means that the metaphor must relate the elements of the two things, that it must translate foreign elements from one of the things to the other, and that the result of this translation should surprise -- it should make us see or understand the other thing in a new way, give us insight.

There is a danger in finding analogies everywhere we look by making superficial connections. I am perhaps more prone to this risk than many other folks. That may be why I liked Maeda's relate/translate/surprise triad so much. Since reading it, I have used it as an external checkpoint for any new analogy that I want to make. If I can explain how the metaphor relates the two things, translates disparate elements, and surprises me, then I have some reason to think that the metaphor offers value -- at least more reason than just saying, "Hey, look at this cool new thing I noticed!"

To this point, I have not found the "surprise" in the theater experience that teaches me something new about how to think about making software. This doesn't mean that there is no value in the analogy, only that I haven't found it yet. By remaining skeptical a little while longer, I decrease the probability that I try to draw an inappropriate conclusion from the relationship.

Of course, just because I haven't yet found the surprise in the analogy doesn't mean that I did not find value in the experience that led me to it. A rich web of experiences is valuable in its own right, and enjoyable. It also provides the source material for learning.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Software Development, Teaching and Learning

January 29, 2008 4:20 PM

A Broken Record?

Last night I attended a meeting with several CS colleagues, colleagues from Physics and Industrial Technology, representatives from the local company DISTek Integration, and representatives from National Instruments, which produces the programming environment LabVIEW. Part of the meeting was about LabVIEW and how students and faculty at my school use it. Our CS department does not use it at all, which is typical; the physicists and technologists use it to create apps for data acquisition and machine control.

Most of the meeting was much more interesting than tool talk and sales pitch, because it was about how to excite more kids and university students about IT careers and programming, and about how LabVIEW users work. Most of the folks who program in LabVIEW at DISTek are test engineers, not trained software developers. But the programming environment makes it possible for them to build surprisingly complex applications. As they grow their apps, and attack bigger problems because they can, the programs become unwieldy and become increasingly difficult to maintain and extend.

It turns out that program design matters. It turns out that software architecture matters. But these programmers aren't trained in writing software, and so they do much of their work with little or no understanding of how to structure their code, or refactor it into something more manageable.

These folks are engineers, not scientists, but I felt a deep sense of deja vu. Our conversations sounded a lot like what we discussed with physicists at the SECANT workshop to which I keep referring.

I think we have a great opportunity to work with the folks at DISTek to help their domain-specific programmers learn the software development skills they need to become more effective programmers. Working with them will require us to think about how to teach a focused software development curriculum for non-programmers. I think that this will be work worth doing as we encounter more and more folks, in more and more diverse domains, who need that sort of software development education -- not a major or maybe even a minor in CS -- in order to do what they do.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

January 16, 2008 12:02 PM

An Open-Source Repository for Course Projects

I don't write many entries for the purpose of passing on a link, but I am making an exception for the Repository for Open Software Education (ROSE), being hosted by a group at North Carolina State University that includes Laurie Williams. You know I am a big fan of project-based courses, and one of the major impediments to faculty teaching courses this way is creating suitable projects. For example, every time I teach compilers, I face the task of choosing or creating a programming language for students to use as their source. I like to use a language that is at least a little different than languages I've used in the past, for a couple of reasons, and that is hard work.

For more generic project courses in software engineering, the problem is compounded by the fact that you may want your students to work on an existing system, so that they can encounter issues raised by a larger system than they might write from scratch themselves. But where will such large software come from? Sites like SourceForge offer plenty of systems, but they come at so many levels of completeness, complexity, documentation, and accessibility. Finding projects suitable for a one-semester undergraduate course in the morass is daunting.

ROSE aims to host open-source projects that are of suitable scope and can grow slowly as different project teams extend them. Being targeted at educators, the repository aims to record information about requirements, design, and testing that are often missing or inscrutable in other open-source projects. At this point, ROSE contains only one project larger than 5K lines of code, but that will change as others contribute their projects to the repository.

As Laurie noted in her announcement of ROSE, for a few years now the CS education community has had a Nifty Assignments repository, which offers instructors a set of fun, challenging, and battle-tested programming assignments to use in lower-division courses. ROSE will host larger open-source projects for use in a variety of ways. It is a welcome addition.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

January 14, 2008 2:25 PM

Planning and the Project Course

After "worrying out loud" in a recent entry, I suppose I should report that our faculty retreat went well. We talked about several different topics over the course of the day, especially long-term goals, outcomes assessment, and tactics for the next few months. The biggest part of our goals discussion related to broadening our reach to more students who are not and may never want to be CS majors. That includes science majors but also social science, business, humanities, and education students.

While discussing how to determine whether or not our courses and programs of study were accomplishing what we want them to accomplish, we had what was in many ways the most interesting discussion of the day. It dealt with our "capstone" project course.

In all of the CS majors we offer, each student is required to complete a "project course" in one of the areas of study we offer. This will be the third course the student takes in that area, which is one way that we try to require students to gain some depth in at least one area of computing. When we added this requirement to our programs, the primary goal was to give students a chance to work in teams on a large piece of software, something bigger than they'd worked on in previous courses and bigger than one person could complete alone.

Some of our project courses are "content-heavy", in that they introduce new topical content while students are working on the project. The compilers course is an example of such a course; "Real-Time Embedded Systems" is another. Others do not introduce much new content and focus on the team project experience. "Projects in Systems" (for the OS/networking crowd) and "Projects in Information Science" (for the database/IR crowd) are examples. Over the years, as we've broadened our faculty and the areas we teach, we've added new project courses to broaden the types of experience offered as well. In some of these courses, code is not the primary deliverable. For example, in "Software Testing", students might develop a testing environment, a suite of test plans, and documentation; in "User Interface Design", students focus on developing a front end to an existing system or to a lighter proof-of-concept back end that they write. Some of these courses implement looser versions of the original idea in other ways, too. My compiler students usually work in pairs or threes, rather than the four or five that we imagined when we designed this part of the curriculum over a decade ago.

Our outcomes assessment discussion turned quickly to the nature of a project course and in particular the diversity we now offer under this banner. We can't write outcomes for the project requirement that refer to code as a primary deliverable if, in fact, several courses do not require that of students. Well, we could, but then we would have to change how we teach the courses -- perhaps drastically. The question was, is code an essential outcome of this part of our curriculum?

We faced similar questions as we explored the other elements of diversity. How much new content should a project introduce? Must students write prose, in the form of documentation, etc.? Should they give oral presentations? If so, should these be public, or is an internal presentation to the class or instructor sufficient? What about the structured demos that our interface design students give as a part of an end-of-the-term open house?

I enjoyed listening to all of my colleagues describe their courses in more detail than I had heard in a while. After our discussion, we decided for the most part to preserve the rich ecology of our project course landscape. Test plans and UIs are okay in place of code in certain courses; the essential outcome is that students be expected to produce multiple deliverables across the life of the project. We also found some common themes that we could agree on, even if it meant tweaking our courses. For example, whatever kind of presentation our students give at the end of the project, it should be open to the public and allow public questioning. We will be writing these outcomes up more formally as we proceed this semester and use them as we evaluate the efficacy of our curriculum.

I was impressed with the diversity of ideas in practice in our project courses. Perhaps the results of this discussion were interesting mostly because we had not had this sort of conversation in a long time, at least not as a whole faculty. It's funny, but "shared knowledge" isn't always what we think it is. Sometimes we don't share as much as we think we do, at least on the surface. When we dig deeper, we can find common themes and also fundamental differences, which we can then explore explicitly and in the context of the shared knowledge. It was neat to see how each of us learned a little something new about the goals of the project course and left the conversation with an idea for improving our own courses. My compiler students certainly don't write enough technical prose in my course, certainly not as much or as structured as students in some of the other project courses. I can make my course stronger, and more consistent with the other options, by changing how I teach the course next time, and what I require of the class.

Our retreat didn't answer all my questions about the direction of the department or our strategic and tactical plans. Only in the fantasy of a manager could it! My job now is to take what we learned and decided about ourselves that day, help craft from that a coherent plan of action for the department, and begin to take concrete actions to move us in the direction we want to go. I hope that, as in most other endeavors, we will do some things, learn from the feedback, and adjust course as we go.


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Software Development, Teaching and Learning

January 09, 2008 4:31 PM

Follow Up to Recent Entry on Intro Courses

After writing my recent entry on some ideas about CS intro courses gleaned from a few of the columns and editorials in the latest edition of inroads, I had a few stray thoughts that seem worth expressing.

On Debate

I always feel a little uneasy when I write a piece like that. It comments specifically on an article written by a CS colleague, and in particular criticizes some of the ideas expressed in the article. Even when I include disclaimers to the effect that I really like an article, or respect the author's work, there is a chance that the author will take offense. That happens even in the case of more substantial written works, and I think that the looser, more "thinking out loud" nature of a blog only compounds the chance of a misunderstanding.

I certainly mean no offense when I write such pieces. Almost always, I am grateful for the author having triggered me to think and write something down. And always I intend for my articles to play a part in a healthy academic debate on an issue that both the author and I think is important.

Without healthy discussion of what we do -- and sometimes even contentious debate -- how can we hope to know whether we are on a good path? Or get onto a good path in the first place? Debate, or the friendlier "discussion", is one of the best ways we have for getting feedback on our thinking and for improving our ideas.

I like healthy discussion, even contentious debate, because I don't mean or take personal offense. I've always liked a good hearty discussion, but still I am indebted to my graduate advisor for modeling how academics approach ideas. In meetings of our research group and in seminars, he was often like a bulldog, challenging claims and offering counterarguments. At times, he could be contentious, and often the room became a lot hotter as he, I, and a couple of likewise combative fellow grad students went at it.

But those discussions were always about ideas. When the meeting was up, the discussion was over, and we all went back to being friends and colleagues. My advisor never held a grudge, even if I spent an hour telling him he was wrong, wrong, wrong. He enjoyed the debate, and knew that we all were getting better from the discussion.

Sometimes we have to work not to take personal offense, and to express our thoughts in a way that won't cause offense to a reasonable person. Sometimes, we even have to adjust our approach to account for the people in the room. But the work is worth it, and debate is necessary.

On Exciting Bright Minds

When deciding how to teach our intro courses, we need to ask several questions. What works best for our student population overall? What helps our weakest students learn as much of value as they can? What helps our strongest students see the exciting ideas of computing deeply and come to love the power they afford us?

My dream is that we can offer all of our students, but especially our brightest ones, the sort of excitement that Jon Bentley felt, as he writes in his Communications of the ACM 50th anniversary piece In the realm of insight and creativity. The pages of each issue ignited a young mind that helped to define our discipline. Maybe our courses can do the same.

On Simplicity

In my article, I withheld comment on one of the "language war" elements of Manaris's argument, namely the argument for using a simpler language. In the editorial spirit of his article, I will share my own opinions, based almost wholly in personal preference and anecdote.

I remain a strong believer that OOP can be the foundation of a solid CS curriculum, but Java and certainly C++ are not the best vehicles for learning to program. I agree with Manaris that a conceptually simpler language is preferred. He cites Alan Kay, whom regular readers here know I cite frequently. I agree with Bill and Alan! Then again, Smalltalk or a variant thereof might be an even better choice for CS1 than Python. It has a simpler grammar, no goofy indentation rules, no strange self parameter, ... but it is quite different from most mainstream languages in feel, style, and environment. Scheme? Can't get much simpler than that. We have learned, though, that simplicity itself can create a different sort of difficulty for students while learning. Many folks are successful teaching Scheme early, and I wish we had more evidence on Smalltalk's use in CS 1.

Conceptual simplicity is a good thing, but it is but one of several forces at play in the decision.

For what it's worth, as I mentioned in the same entry, we will be offering a Python-based media computation CS 1 course this semester. I am eager to see how it goes, both on its own and in comparison to the Java-based media comp CS 1 we taught once before.

On Patterns

Finally, Manaris quotes Alan Kay on the non-obvious ideas that can trip up a novice programmer, some which are

... like the concept of the arch in building design: very hard to discover, if you don't already know them.

I do not advocate hanging novices up on non-obvious ideas, but it occurs to me that instruction driven by elementary patterns would address this difficulty head-on. Sometimes, a concept we think obvious is not so to a student, and other times we want students to encounter a non-obvious idea as a part of their growth. Patterns are all about communicating such ideas, in context, with explanation of some of the thinking that underlies their utility.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 08, 2008 5:05 PM

Admin Pushing Teaching to the Side

I have always liked the week before classes start for a new semester. There is a freshness to a new classroom of students, a new group of minds, a new set of lectures and assignments. Of course, most of these aren't really new. Many of my students this semester will have had me for class before, and most semesters I teach a course I've taught before, reusing at least some of the materials and ideas from previous offerings. Yet the combination is fresh, and there is a sense of possibility. I liked this feeling as a student, and I like it as a prof. It is one of the main reasons that I have always preferred a quarter system to a semester system: more new beginnings.

Since becoming department head, the joy is muted somewhat. For one thing, I teach one course instead of three, rather than taking five as I did as a student. Another is that this first week is full of administrivia. There are graduate assistantship assignments to prepare, lab schedules to produce, last-chance registration sessions to run. Paperwork to be completed. These aren't the sort of tasks that can be easily automated or delegated or shoved aside. So they capture mindshare -- and time.

This week I have had two other admin-related items on my to-do list. First is an all-day faculty retreat my department is having later this week. The faculty actually chose to get together for what is in effect an extended meeting, to discuss the sort of issues that can't be discussed very easily during periodic meetings during the semester, which are both too short for deep discussion and too much dominated by short-term demands and deadlines. As strange as it sounds, I am looking forward to the opportunity to talk with my colleagues about the future of our department and about some concrete next actions we can take to move in the desired direction. There is always a chance that retreats like this can fall flat, and I bear some responsibility in trying to avoid that outcome, but as a group I think we can chart a strong course. One good side effect is that we will go off campus for a day and get away from the same old buildings and rooms that will fill our senses for much of the next sixteen weeks.

Second is the dean's announcement of my third-year review. Department heads here are reviewed periodically, typically every five years. I came into this position after a couple of less-than-ideal experiences for most of the faculty, so I am on a 3-year term. This will be similar to the traditional end-of-the-term student evaluations, only done by faculty of an administrator. In some ways, faculty can be much sharper critics than students. They have a lot of experience and a lot of expectations about how a department should be run. They are less likely to "be polite" out of habits learned as a child. I've been a faculty member and do recall how picky I was at times. And this evaluation will drag out for longer than a few minutes at the end of one class period, so I have many opportunities to take a big risk inadvertently. I'm not likely to pander, though; that's not my style.

I'm not all that worried. The summative part of the evaluation -- the part that judges how well I have done the job I'm assigned to do -- is an essential part of the dean determining whether he would like for me to continue. While it's rarely fun to receive criticism, it's part of life. I care what the faculty think about my performance so far, flawed as we all know it's been. Their feedback will play a large role in my determining whether I would like to continue in this capacity. The formative part of the evaluation -- the part that gives me feedback on how I can do my job better -- is actually something I look forward to. Participating in writers' workshops at PLoP long ago helped me to appreciate the value of suggestions for improvement. Sometimes they merely confirm what we already suspect, and that is valuable. Other times they communicate a possible incremental improvement, and that is valuable. At other times still they open doors that we did not even know were available, and that is really valuable.

I just hope that this isn't the sort of finding that comes out of the evaluation. Though I suppose that that would be valuable in its own way!


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Personal, Teaching and Learning

January 07, 2008 7:07 PM

Teaching Compilers by Example

The most recent issue of inroads contained more than just some articles on how to introduce computer science to majors and non-majors alike. It also contained an experience report at the intersection of topics I've discussed recently: a compiler course using a sequence of examples.

José de Oliveira Guimarães describes a course he has taught annually since 2002. In this course, he presents a sequence of ten compilers, each of which introduces a new idea or ideas beyond the previous. The "base case" isn't a full compiler but rather a parser for a Scheme-like expression language. Students see these examples -- full code for compilers that can be executed -- before they learn any theory behind the techniques they embody. All ten are implemented from scratch, using recursive descent for parsing.
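As a rough illustration (my own sketch, not code from de Oliveira Guimarães's course), the kind of "base case" he describes -- a recursive-descent processor for a Scheme-like expression language -- can fit in a few dozen lines:

```python
# A minimal, hypothetical sketch of a recursive-descent parser and evaluator
# for a Scheme-like prefix-expression language, e.g. "(+ 1 (* 2 3))".
# All names here are my own; the course's actual code surely differs.
import operator
import re

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def tokenize(text):
    # Separate parentheses from atoms: "(+ 1 2)" -> ['(', '+', '1', '2', ')']
    return re.findall(r"[()]|[^\s()]+", text)

def parse(tokens):
    # Grammar: expr ::= NUMBER | '(' OP expr* ')'
    tok = tokens.pop(0)
    if tok == "(":
        op = tokens.pop(0)
        args = []
        while tokens[0] != ")":
            args.append(parse(tokens))
        tokens.pop(0)  # consume ')'
        return (op, args)
    return float(tok)

def evaluate(node):
    # Fold the operator over the arguments, left to right.
    if isinstance(node, tuple):
        op, args = node
        result = evaluate(args[0])
        for arg in args[1:]:
            result = OPS[op](result, evaluate(arg))
        return result
    return node

print(evaluate(parse(tokenize("(+ 1 (* 2 3))"))))  # 7.0
```

Each later example in such a sequence could grow a processor like this one -- adding variables, a symbol table, code generation -- which is what makes the incremental, by-example structure plausible.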

If I understand the paper correctly, de Oliveira Guimarães teaches these examples, followed by five examples implemented using a parser generator, in the first eight weeks of a sixteen-week course. The second half of the course teaches the theory that underlies compiler construction. This seems to be a critical feature of a by-example course: The students study examples first and only then learn theory -- but now in the context of the code that they have already seen.

One thing that makes this course different from mine is that students do not implement a compiler during this semester. Instead, they take a second course in which they do a traditional compiler project. I don't have this luxury -- I'm lucky to be able to teach one compiler course at all, let alone a two-semester sequence. I also feel strongly enough about the power of students working on a major project while learning about compilers that I'm not sure how I feel about an examples-and-theory-first course. But what a neat idea! I look forward to studying de Oliveira Guimarães's code in order to understand his course better. (He has all of the code on-line, along with the beginnings of a descriptive "textbook" for the course, translated into English.)

In some ways, I suppose that what I do here bears some resemblance to this approach.

In the first semester, students study programming languages using an interpreter-based approach, which gives them a first level of understanding of techniques for processing programs. They see several small program processors and implement a few others, in order to understand these techniques. They then take the compiler course, where we "go deep" both in implementation and theory. But there is no question that their compiler project requires some of the theory we study that semester.

At the beginning of the second semester, before students begin work on their project and before I introduce any theory of lexical or syntax analysis, I spend two sessions presenting a whole compiler. This compiler is my implementation of a variant of a compiler described by Fischer, LeBlanc, and Cytron. I present this example first in order for students to see all of the phases of a compiler working in concert to translate a program. Because it comes so early in the semester, it is necessarily a simple example, but it does give me and the students a reference point for most of what we see later in the term.

This is only one example, but I refine it incrementally as we go along. As we learn new techniques for each phase of the compiler, I try to plug into the reference compiler a new component that demonstrates the theory we have just learned. For example, my initial example uses an ad hoc scanner and a recursive descent parser; later, I plug in a regular expression-based scanner and a table-driven parser. Again, the language compiled by the reference program is pretty simple, so these new components are still quite simple, but I hope they give the students some grounding before they implement more complex versions in their project.
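To illustrate the plug-in idea -- again, my own sketch rather than the course's actual code -- here are two interchangeable scanners for a toy language. Because both expose the same interface, the rest of the compiler need not change when one is swapped for the other:

```python
import re

# Two interchangeable scanners for a tiny language of identifiers,
# integers, and single-character operators.  Both return the same
# kind of token list, so either can be plugged into the compiler.

def scan_ad_hoc(source):
    # Ad hoc version: walk the string character by character.
    tokens, i = [], 0
    while i < len(source):
        ch = source[i]
        if ch.isspace():
            i += 1
        elif ch.isdigit():
            j = i
            while j < len(source) and source[j].isdigit():
                j += 1
            tokens.append(("NUM", source[i:j]))
            i = j
        elif ch.isalpha():
            j = i
            while j < len(source) and source[j].isalnum():
                j += 1
            tokens.append(("ID", source[i:j]))
            i = j
        else:
            tokens.append(("OP", ch))
            i += 1
    return tokens

TOKEN_RE = re.compile(r"\s*(?:(?P<NUM>\d+)|(?P<ID>[A-Za-z]\w*)|(?P<OP>\S))")

def scan_regex(source):
    # Regular-expression version: one pattern, applied repeatedly.
    tokens, pos = [], 0
    while pos < len(source):
        m = TOKEN_RE.match(source, pos)
        if m is None:
            break  # nothing left but trailing whitespace
        tokens.append((m.lastgroup, m.group(m.lastgroup)))
        pos = m.end()
    return tokens

# Both produce the same token stream for "x = 42 + y":
# [('ID', 'x'), ('OP', '='), ('NUM', '42'), ('OP', '+'), ('ID', 'y')]
```

The payoff pedagogically is that the two implementations embody different theory -- hand-written state logic versus regular expressions -- behind an unchanged interface, which is exactly what lets a reference compiler absorb new components as the course proceeds.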

All that said, de Oliveira Guimarães's approach takes the idea of examples to another level. I look forward to digging into it further and seeing what I can learn from it.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 03, 2008 3:25 PM

New Year, Old Topics

I have had a more relaxing break than last year. With no traveling and only an occasional hour or two in my office knocking around, not working, my mind has had a chance to clear out a bit. This is a good way to start the year.

I've even managed not to do much professional reading these past two weeks. About all I've read is a stack of old MacWorld magazines. Every time I or my department buys a Mac, I receive a complimentary 6-month subscription. I like to read about and try out all sorts of software, and this magazine is full of links. But I don't take time to read all of the issues as they roll in, which explains why I still had a couple of issues from late 2005 in my stack! (My favorite new app from this expedition is CocoThumbX.)

New Year's Eve did bring in my mail the latest issue of inroads, the periodic bulletin of SIGCSE. Rather than add it to my stack of things to read, I decided to browse through it while watching college football on TV. I found a few items of value to my work. In the first twenty pages or so, I encountered several short articles that set the stage for ongoing discussion in the new year of issues that have been capturing mind share among CS educators for the last year or so.

(Unfortunately, the latest issue of inroads is not yet available in the ACM Digital Library, so I'll have to add later the links to the articles discussed below.)

First was Bill Manaris's Dropping CS Enrollments: Or The Emperor's New Clothes, which claims that the drop in CS enrollments may well be a result of the switch to C++, Java, and object-oriented programming in introductory courses. He includes a form of disclaimer late in his article by saying that his argument "is about the possibility that there is no absolute best language/paradigm for CS1". This is now an almost standard disclaimer in papers about the choice of language and paradigm for introductory CS courses, perhaps as an attempt to avoid the contention of the debates that swirl around the topic. But the heart of Manaris's article is a claim that what most of us do in CS1 these days is not good, disclaimer notwithstanding. That's okay; ideas are meant to be expressed, examined, and evaluated.

Manaris backs his position from an interesting angle: the use of usability to judge programming languages. He quotes one of Jakob Nielsen's papers:

On the Web, usability is a necessary condition for survival. If a website is difficult to use, people leave. If the homepage fails to clearly state what a company offers and what users can do on the site, people leave. If users get lost on a website, they leave. If a website's information is hard to read or doesn't answer users' key questions, they leave. Note a pattern here? There's no such thing as a user reading a website manual or otherwise spending much time trying to figure out an interface. There are plenty of other websites available; leaving is the first line of defense when users encounter a difficulty.

Perhaps usability is a necessary condition for CS1 to retain students? Maybe students approach their courses and majors in the same way? They certainly have many choices, and if our first course raises unexpected difficulties, maybe students will exercise their choices.

Manaris moves quickly from this notion to the idea that we should consider the usability of the programming language we adopt for our intro courses. The idea of studying how novices learn a language and using that knowledge to inform language design and choice isn't new. The psychology of programming folks have been asking these questions for many years, and Brad Myers's research group at CMU has published widely in this area. But Manaris is right that not enough folks take this kind of research into account when we think about CS1-2. And when we do discuss the ideas of usability and learnability, we usually rely on our own biases and interests as "evidence".

Relying more on usability studies of programming languages would be a good thing. But they need to be real studies, not just more of the same old "here's why I think my favorite language (or paradigm) is best..." Unfortunately, both of the examples Manaris gives in his article are the sort we see too often in SIGCSE circles: anecdotal reports of personal experiences, which are unavoidably biased toward one person's knowledge and preferences. In one, he tells of his experience coding some task in languages A and B and reports that he had "9 compiler errors, one semantic error, and one 'headache' error" in A. (Hmm, could A be Java?) But I wonder if even some of my CS1 students would encounter these same difficulties; the good ones can be pretty good.

In the other, a single evaluator collects data from his own experiences solving a simple task in several different programming environments. I believe in reporting quantitative results, but quantitative data from one individual's experience is of limited value. How many introductory CS1 students would feel the same? Or intro instructors? We are the ones who usually make claims about languages and paradigms based almost solely on our own experiences and the experiences of like-minded colleagues. (Guess what? The folks I meet with at the OOPSLA educators' symposium find OOP to be a great way to teach CS1. Shocking!)

And we always need to keep in mind the difference between essential and gratuitous complexity. I am often reminded of Alan Kay's story about learning to play violin. Of course it's hard to learn. But the payoff is huge.

To be fair, Manaris is writing an editorial, not a research paper, so his examples can be taken as hints, not exact recommendations. When conducting usability studies of languages, we need to be sure to seek answers to several different questions. What works best for the general population of students? What helps the weakest students learn as much of value as they can? What helps the strongest students come to see and appreciate -- even love -- the deep ideas and power of our discipline?

Manaris closes his paper with an interesting claim:

In the minds of our beginning students, the programming language/paradigm we expose them to in CS1 is computer science.

A good CS education should help students overcome this limited mindset as soon as possible, but for students who are living through an introduction to computing, this limitation is reality. Most importantly, for students who never go beyond CS1 -- and this includes students who might have gone on but who have an unsatisfying experience and leave -- it is a reality that defines our discipline for much of the rest of the world.

This idea leads nicely into Henry Walker's column, What Image Do CS1/CS2 Present to Our Students? a few pages later in the issue. Walker contrasts the excitement many practitioners and instructors express about computing with the reality of most introductory courses for our students, which are often inward-looking and duller than they should be. We define our discipline for students in the paradigm and language we teach them, but also in many other ways: in the approach to programming we model, in the kinds of assignments we give, and in the kinds of exam questions we ask. We also define it in the ideas we expose them to. Is computing "just programming"? If that's all we show students, then for them it may well be. What ideas do we talk about in class? What activities do we ask students to do? How much creativity do we allow them?

CS1 can't be and do everything, but it should be something more than just a programming language and an arcane set of rules for formatting and commenting code.

Finally, this idea leads naturally into Owen Astrachan's Head in the Clouds only two pages later. In the last few years, Owen has become an evangelist for the position that computing education has to be grounded in real problems -- not the Towers of Hanoi, but problems that real people (not computer scientists) solve in the world as a part of doing their jobs and living their lives. This is a way to get out of the inward-looking mindset that dominates many of our intro courses -- by designing them to look outward at the world of problems just waiting to be attacked in a computational way.

Owen also has been railing against the level of discourse in which CS educators partake, the sort of discourse that asks "What language should I use in CS1?" rather than "How can I help my students use computing as a tool to solve problems?" In that sense, he may not have much interest in Manaris's article, though he may appreciate the fact that the article seeks to put our focus on how the students who take our courses learn a language, rather than on the language itself.

I think that we should still start from courses designed in a meaningful context. There is a lot of power in context, both for course design and for motivation. Besides, working from a problem-driven focus gives us an interesting opportunity for evaluating the effects of features such as a language's usability. Consider Mark Guzdial's media computation approach. His team has developed course materials for complete CS1 courses in two languages, Python and Java (which are, not coincidentally, two of the languages that Manaris discusses in his article). These materials have been developed by the creators of the approach, with equal care and concern for the success of students using the approach. Python is in many ways the simpler language, so it will be interesting to see whether students find one version of the course more attractive than the other. I have taught media comp in Java, as has a colleague, and this semester he will teach it using Python. While preparing for the semester, he has already commented on some of the ways in which Python "gets out of the way" and lets the class get down to the business of computing with images and sounds sooner. But that is an early anecdote; what will students think and learn?

I hope that 2008 finds the CS education community asking the right questions and moving toward answers.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 21, 2007 4:05 PM

A Panoply of Languages

As we wind down into Christmas break, I have been thinking about a lot of different languages.

First, this has been a week of anniversaries. We celebrated the 20th anniversary of Perl (well, as much as that is a cause for celebration). And BBC World ran a program celebrating the 50th birthday of Fortran, in the same year that we lost John Backus, its creator.

Second, I am looking forward to my spring teaching assignment. Rather than teaching one 3-credit course, I will be teaching three 1-credit courses. Our 810:151 course introduces our upper-division students to languages that they may not encounter in their other courses. In some departments, students see a smorgasbord of languages in the Programming Languages course, but we use an interpreter-building approach to teach language principles in that course. (We do expose them in that course to functional programming in Scheme, but deeply rather than as one stop on a survey.)

I like that way of teaching Programming Languages, but I also miss the experience of exposing students to the beauty of several different languages. In the spring, I'll be teaching five-week modules on Unix shell programming, PHP, and Ruby. Shell scripting and especially Ruby are favorites of mine, but I've never had a chance to teach them. PHP was thrown in because we thought students interested in web development might be interested. These courses will be built around small and medium-sized projects that explore the power and shortcomings of each language. This will be fun for me!

As a result, I've been looking through a lot of books, both to recommend to students and to find good examples. I even did something I don't do often enough... I bought a book, Brian Marick's Everyday Scripting with Ruby. Publishers send exam copies to instructors who use a text in a course, and I'm sent many, many others to examine for course adoption. In this case, though, I really wanted the book for myself, irrespective of using it in a course, so I decided to support the author and publisher with my own dollars.

Steve Yegge got me to thinking about languages, too, in one of his recent articles. The article is about the pitfalls of large code bases but, while I may have something to say about that topic in the future, what jumped out to me while reading this week were two passages on programming languages. One mentioned Ruby:

Java programmers often wonder why Martin Fowler "left" Java to go to Ruby. Although I don't know Martin, I think it's safe to speculate that "something bad" happened to him while using Java. Amusingly (for everyone except perhaps Martin himself), I think that his "something bad" may well have been the act of creating the book Refactoring, which showed Java programmers how to make their closets bigger and more organized, while showing Martin that he really wanted more stuff in a nice, comfortable, closet-sized closet.

For all I know, Yegge's speculation is spot on, but I think it's safe to speculate that he is one of the better fantasy writers in the technical world these days. His fantasies usually have a point worth thinking about, though, even when they are wrong.

This is actually the best piece of language advice in the article, taken at its most general level and not a slam at Java in particular:

But you should take anything a "Java programmer" tells you with a hefty grain of salt, because an "X programmer", for any value of X, is a weak player. You have to cross-train to be a decent athlete these days. Programmers need to be fluent in multiple languages with fundamentally different "character" before they can make truly informed design decisions.

We tell our students that all the time, and it's one of the reasons I'm looking forward to three 5-week courses in the spring. I get to help a small group of our undergrads cross-train, to stretch their language and project muscles in new directions. That one of the courses helps them to master a basic tool and another exposes them to one of the more expressive and powerful languages in current use is almost a bonus for me.

Finally, I'm feeling the itch -- sometimes felt as a need, other times purely as desire -- to upgrade the tool I use to do my blogging. Should I really upgrade, to a newer version of my current software? (v3.3 >> v2.8...) Should I hack my own upgrade? (It is 100% shell script...) Should I roll my own, just for the fun of it? (Ruby would be perfect...) Language choices abound.

Merry Christmas!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

December 20, 2007 1:45 PM

An Unexpected Christmas Gift

Some students do listen to what we say in class.

Back when I taught Artificial Intelligence every year, I used to relate a story from Russell and Norvig when talking about the role knowledge plays in how an agent can learn. Here is the quote that was my inspiration, from Pages 687-688 of their 2nd edition:

Sometimes one leaps to general conclusions after only one observation. Gary Larson once drew a cartoon in which a bespectacled caveman, Zog, is roasting his lizard on the end of a pointed stick. He is watched by an amazed crowd of his less intellectual contemporaries, who have been using their bare hands to hold their victuals over the fire. This enlightening experience is enough to convince the watchers of a general principle of painless cooking.

I continued to use this story long after I had moved on from this textbook, because it is a wonderful example of explanation-based learning.

Unfortunately, Russell and Norvig did not include the cartoon, and I couldn't find it anywhere. So I just told the story and then said to the class -- every class of AI students to go through my university over a ten-year stretch -- that I hoped to find it some day.

As of yesterday, I can, thanks to a former student. Ryan heard me on that day in his AI course and never forgot. He looked for that cartoon in many of the ways I have over the years, by googling and by thumbing through Gary Larson collections in the book stores. Not too long ago, he found it via a mix of the two methods and tracked it down in print. Yesterday, on one of his annual visits (he's a local), he brought me a gift-wrapped copy. And I was happy!

Sadly, I still can't show the cartoon to you or any of my former students who read my blog. Sorry. I once posted another Gary Larson cartoon in a blog entry, with a link to the author's web site, only to eventually receive a pre-cease-and-desist e-mail asking me to pull the cartoon from the entry. I'll not play with that fire again. This is almost another illustration of the playful message of the very cartoon in question: learning not to stick one's hand into the flame from a single example. But not quite -- it's really an example of learning from negative feedback.

Thanks to Ryan nonetheless, for remembering an old prof's story from many years ago and for thinking of him during this Christmas season! Both the book and the remembering make excellent gifts.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

December 12, 2007 1:02 PM

Not Your Father's Data Set

When I became department head, I started to receive a number of periodicals unrequested. One is Information Week. I never read it closely, but I usually browse at least the table of contents and perhaps a few of the news items and articles. (I'm an Apple/Mac/Steve Jobs junkie, if nothing else.)

The cover story of the December 10 issue is on the magazine's 2007 CIO of the Year, Tim Stanley of Harrah's Entertainment. This was an interesting look into a business for which IT is an integral component. One particular line grabbed my attention, in a sidebar labeled "Data Churn":

Our data warehouse is somewhat on the order of 20 Tbytes in terms of structure and architecture, but the level of turn of that information is many, many, many times that each day.

The database contains information on 42 million customers, and it turns data over multiple tens of terabytes a day.

Somehow, teaching our students to work with data sets of 10 or 50 or 100 integers or strings seems positively 1960s. It also doesn't seem to be all that motivating an approach for students carrying iPods with megapixel images and gigabytes of audio and video.

An application with 20 terabytes of data churning many times over each day could serve as a touchstone for almost an entire CS curriculum, from programming and data structures to architecture and programming languages, databases and theory. As students learn how to handle larger problems, they see how much more they need to learn in order to solve common IT problems of the day.

sample graph of open data set, of Chinese food consumption

I'm not saying that we must use something on the order of Harrah's IT problem to do a good job or to motivate students, but we do need to meet our students -- and potential students -- closer to where they live technologically. And today we have so many opportunities -- Google, Google Maps, Amazon, Flickr, ... These APIs are plenty accessible to more advanced students. They might need a layer of encapsulation to be suitable for beginning students; that's something a couple of us have worked on occasionally at ChiliPLoP. But we all have even more options available these days, as data-sharing sites à la Flickr become more common. (See, for example, Swivel; that's where I found the graph shown above, derived from data available at the USDA's Economic Research Service website.)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 11, 2007 1:46 PM

Thoughts While Killing Time with the Same Old Ills

I have to admit to not being very productive this morning, just doing some reading and relaxing. There are a few essential tasks to do yet this finals week, some administrative (e.g., making some adjustments to our teaching schedule based on course enrollments) and some teaching (e.g., grading compiler projects and writing a few benchmark programs). But finals week is a time when I can usually afford half a day to let my mind float aimlessly in the doldrums.

Of course, by "making some adjustments to our teaching schedule based on course enrollments", I mean canceling a couple of classes due to low enrollments and being sure that our faculty have other teaching or scholarly assignments for the spring. The theme of low enrollments is ongoing even as we saw a nontrivial bump in number of majors last fall. Even if a trend develops in that direction, we have to deal with smaller upper-division enrollments from small incoming classes of the recent past.

Last month, David Chisnall posted an article called Is Computer Science Dying?, which speculates on the decline in majors, the differences between CS and software development, and the cultural change that has turned prospective students' interests to other pursuits. There isn't anything all that new in the article, but it's a thoughtful piece of the sort that is, sadly, all too common these days. At least he is hopeful about the long-term viability of CS as an academic discipline, though he doesn't have much to say about how CS and its applied professional component might develop together -- or apart.

In that regard, I like several of Shriram Krishnamurthi's theses on undergraduate CS. When he speaks of the future -- from a vantage point of two years ago -- he recommends that CS programs admit enough flexibility that they can evolve in ways that we see make sense.

(I also like his suggestion that we underspecify the problems we set before students:

Whether students proceed to industrial positions or to graduate programs, they will have to deal with a world of ambiguous, inconsistent and flawed demands. Often the difficulty is in formulating the problem, not in solving it. Make your assignments less clear than they could be. Do you phrase your problems as tasks or as questions?

This is one of the ways students learn best from big projects!)

Shriram also mentions forging connections between CS and the social sciences and the arts. One group of folks who is doing that is Mark Guzdial's group at Georgia Tech, with their media computation approach to teaching computer science. This approach has been used enough at enough different schools that Mark now has some data on how well it might help to reverse the decline in CS enrollments, especially among women and other underrepresented groups. As great as the approach is, the initial evidence is not encouraging: "We haven't really changed students' attitudes about computer science as a field." Even students who find that they love to design solutions and implement their designs in programs retain the belief that CS is boring. Students who start a CS course with a favorable attitude toward computing leave the university with a favorable attitude; those who start with an unfavorable attitude leave with the same.

Sigh. Granted, media computation aims at the toughest audience to "sell", folks who most likely consider themselves non-technical. But it's still sad to think we haven't made headway at least in helping them to see the beauty and intellectual import of computing. Mark's not giving up -- on computing for all, or on programming as a fundamental activity -- and neither am I. With this project, the many CPATH projects, and Owen Astrachan's Problem Based Learning in Computer Science project, and so many others, I think we will make headway. And I think that many of the ideas we are now pursuing, such as domain-specific applications, problems, and projects, are right paths.

Some of us here think that the media computation approach is one path worth pursuing, so we are offering a CS1/CS2 track in media computation beginning next semester. This will be our third time, with the first two being in the Java version. (My course materials, from our first media comp effort, are still available on-line.) This time, we are offering the course in Python -- a decision we made before I ended up hearing so much about the language at the SECANT workshop I reported on last month. I'm an old Smalltalk guy, and a fan of Scheme and Lisp, who likes the feel of a small, uniform language. We have been discussing the idea of using a so-called scripting language in CS1 for a long time, at least as one path into our major, and the opportunity is right. We'll see how it goes...

The secret to happiness is low expectations.
-- Barry Schwartz

In addition to reading this morning, I also watched a short video of Barry Schwartz from TED a couple of years ago. I don't know how I stumbled upon a link to such an old talk, but I'm glad I did. The talk was short, colorful, and a succinct summary of the ideas from Schwartz's oft-cited The Paradox of Choice. Somehow, his line about low expectations seemed a nice punctuation mark to much of what I was thinking about CS this morning. I don't feel negative when I think this, just sobered by the challenge we face.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 07, 2007 4:41 PM

At the End of Week n

Classes are over. Next week, we do the semiannual ritual of finals week, which keeps many students on edge while at the same time releasing most of the tension in faculty. The tension for my compiler students will soon end, as the submission deadline is 39 minutes away as I type this sentence.

The compiler course has been a success in several ways, especially in the most important: students succeeded in writing a compiler. Two teams submitted their completed programs earlier this week -- early! -- and a couple of others have completed the project since. These compilers work from beginning to end, generating assembly language code that runs on a simple simulated machine. Some of the language design decisions contributed to this level of success, so I feel good. (And I already know several ways to do better next time!)

I've actually wasted far too much time this week writing programs in our toy functional language, just because I enjoy watching them run under the power of my students' compilers.

More unthinkable: There is a greater-than-0% chance that at least one team will implement tail call optimization before our final exam period next week. They don't have an exam to study for in my course -- the project is the reason we are together -- so maybe...
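For readers who haven't seen it, tail call optimization turns a call in tail position into a jump, so the call consumes no new stack frame. A compiler does this at the machine-code level, but the effect can be sketched in source terms (my example, not my students' code):

```python
# A tail-recursive factorial: the recursive call is the very last
# action taken, so its stack frame could be reused.
def fact_rec(n, acc=1):
    if n <= 1:
        return acc
    return fact_rec(n - 1, acc * n)  # tail call

# What a tail-call-optimizing compiler effectively produces: the
# "call" becomes an update of the parameters plus a jump to the top.
def fact_loop(n, acc=1):
    while n > 1:
        n, acc = n - 1, acc * n
    return acc

# CPython itself does not optimize tail calls, so the recursive
# version can exhaust the stack on large inputs; the loop cannot.
print(fact_rec(10) == fact_loop(10))  # prints True
```

For a compiler targeting a functional language, where loops are written as recursions, this optimization is what makes deep recursion practical -- hence my hope that a team tackles it.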

In lieu of an exam, we will debrief the project -- nothing as formal as a retrospective, but an opportunity to demo programs, discuss their design, and talk a bit about the experience of writing such a large, non-trivial program. I have never found or made the time to do this sort of studio work during the semester in the compilers course, as I have in my other senior project courses. This is perhaps another way for me to improve this course next time around.

The end of week n is a good place to be. This weekend holds a few non-academic challenges for me: a snowy 5K with little hope for the planned PR and my first performances in the theater. Tonight is opening night... which feels as much like a scary final exam as anything I've done in a long time. My students may have a small smile in their hearts just now.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Teaching and Learning

November 29, 2007 4:38 PM

Coincidence by Metaphor

I recently wrote that I will be in a play this Christmas season. I'm also excited to have been asked by Brian Marick to serve on a committee for the Agile2008 conference, which takes me in another direction with the performance metaphor. As much as I write about agile methods in my software development entries, I have never been to one of the agile conferences. Well, at least not since they split off from OOPSLA and took on their own identity.

Rather than using the traditional and rather tired notion of "tracks", Agile2008 is organized around the idea of stages, using the metaphor of a music festival. Each stage is designed and organized by a stage producer, a passionate expert, to attract an audience with some common interest. In the track-based metaphor -- a railroad? -- such stages are imposed over the tracks in much the way that aspects cut across the classes in an object-oriented program.

(At a big conference like OOPSLA, defining and promoting the crosscutting themes is always a tough chore. Now that I have made this analogy, I wonder if we could use ideas from aspect-oriented programming to do the job better? Think, think.)

I look forward to seeing how this new metaphor plays out. In many important ways, the typical conference tracks (technical papers, tutorials, workshops, etc.) are implementation details that help the conference organizers do their job but that interfere with the conference participants' ability to access events that interest them. Why not turn the process inside out and focus on our users for a change? Good question!

(Stray connection: This reminds me of an interview I heard recently with comedian Steve Martin, who has written a memoir. He described how he developed his own style as a stand-up comedian. Most comedians of that era were driven by the punch line -- tell a joke that gets you to a punch line, and then move on to the next. While taking a philosophy course, he learned that one should question even the things that no one ever questioned, what was taken for granted. What would comedy be like without a punch line? Good question!)

Of course, this changes how the conference organizers work. First of all, it seems that for a given stage the form of activity being proposed could be almost anything: a presentation, a small workshop, a demonstration, a longer tutorial, a roundtable, ... That could be fun for the producers and their minions, and give them some much needed flexibility that is often missing. (Several times in the past I have had to be part of rejecting, say, a tutorial when we might gladly have accepted the submission if reformulated as a workshop -- but we were the tutorials committee!)

Brian tells me that Agile 2008 will try a different sort of submission/review/acceptance process. Submissions will be posted on the open web, and anyone will be able to comment on them. The review period will last several months, during which time submitters can revise their submissions. If the producer and the assistant producers participate actively in reviewing submissions over the whole period, they will put in more work than in a traditional review process (and certainly over a longer period of time). But the result should be better submissions, shaped by ideas from all comers -- including potential members of the audience that the stage hopes to attract! -- and so better events and a better conference. It will be cool to be part of this experiment.

the Agile 2008 Examples stage logo

As you can see from the Agile 2008 web site, its stages correspond to themes, not event formats. Brian is producing a stage called "Designing, Testing, and Thinking with Examples", the logo for which you see here. This is an interesting theme that goes beyond particular behaviors such as designing, testing, and thinking to the heart of a way to think about all of them, in terms of concrete examples. The stage will not only accept examples for presentation, demonstration, or discussion, but glory in them. That word conveys the passion that Brian and his assistant producer Adam Geras bring to this theme.

I think Brian asked me to help them select the acts for this stage because I have exhibited some strong opinions about the role of examples and problems in teaching students to program and to learn the rest of computer science. I'm pretty laid back by nature, and so don't often think of myself in terms of passion, but I guess I do share some of the passion for examples that Brian and Adam bring to the stage. This is a great opportunity for me to broaden my thinking about examples and to see what that means for the role they play in my usual arenas.

In a stroke of wisdom, no one has asked me yet to be on the stage, unlike my local director friend. Whatever practice I get channeling Jackson Davies, I am not sure I will be ready for prime time under the bright lights of an Agile stage...

It occurs to me that, following this metaphor one more step, I am not playing the role of a contestant on, say, American Idol, but the role of talent judge. Shall I play Simon or Paula? (Straight up!)


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

November 26, 2007 8:05 PM

A Quick Thought on Minimesters

Yes, that is a word used by an accredited institution of higher education.

In my previous entry, I discussed what lecture was good for. One of the best uses of lecture, I concluded, was to motivate students for the work that they would do outside of class. The best use of class time, whether lecture or not, is to support students in the work they do at home and in the lab. That's because learning takes time, and 150 minutes a week in class just isn't enough.

After the wedding I mentioned last time, I took my family out for dinner. While dining, I could not help overhearing some of the conversation in the booth behind us. An instructor from the local community college was describing her unabashed love of the minimester. "The students were so focused!"

"Minimester" is a portmanteau that refers to a very short and concentrated term of study. In recent years, our local community college has begun offering eight-day courses in the interim between real semesters. Eight days.

The appeal to students is obvious. They can knock out an entire course in a couple of weeks. To students looking to get out into the workforce quickly, or to transfer to a four-year school as soon as possible, the minimester is a boon.

The students are so focused... They have to be, or the course will be over in the blink of an eye. But do they learn??

Like lecture, this is an idea that may sound good but does not work in practice. It may work great for the professor, who can free up a teaching slot during a traditional semester for other activities. It may work great for the school, which can sell courses to students who might be unable to commit a whole semester to a course. But I do not think that minimesters work for students -- at least not the ones who want to learn something.

The minimester course reminds me of the many one-week OOD/OOP courses offered by consulting groups for professional developers. No one should think that these courses do anything more than introduce students to the topic and prepare them for a lot of work afterwards, learning on their own. Too many managers seem to think that these one-week courses are sufficient on their own.

Stretching the idea in the other direction, there is a private four-year college in our state that teaches all of its courses in three-week terms. Its faculty believe that this sort of immersive experience benefits student learning. Three weeks is quite a bit different from 5 or 8 days, but it still seems to be so short...

So I got to thinking, what sort of course can be taught -- learned -- effectively in such a short period? Without much experience teaching these courses, though, my mind quickly turned to the sort of course that cannot be learned effectively in such a short time frame.

Not design. Not creating. Not writing. Not programming. (Just ask Peter Norvig!)

Let's see. Students can retarget existing knowledge in a short course. A student can learn a fourth or tenth programming language in a short time, if she already knows another language like it. Most students can't even approach mastering the new language in a week, but they can be prepared to master the language with practice at home. And if the new language is in a new programming style, all bets are off. A week or two almost certainly isn't enough.

Students can learn some facts in a short period, so courses that are heavy on facts are a possibility. But then, that takes us back to the discussion of lecture. If the course is "just the facts, ma'am", why not just give the student a book to read? The thing is, learning facts is only one of the desired outcomes of even a fact-laden course. It is also the one most easily achieved.

The problem with minimesters is that one cannot very easily learn practices, or any behavior that takes time to develop and become habitual. Practice takes, well, practice! And practice takes time.

The one benefit of a three-week immersive experience is that the deep focus it affords allows the learner a chance to get a good start on a new habit. Students who take 5 courses in a traditional 15-week semester often are stretched so thin that they have a hard time creating new habits in any of them, unless they make a concerted effort.

Many of us at the university detest the notion of students transferring credit for one of our courses from a community college if the course was taken during a minimester. But there is not much we can do. Fortunately, the community colleges aren't doing CS courses this way -- at least not yet.

Most people know that training the body takes time (though some students hope against hope). We need to respect the same truth about how the mind works.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

November 25, 2007 10:55 AM

A Quick Thought on Lecture

Yesterday I ran into two thoughts on teaching and learning that may end up being longer pieces later. This entry is me thinking out loud about the first, and next I'll think out loud about the second.

Yesterday I was filling some time before a wedding in a geeky way by reading Carl Wieman's Why Not Try a Scientific Approach to Science Education? from the September/October 2007 issue of Change magazine, on the recommendation of a fellow department head in my college. Wieman won the 2001 Nobel Prize in physics and has also spent a lot of time thinking about teaching physics to the general population. The article nicely summarizes many of the ways we can improve general science education, most of which won't be new to anyone who has read in this area before.

Wieman explains why lecture is not an effective instructional strategy, even in situations where you think it might be (for example, when the audience consists of other experts in the field). The thought that occupied me for much of the afternoon was, okay, so, what is lecture good for? Or should we just retire it from our repertoires entirely?

Two customized forms of lecture can be quite useful. A short lecture -- 10 to 15 minutes max -- can set up another activity. A lecture that includes heavy doses of solving problems can illustrate a technique that students must do, or have done and now have questions about.

Straight lecture is, as Wieman says, a distillation of understanding: "First I thought very hard about the topic and got it clear in my own mind. Then I explained it to my students so that they would understand it with the same clarity I had." This is exposition which is well-suited to be read by the student. This is why so many instructors end up turning their awesome lecture notes into a book. So a third use of lecture is to fine-tune material that will become a book. That may be good for the lecturer/author and even future readers, but it doesn't do as much for the students sitting in the classroom.

I think that the fourth use of lecture is motivation. A great lecture can fire up the troops, rousing their passions to leave class -- and work hard. That's how learning really happens, through hard work outside of class. Time in class can help streamline the process, heading off potential deadends and guiding students down more fruitful paths. What a lecture during class time can do is to excite students about the material and prepare them for the work they need to do on their own.

So: I don't think we should retire lecture from our repertoires entirely. We should use it in a targeted, limited fashion to set up other activities in class, and occasionally we should use it as a motivational speaker would, to excite students about what is possible and why the hard work is worth the effort.

I really wish I were a little better at this sort of lecturing. I think my students this semester could use a spiritual revival as they enter the last two weeks writing their compilers.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

November 20, 2007 4:30 PM

Workshop 5: Wrap-Up

[A transcript of the SECANT 2007 workshop: Table of Contents]

The last bit of the SECANT workshop focused on how to build a community at this intersection of CS and science. The group had a wide-ranging discussion which I won't try to report here. Most of it was pretty routine and would not be of interest to someone who didn't attend. But there were a couple of points that I'll comment on.

On how to cause change.     At one point the discussion turned philosophical, as folks considered more generally how one can create change in a larger community. Should the group try to convince other faculty of the value of these ideas first, and then involve them in the change? Should the group create great materials and courses first and then use them to convince other faculty? In my experience, these don't work all that well. You can attract a few people who are already predisposed to the idea, or who are open to change because they do not have their own ideas to drive into the future. But folks who are predisposed against the idea will remain so, and resist, and folks who are indifferent will be hard to move simply because of inertia. If it ain't broke, don't fix it.

Others expressed these misgivings. Ruth Chabay suggested that perhaps the best way to move the science community toward computational science is by producing students who can use computation effectively. Those students will use computation to solve problems. They will learn more deeply. This will catch the eye of other instructors. As a result, these folks will see an opportunity to change how they teach, say, physics. We wouldn't have to push them to change; they would pull change in. Her analogy was to the use of desktop calculators in math, chemistry, and physics classes in the 1970s and 1980s. Such a guerilla approach to change might work, if one could create a computational science course good enough to change students and attractive enough to draw students to take it. This is no small order, but it is probably easier than trying to move a stodgy academic establishment with brute force.

On technology for dissemination.     Man, does the world change fast. Folks talked about Facebook and Twitter as the primary avenues for reaching students. Blogs and wikis were almost an afterthought. Among our students, e-mail is nearly dead, only 20 years or so after it began to enter the undergraduate mainstream. I get older faster than the calendar says because the world is changing faster than the days are passing.

Miscellaneous.     Purdue has a beautiful new computer science building, the sort of building that only a large, research school can have. What we might do with a building at an appropriate scale for our department! An extra treat for me was a chance to visit a student lounge in the building that is named for the parents of a net acquaintance of mine, after he and his extended family made a donation to the building fund. Very cool.

I might trade my department's physical space for Purdue CS's building, but I would not trade my campus for theirs. It's mostly buildings and pavement, with huge amounts of auto traffic in addition to the foot traffic. Our campus is smaller, greener, and prettier. Being large has its ups and its downs.

Thanks to a recommendation of the workshop's local organizer, I was able to enjoy some time running on campus. Within a few minutes I found my way to some trails that head out into more serene places. A nice way to close the two days.

All in all, the workshop was well worth my time. I'll share some of the ideas among my science colleagues at UNI and see what else we can do in our own department.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

November 20, 2007 1:28 PM

Workshop 4: Programming Scientists

[A transcript of the SECANT 2007 workshop: Table of Contents]

Should scientists learn to program? This question arose several times throughout the SECANT workshop, and it was an undercurrent to most everything we talked about.

Most of the discipline-area folks at the workshop currently use programming in their courses. Someone pointed out that this can be an attractive feature in an elective science course that covers computational material -- even students in the science disciplines want to learn a tool or skill they know to be marketable. (I am guessing that at least some of the time it may be the only thing that convinces them to take the course!)

Few require a standard programming course from the CS catalog, or a CS-like course heavy in abstraction. That is usually not the most useful skill for the science students to learn. In practice, the scientists need to learn to write only small procedural programs. They don't really need OOP (which was the topic of a previous talk at the workshop), though they almost certainly will be clients of rich and powerful objects.

Python was a popular choice among attendees and is apparently quite popular as a scripting language in the sciences. The Matter and Interactions curriculum in physics, developed by Chabay and Sherwood, depends intimately on simulations programmed -- by physics students -- in VPython, which provides an IDE and modules for fast array operations and some quite beautiful 3-D visualization. I'm not running VPython yet, because it currently requires X11 on my Mac and I've tried to stay native Mac whenever possible. This package looks like it might be worth the effort.

A scripting language augmented with the right libraries seems like a slam-dunk for programmers in this context. Physics, astronomy, and other science students don't want to learn the overhead of Java or the gotchas of C pointers; they want to solve problems. Maybe CS students would benefit from learning to program this way? We are trying a Python-based version of a media computation CS1 in the spring and should know more about how our majors respond to this approach soon. The Java-based CS1 using media computation that we ran last year went well. In the course that followed, we did observe a few gaps that CS students don't usually have after CS1, so we will need to address those in the data structures course that will follow the Python-based offering. But that was to be expected -- programming for CS students is different than programming for end users. Non-computer scientists almost certainly benefit from a scripting language introduction. If they ever need more, they know where to go...

The next question is, should CS faculty teach the programming course for non-CS students? CS faculty almost always say, "Yes!" Someone at the workshop said that otherwise programming will be "taught badly": BioPerl in a biology course, taught by someone who only knows how to hack Perl. Owen Astrachan dared to ask, "'Taught badly' -- but is it?" The science students learn what they need to solve problems in their lab. CS profs responded, well, but their knowledge will be brittle, and it won't scale, and .... But do the scientists know that -- or even care!? If they ever need to write bigger, less brittle, more scalable programs, they know where to go to learn more.

I happen to believe that science students will benefit by learning to program from computer science professors -- but only if we are willing to develop courses and materials for them. I see no reason to believe that the run-of-the-mill CS1 course at most schools is the right way to introduce non-CS students to programming, and lots of reasons to believe otherwise.

And, yes, I do think that science students should learn how to program, for two reasons. One is that science in the 21st century is science of computation. That was one of the themes of this workshop. The other is that -- deep in my heart -- I think that all students should learn to program. I've written about this before, in terms of Alan Kay's contributions, and I'll write about it again soon. In short, I have at least two reasons for believing this:

  • Computation is a new medium of communication, and one with which we should empower everyone, not just a select few.
  • Computer programming is a singular intellectual achievement, and all educated people should know that, and why.

Big claims to close an entry!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 19, 2007 4:41 PM

Workshop 3: The Next Generation

[A transcript of the SECANT 2007 workshop: Table of Contents]

The highlight for me of the final morning of the SECANT workshop was a session on the "next generation of scientists in the workforce". It consisted of presentations on what scientists are doing out in the world and how computer scientists are helping them.

Chris Hoffman gave a talk on applications of geometric computing in industry. He gave examples from two domains, the parametric design of mechanical systems and the study of the structure and behaviors of proteins. I didn't follow the virus talk very well, but the issue seems to lie in understanding the icosahedral symmetry that characterizes many viruses. The common theme in the two applications is constraint resolution, a standard computational technique. In the design case, the problem is represented as a graph, and graph decomposition is used to create local plans. Arriving at a satisfactory design requires solving a system of constraint equations, locally and globally. A system of constraints is also used to model the features of a virus capsid and then solved to learn how the virus behaves.

Essential to both domains is the notion of a constraint solver. This won't be a surprise to folks who work on design systems, even CAD systems, but the idea that a biologist needs to work with a constraint solver might surprise many. Hoffman's take-home point was that we cannot do our work locked within our disciplinary boundaries, because we don't usually know where the most important connections lie.
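The constraint solvers Hoffman described are industrial-strength systems, but the core idea is easy to sketch. Here is a toy finite-domain solver using backtracking search; the variables, domains, and the single a + b = c "design" constraint are invented for illustration, not drawn from his work.

```python
# Toy finite-domain constraint solver, via backtracking search.
# The domains and the single constraint below are invented examples.

def solve(domains, constraints, assignment=None):
    """Extend a partial assignment one variable at a time, abandoning
    any branch that violates a constraint."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return dict(assignment)          # complete, consistent assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(check(assignment) for check in constraints):
            result = solve(domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None                          # no consistent extension exists

# A toy "design" problem: choose part sizes a, b, c with a + b = c.
# The constraint returns True on partial assignments it cannot judge yet.
domains = {"a": range(1, 10), "b": range(1, 10), "c": range(1, 10)}
constraints = [
    lambda asg: len(asg) < 3 or asg["a"] + asg["b"] == asg["c"],
]
solution = solve(domains, constraints)   # -> {'a': 1, 'b': 1, 'c': 2}
```

Real geometric constraint solvers decompose the constraint graph first, as Hoffman described, precisely so that they can scale far beyond what naive backtracking handles.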

The next two talks were by computer scientists working in industry. Both gave wonderful glimpses of how scientists work today and how computer scientists help them -- and are helping them redefine their disciplines.

First was Amar Kumar of Eli Lilly, who drew on his work in bioinformatics with biologists and chemists in drug discovery. He views his primary responsibility as helping scientists interpret their data.

The business model traditionally used by Lilly and other big pharma companies is unsustainable. If Lilly creates 10,000 new compounds, roughly 1,000 will show promise, 100 will be worth testing, and 1 will make it to market. The failure rate, the time required by the process, the cost of development -- all result in an unsustainable model.

By this, Kumar means that Lilly and its scientists must transform how they think about finding candidates and discovering effects. He gave two examples. Biologists must move from "How does this particular gene respond to this particular drug?" to "How do all human genes respond to this panel of 35,000 drugs?" There are roughly 30,000 human genes, which means that the new question produces 1 billion data points. Similarly, drug researchers must move from "What does Alzheimer's do to the levels of amyloid protein in the brain?" to "When I compare a healthy patient with an Alzheimer's patient, what is the difference in the level of every brain-specific protein over time?" Again, the new question produces a massive number of data points.

Drug companies must ask new kinds of questions -- and design new ways to find answers. The new paradigm shifts the power from pure lab biologists to bioinformatics and statistics. This is a major shift in culture at a place like Lilly research labs. In terms of a table of gene/drug interactions, the first adjustment is from cell thinking (1 gene/1 drug) to column thinking (1 drug/all genes). Ultimately, Kumar believes, the next step -- to grid thinking (m drugs/n genes) and finding patterns throughout -- is necessary, too.

What the bioinformatician can do is to help convert information into knowledge. Kumar said that a friend used to ask him what he would do with infinite computational power. He thinks the challenge these days is not to create more computational power. We already have more data in our possession than we know what to do with. More than more raw power, we need new ways to understand the data that we gather. For example, we need to use clustering techniques more effectively to find patterns in the data, to help scientists see the ideas. Scientists do this "in the small", by hand, but programs can do so much better. Kumar showed an example, a huge spreadsheet with a table of genes crossed with metabolites. Rather than look at the data in the small, he converted the numbers to a heat map so that the scientist could focus on critical areas of relationship. That is a more fruitful way to identify possible experiments than to work through the rows of the table by hand.
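The heat-map idea can be sketched without any graphics at all: map each number in the table onto a coarse intensity symbol so that the hot spots pop out. Everything here -- the table, the symbol scale, the `heat_map` helper -- is invented for illustration.

```python
# Coarse text "heat map": replace each value in a table with a symbol
# whose visual density reflects where the value falls between the
# table's minimum and maximum. The data below is made up.

def heat_map(table, symbols=" .:*#"):
    flat = [v for row in table for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1                # guard against a constant table
    def symbol(v):
        return symbols[int((v - lo) / span * (len(symbols) - 1))]
    return ["".join(symbol(v) for v in row) for row in table]

# Rows are (hypothetical) genes, columns (hypothetical) metabolites.
table = [
    [1, 2, 9, 1],
    [8, 1, 2, 7],
    [1, 1, 1, 1],
]
for row in heat_map(table):
    print(row)   # strong relationships stand out as '*' and '#'
```

A scientist scanning such a display can jump straight to the few bright cells instead of reading every number in every row.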

Kumar suggests that future scientists require some essential computational skills:

  • data integration (across data sets)
  • data visualization
  • clustering
  • translation of problems from one space to another
  • databases
  • software development lifecycle

Do CS students learn about clustering as undergrads? Biologists need to. On the last two items, other scientists usually know remarkably little. Knowing a bit about the software lifecycle will help them work better with computer scientists. Knowing a bit about databases will help them understand the technology decisions the CS folks make. If all you know is a flat text file or maybe a spreadsheet, then you may not understand why it is better to put the data in a database -- and how much better that will support your work.
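To make the flat-file-versus-database contrast concrete, here is a sketch using Python's built-in sqlite3 module. The schema, gene names, and numbers are all invented; the point is that once measurements live in normalized tables, a question like "which pairs show strong levels?" is a single query rather than a hand-rolled file scan.

```python
# A tiny normalized database of (hypothetical) gene/metabolite
# measurements, queried with one SQL statement.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE gene (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE metabolite (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE measurement (
        gene_id INTEGER REFERENCES gene(id),
        metabolite_id INTEGER REFERENCES metabolite(id),
        level REAL
    );
""")
conn.executemany("INSERT INTO gene VALUES (?, ?)",
                 [(1, "BRCA1"), (2, "TP53")])
conn.executemany("INSERT INTO metabolite VALUES (?, ?)",
                 [(1, "glucose"), (2, "lactate")])
conn.executemany("INSERT INTO measurement VALUES (?, ?, ?)",
                 [(1, 1, 0.9), (1, 2, 0.1), (2, 1, 0.2), (2, 2, 0.8)])

# One declarative query replaces a custom program over flat files.
rows = conn.execute("""
    SELECT g.name, m.name, ms.level
    FROM measurement ms
    JOIN gene g ON g.id = ms.gene_id
    JOIN metabolite m ON m.id = ms.metabolite_id
    WHERE ms.level > 0.5
    ORDER BY ms.level DESC
""").fetchall()
```

The same data in a flat text file would support only the questions the file's layout anticipated; the database supports whatever question the scientist thinks to ask next.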

The second speaker was Bob Zigon from Beckman Coulter, a company that works in the area of flow cytometry. Most of us in the room didn't know that flow cytometry studies the properties of cells as they flow through a liquid. Zigon is a software tech lead for the development of flow cytometry tools. He emphasized that to do his job, he has to act like the lab scientists. He has to learn their vocabulary, how to run their equipment, how to build their instruments, and how to perform experiments. The software folks at Beckman Coulter spend a lot of time observing scientists.

... and students chuckle at me when I tell them psychology, anthropology, and sociology make great minors or double majors for CS students! My experience came in the world of knowledge-based systems, which require a deep understanding of the practice and implicit knowledge of domain experts. Back in the early 1990s, I remember AI researcher John McDermott, of R1/XCON fame, describing how his expert systems team had evolved toward cultural anthropology as the natural next step in their work. I think that all software folks must be able to develop a deep cultural understanding of the domains they work in, if they want to do their jobs well. As software development becomes more and more interdisciplinary, it becomes more important. Whether they learn these skills in the trenches or with some formal training is up to them.

Enjoying this sort of work helps a software developer, too. Zigon clearly does. He and his team implement computations and build interfaces to support the scientists who use flow cytometry to study blood cancer and other health conditions. He gave a great two-minute description of one of the basic processes that I can't do justice here. First they put blood into a tube that narrows down to the thickness of a hair. The cells line up, one by one. Then the scientists run the blood across a laser beam, which causes the cells to fluoresce. Hardware measures the fluorescent energy, and software digitizes it for analysis. The equipment processes 10k cells/second, resulting in 18 data points for each of anywhere between 1 and 20 million cells.

What do scientists working in this area need? Data management across the full continuum: acquisition, organization, querying, and visualization. Eight years of research data amount to about 15 gigabytes. Eight years of pharmaceutical data reaches 185 GB. And eight years of clinical data is 3 terabytes. Data is king.

Zigon's team moves all the data into relational databases, converting the data into fifth normal form to eliminate as much redundancy as possible. Their software makes the data available to the scientists for online transactional processing and online analytical processing. Even with large data sets and expensive computations, the scientists need query times in the range of 7-10 seconds.

With so much data, the need for ways to visualize data sets and patterns is paramount. In real time, they process 750 MB data sets at 20 frames per second. The biologists would still use histograms and scatter plots as their only graphical representations if the software guys couldn't do better. Zigon and his team build tools for n-dimensional manipulation and review of the data. They also work on data reduction, so that the scientists can focus on subsets when appropriate.

Finally, to help find patterns, they create and implement clustering algorithms. Many of the scientists tend to fall back on k-means clustering, but in highly multidimensional spaces that technique imposes a false structure on the data. They need something better, but the alternatives are O(n²) -- which is, of course, intractable on such large sets. So Zigon needs better algorithms and, contrary to Kumar's case, more computational power! At the top of his wish list are algorithms whose complexity scales to studying 15 million cells at a time and ways to parallelize these algorithms in cost- and resource-effective ways. Cluster computing is attractive -- but expensive, loud, hot, .... They need something better.
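For readers who haven't seen it, the k-means procedure the scientists fall back on fits in a few lines. This is a bare-bones sketch of Lloyd's algorithm in plain Python, with made-up 2-D points; real work at Zigon's scale would of course use optimized libraries and smarter initialization.

```python
# Bare-bones k-means (Lloyd's algorithm) on invented 2-D points.

def dist2(p, q):
    """Squared Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(pts):
    """Coordinate-wise mean of a non-empty list of points."""
    n = len(pts)
    return tuple(sum(xs) / n for xs in zip(*pts))

def kmeans(points, k, iterations=20):
    """Alternate assigning each point to its nearest center and moving
    each center to the mean of its cluster. Deterministic start from
    the first k points, for simplicity."""
    centers = list(points[:k])
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centers[i]))
            clusters[nearest].append(p)
        centers = [mean(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Two well-separated blobs; k-means settles one center near each.
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(points, 2)
```

Each pass touches every point once, which is why the method stays cheap on millions of cells; the trouble Zigon notes is that in highly multidimensional spaces the spherical clusters it assumes may not match the data's real structure.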

What else do scientists need? The requirements are steep. The ability to integrate cellular, proteomic, and genomic data. Usable HCI. On a more pedestrian tack, they need to replace paper lab notebooks with electronic notebooks. That sounds easy but laws on data privacy and process accountability make that a challenging problem, too. Zigon's team draws on work in the areas of electronic signatures, data security on a network, and the like.

From these two talks, it seems clear that domain scientists and computer scientists of the future will need to know more about the other discipline than may have been needed in the past. Computing is redefining the questions that domain scientists must ask and redefining the tasks performed by the CS folks. The domain scientists need to know enough about computer science, especially databases and visualization, to know what is possible. Computer scientists need to study algorithms, parallelism, and HCI. They also need to take more seriously the soft skills of communication and teamwork that we have been encouraging for many years now.

The Q-n-A session that followed pointed out an interesting twist on the need for communication. It seems that clustering algorithms are being reinvented across many disciplines. As each discipline encounters the need, the scientists and mathematicians -- and even computer scientists -- working in that area sometimes solve their problems from scratch without reference to the well-developed results on clustering and pattern recognition from CS and math. This seems like a potentially valuable place to initiate dialogue across the sciences in places looking to increase their interdisciplinary focus.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 16, 2007 10:52 AM

Workshop 2: Exception Gnomes, Garbage Collection Fairies, and Problems

[A transcript of the SECANT 2007 workshop: Table of Contents]

Thursday afternoon at the NSF workshop saw a hodgepodge of sessions around the intersection of science ed and computing.

The first session was on computing courses for science majors. Owen Astrachan described some of the courses being taught at Duke, including his course using genomics as the source of problems to learn programming. He entertained us all with pictures of scientists and programmers, in part to demonstrate how many of the people who matter in the domains where real problems live are not computer geeks. The problems that matter in the world are not the ones that tend to excite CS profs...

Unbelievable but true! Not everyone knows about the Towers of Hanoi.

... or cares.

John Zelle described curriculum initiatives at Wartburg College to bring computing to science students. Wartburg has taken several small steps along the way:

  • a more friendly CS1 course
  • an introductory course in computational science
  • integration of computing into the physics curriculum
  • a CS senior project course that collaborates with the sciences
  • (coming) a revision of the calculus sequence

At first, Zelle said a "CS1 course friendlier to scientists", but then he backed up to the more general statement. The idea of needing a friendlier intro course even for our majors is something many of us in CS have been talking about for a while, and something I wrote about a while back. I was also interested in hearing about Wartburg's senior projects. More recently, I wrote about project-based computer science education. Senior project courses are a great idea, and one that CS faculty can buy into at many schools. That makes it a good first step to perhaps changing the whole CS program, if a faculty were so inclined. The success of such project-centered courses is just about the only way to convince some faculty that a project focus is a good idea in most, if not all, CS courses.

Wartburg's computational science course covers many of the traditional topics, including modeling, differential equations, numerical methods, and data visualization. It also covers the basics of parallel programming, which is less common in such a course. Zelle argued that every computational scientist should know a bit about parallel programming, given the pragmatics of computing over massive data sets.

The second session of the afternoon dealt with issues of programming "in the small" versus "in the large". It seemed like a bit of a hodgepodge itself. The most entertaining of these talks was by Dennis Brylow of Marquette, called "Object Dis-Oriented". He said that his charge was to "take a principled stand that will generate controversy". Some in the room found this to be a novelty, but for Owen and me, and anyone in the SIGCSE crowd, it was another in a long line of anti-"objects first" screeds: Students can't learn how to decompose into methods until they know what goes into a method; students can't learn to program top-down, because then "it's all magic boxes".

I reported on a similarly entertaining panel at SIGCSE a couple of years ago. Brylow did give us something new, a phrase for the annals of anti-OO snake oil: students who learn OOP first see their programs as

... full of exception gnomes and garbage collection fairies.

Owen asked the natural question, reductio ad absurdum: Why not teach gates then? The answer from the choir around the room was, good point, we have to choose a level, but that level is below objects -- focused on "the fundamentals". Sigh. Abacus-early, anyone?

Brylow also offered a list of what he thinks we should teach first, which contains some important ideas: a small language, a heavy focus on data representation, functional decomposition, and the fundamental capabilities of machine computation.

This list tied well into the round-table discussion that followed, on what computational concepts science students should learn. I didn't get a coherent picture from this discussion, but one part stood out to me. Bruce Sherwood said that many scientists view analytical solution as privileged over simulation, because it is exact. He then pointed out that in some domains the situation is being turned on its head: a faithful discrete simulation is a more real depiction of the world than the closed-form analytical solution -- which is, in fact, only an approximation created at a time when our tools were more limited. The best quote of this session came from John Zelle: Continuity is a hack!

The day closed with another hodgepodge session on the role of data visualization. Ruth Chabay spoke about visualizing models, which in physics are as important as -- more important than!? -- data. Michael Coen gave a "great quote"-laden presentation on the question of whether computer science is the servant of science or the queen of the sciences, à la Gauss on math before him. Chris Hoffman gave a concise but convincing motivational talk:

  • There is great power in visualizing data.
  • With power comes risk, the risk of misleading.
  • Visualization can be tremendously effective.
  • Techniques for visual data analysis must account for the coming data deluge. (He gave some great examples...)
  • The challenges of massive data are coming in all of the sciences.

When Coen's and Hoffman's slides become available on-line, I will point to them. They would be worth a glance.

Hodgepodge sessions and hodgepodge days are okay. Sometimes we don't know where the best connections will come from...


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 15, 2007 8:34 PM

Workshop 1: Creating a Dialogue Between Science and CS

[A transcript of the SECANT 2007 workshop: Table of Contents]

The main session this morning was on creating a dialogue between science and CS. There seems to be a consensus among scientists and computer scientists alike that the typical introductory computer science course is not what other science students need, but what do they need? (Then again, many of us in CS think that the typical introductory computer science course is not what our computer science students need!)

Bruce Sherwood, a physics professor at North Carolina State, addressed the question, "Why computation in physics?" He said that this was one prong in an effort to admit to physics students "that the 20th century happened". Apparently, this is not common enough in physics. (Nor in computer science!) To be authentic to modern practice, even intro physics must show theory, experiment, and computation. Physicists have to build computational models, because many of their theories are too complex or have no analytical solution, at least not a complete one.

What excited me most is that Sherwood sees computation as a means for communicating the fundamental principles of physics, even its reductionist nature. He gave as an example the time evolution of Newtonian synthesis. The closed form solution shows students the idea only at a global level. With a computer simulation, students can see change happen over time. Even more, it can be used to demonstrate that the theory supports open-ended prediction of future behavior. Students never really see this when playing with analytical equations. In Sherwood's words, without computation, you lose the core of Newtonian mechanics!
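Sherwood's point is easy to see in code. Here is a minimal sketch of my own (not from his talk) that steps a falling object forward in time instead of plugging into the closed-form solution y(t) = y0 + v0·t − g·t²/2 -- students watch the state change step by step and discover the time of impact rather than derive it:

```python
# A toy time-stepping simulation of one-dimensional free fall.
# Newton's second law drives the change in velocity at each step;
# the velocity drives the change in position.

def simulate_fall(y0, v0, g=9.8, dt=0.01):
    """Advance position and velocity step by step until impact;
    return the (approximate) time of impact."""
    t, y, v = 0.0, y0, v0
    while y > 0:
        v -= g * dt    # acceleration updates velocity
        y += v * dt    # velocity updates position
        t += dt
    return t

# Dropping from 100 m gives a result close to the analytical
# sqrt(2 * 100 / 9.8), about 4.5 seconds.
print(simulate_fall(100.0, 0.0))
```

The same loop keeps running past any point the analytical solution was derived for, which is exactly the open-ended prediction Sherwood describes.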

He even argued that physics students should learn to program. Why?

  • So there are "no black boxes". He wants his students to program all the physics in their simulations.
  • So that they see common principles, in the form of recurring computations.
  • So that they can learn the relationship between the different representations they use: equations, code, animation, ...

More on science students and programming in a separate entry.

Useful links from his talk include comPADRE, a part of the National Science Digital Library for educational resources in physics and astronomy, and VPython, dubbed by supporters as "3D Programming for Ordinary Mortals". I must admit that the few demos and programs I saw today were way impressive.

The second speaker was Noah Diffenbaugh, a professor in earth and atmospheric sciences at Purdue. He views himself as a modeler dependent on computing. In the last year or so, he has collected 55 terabytes of data as a part of his work. All of his experiments are numerical simulations. He cannot control the conditions of the system he studies, so he models the system and runs experiments on the model. He has no alternative.

Diffenbaugh claims that anyone who wants a career in his discipline must be able to do computing -- as a consumer of tools and a builder of models. He goes farther, calling himself a black sheep in his discipline for thinking that learning computing is critical to the intellectual development of scientists and non-scientists alike.

When most scientists talk of computation, they talk about a tool -- their tool -- and why it should be learned. They do not talk about principles of computing or the intellectual process one practices when writing a program. This concerns Diffenbaugh, who thinks that scientists must understand the principles of computing on which they depend, and that non-scientists must understand them, too, in order to understand the work of scientists. Of course, scientists are not the only ones who fixate on their computational tools to the detriment of discussing ideas. CS faculty do it, too, when they discuss CS1 in terms of the languages we teach. What's worse, though, is that some of us in CS do talk about principles of computing and intellectual process -- but only as the sheep's clothing that sneaks our favorite languages and tools and programming ideas into the course.

The session did include some computer scientists. Kevin Wayne of Princeton described an interdisciplinary "first course in computer science for the next generation of scientists and engineers". On his view, both computer science students and students of science and engineering are shortchanged when they do not study the other discipline. One of his colleagues (Sedgewick?) argues that there should be a common core in math, science, and computation for all science and engineering students, including CS.

What do scientists want in such a course? Wayne and his colleagues asked and found that they wanted the course to cover simulation, data analysis, scientific method, and transferable programming skills (C, Perl, Matlab). That list isn't too surprising, even the fourth item. That is a demand that CS folks hear from other CS faculty and from industry all the time.

The course they have built covers the scientific method and a modern programming model built on top of Java. It is infused with scientific examples throughout. This includes not only examples from the hard sciences, such as sequence alignment, but also cool examples from CS, such as Google's page rank scheme. In the course, they use real data and so experience the sensitivity to initial conditions in the models they build. He showed examples from financial engineering and political science, including the construction of a red/blue map of the US by percentage of the vote won by each candidate in each state. Data of this sort is available at National Atlas of the US, a data source I've already added to my list.

The fourth talk of the session was on the developing emphasis on modeling at Oberlin College, across all the sciences. I did not take as many notes on this talk, but I did grab one last link from the morning, to the Oberlin Center for Computation and Modeling. Occam -- a great acronym.

My main takeaway points from this session came from the talks by the scientists, perhaps because I know relatively less about what scientists think about and want from computer science. I found the examples they offered fascinating and their perspectives on computing to be surprisingly supportive. If these folks are at all representative, the dialogue between science and CS is ripe for development.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 15, 2007 6:59 PM

Workshop Intro: Teaching Science and Computing

[A transcript of the SECANT 2007 workshop: Table of Contents]

I am spending today and tomorrow at an NSF Workshop on Science Education in Computational Thinking put on by SECANT, a group at Purdue University funded by a grant from NSF's Pathways to Revitalized Undergraduate Computing Education (CPATH) program. SECANT's goals are to build a community that is asking and answering questions such as these:

  • What should science majors know about computing?
  • How can computer science be used to teach science?
  • Can we integrate computer science effectively into other majors?
  • What will the implications of answers to these questions be for how we teach computer science and engineering themselves?

The goal of this workshop is to begin building a community, to share ideas and to make connections. I'll share in my next few entries some of the ideas I encounter here, as well as some of the thoughts I have along the way. This entry is mostly about the background of the workshop and a few miscellaneous impressions.

First, I am impressed with the wide range of attendees. Folks come from big state schools such as Ohio State, Purdue, and Iowa, from private research schools such as Princeton and Notre Dame, and from small liberal arts schools such as Wartburg and Kalamazoo.

We started with introductions from the workshop organizers at Purdue and the NSF itself. Joseph Urban from NSF spoke a bit about the challenges addressed by the CPATH program. I think its most interesting goal is to move "beyond curriculum revision" to "institution transformation models" -- avoiding the curse of incremental change. This reminded me of something that Guy Kawasaki said in his talk The Art of Innovation: Revolution, then evolution. To completely change how we teach sciences and intro computer science -- revolution first, or evolution? Given the deep strain of academic conservatism that dominates most colleges and universities, this raises an interesting question about which approach will work best. From what I've seen here today, different schools are trying each, with various levels of success.

The introductory remarks by Jeff Vitter, dean of the College of Sciences -- and a computer scientist by training -- included a comment that is a theme underlying this workshop and driving the scientists who are here to explore computer science more deeply: computing is now a fundamental component in the cycle of science, alongside theory and experimentation. For many scientists, building models is the next step after experiment, or even a hand-in-hand partner to experiment. For many scientists, visualizing the results of experiments is essential -- we cannot understand them otherwise.

The workshop made a few personal connections for me. Also in attendance are neighbors of mine, Alberto Segre from Iowa and John Zelle from Wartburg College. But there are connections to my past, too. Another attendee is an old grad school colleague of mine, Pat Flynn, who is now at Notre Dame. Finally, from Urban's NSF presentation I learned that one of the big CPATH awards was made to a team at Michigan State -- including my old advisor. I'm not too surprised that his professional interests have evolved in this direction, though he might be.

Some here expressed surprise that so many folks are already doing interesting work in this arena. I wasn't, because there's been a lot of buzz in the last couple of years, but I was interested to see the diversity of new courses and new programs already in place. That is, of course, one of the great benefits of attending workshops such as this one.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 10, 2007 4:21 PM

Programming Challenges

Gerald Weinberg's recent blog post Setting (and Character): A Goldilocks Exercise describes a writing exercise that I've mentioned here as a programming exercise, a pedagogical pattern many call Three Bears. This is yet another occurrence of a remarkably diverse pattern.

Weinberg often describes exercises that writers can use to free their minds and words. It doesn't surprise me that "freeing" exercises are built on constraints. In one post, Weinberg describes The Missing Letter, in which the writer writes (or rewrites) a passage without using a randomly chosen letter. The most famous example of this form, known as a lipogram, is La disparition, a novel written by Georges Perec without using the letter 'e' -- except to spell the author's name on the cover.

When I read that post months ago, I immediately thought of creating programming exercises of a similar sort. As I quoted someone in a post on a book about Open Court Publishing, "Teaching any skill requires repetition, and few great works of literature concentrate on long 'e'." We can design a set of exercises in which the programmer surrenders one of her standard tools. For instance, we could ask her to write a program to solve a given problem, but...

  • with no if statements. This is exactly the idea embodied in the Polymorphism Challenge that my friend Joe Bergin and I used to teach a workshop at SIGCSE a couple of years ago and which I often find useful in helping programmers new to OOP see what is possible.

  • with no for statements. I took a big step in understanding how objects worked when I realized how the internal iterators in Smalltalk's collection classes let me solve repetition tasks with a single message -- and a description of the action I wanted to take. It was only many years later that I learned the term "internal iterator" or even just "iterator", but by then the idea was deeply ingrained in how I programmed.

    Recursion is the usual path for students to learn how to repeat actions without a for statement, but I don't think most students get recursion the way most folks teach it. Learning it with a rich data type makes a lot more sense.

  • with no assignment statements. This exercise is a double-whammy. Without assignment statements, there is no call for explicit sequences of statements, either. This is, of course, what pure functional programming asks of us. Writing a big app in Java or C using a pure functional style is wonderful practice.

  • with no base types. I nearly wrote about this sort of exercise a couple of years ago when discussing the OOP anti-pattern Primitive Obsession. If you can't use base types, all you have left are instances of classes. What objects can do the job for you? In most practical applications, this exercise ultimately bottoms out in a domain-specific class that wraps the base types required to make most programming languages run. But it is a worthwhile practice exercise to see how long one can put off referring to a base type and still make sense. The overkill can be enlightening.

    Of course, one can start with a language that provides only the most meager set of base types, thus forcing one to build up nearly all the abstractions demanded by a problem. Scheme feels like that to most students, but only a few of mine seem to grok how much they can learn about programming by working in such a language. (And it's of no comfort to them that Church built everything out of functions in his lambda calculus!)

This list operates at the level of programming construct. It is just the beginning of the possibilities. Another approach would be to forbid the use of a data structure, a particularly useful class, or an individual base type. One could even turn this idea into a naming challenge by hewing close to Weinberg's exercise and forbidding the use of a selected letter in identifiers. As an instructor, I can design an exercise targeted at the needs of a particular student or class. As a programmer, I can design an exercise targeted at my own blocks and weaknesses. Sometimes, it's worth dusting off an old challenge and doing it for its own sake, just to stay sharp or shake off some rust.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

October 27, 2007 5:31 PM

Another Reason to Support Education

(Update: I added a new closing paragraph, which got lost when editing the original post.)

... and especially, these days, in science and technology:

Human beings have had to guess about almost everything for the past million years or so. The leading characters in our history books have been our most enthralling, and sometimes our most terrifying, guessers. ...

And the masses of humanity through the ages, feeling inadequately educated just like we do now, and rightly so, have had little choice but to believe this guesser or that guesser.

-- Kurt Vonnegut, "A Man Without a Country"

And Vonnegut, humanist that he was, even had a university degree in chemistry. Today, everyone needs to understand science and technology to make sense of almost anything that happens in this world. But they also need to know history and political science and economics and finance, to understand the world. I also think that knowing a little about literature and the arts can help us make our way, in addition to making our lives more enjoyable.

Please forgive me if I continue to focus on the science and technology side of the equation. But keep learning — and teaching others — whatever and wherever you can.

I do disagree with Vonnegut in at least one regard. He seems to think that, by relying on information, leaders can stop being guessers, that they can instead be deducers. But any substantial problem is too large to be solved purely by deduction, and the problems we face in the world are certainly more than just substantial. I think that informed guessing is the best we can do. Doing science and math prepares one to understand this, and to understand that even our informed guesses are always contingent. This is, of course, yet another reason to teach folks science and math.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

October 18, 2007 8:40 AM

Project-Based Computer Science Education

[Update: I found the link to Michael Mitzenmacher's blog post on programming in theory courses and added it below.]

A couple of days ago, a student in my compilers course was in my office discussing his team's parser project. He was clearly engaged in the challenges that they had faced, and he explained a few of the design decisions they had made, some of which he was proud of and some of which he was less thrilled with. In the course of conversation, he said that he prefers project courses because he learns best when he gets into the trenches and builds a system. He contrasted this to his algorithms course, which he enjoyed but which left him with a nagging sense of incompleteness -- because they never wrote programs.

(One can, of course, ask students to program in an algorithms course. In my undergraduate algorithms, usually half of the homework involves programming. My recent favorite has been a Bloom filters project. Michael Mitzenmacher has written about programming in algorithms courses in his blog My Biased Coin.)
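For readers who haven't seen one, a Bloom filter is a compact set representation whose membership test can return false positives but never false negatives. A minimal sketch -- my own, not the actual assignment:

```python
import hashlib

class BloomFilter:
    """A tiny Bloom filter: a bit array plus k hash functions.
    Membership tests may report false positives, never false negatives."""

    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive k bit positions by salting one cryptographic hash.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def __contains__(self, item):
        # If any of the item's bits is unset, it was never added.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("compilers")
print("compilers" in bf)   # True, guaranteed
print("parsing" in bf)     # False, with high probability
```

The project works well in an algorithms course precisely because the interesting questions -- how big a bit array, how many hash functions, what false-positive rate results -- are analytical, but students only feel them by running the code.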

I have long been a fan of "big project" courses, and have taught several different ones, among them intelligent systems, agile software development, and recently compilers. So it was with great interest I read (finally) Philip Greenspun's notes on improving undergraduate computer science education. Greenspun has always espoused a pragmatic and irreverent view of university education, and this piece is no different. With an eye to the fact that a great many (most?) of our students get jobs as professional software developers, he sums up one of traditional CS education's biggest weaknesses in a single thought: We tend to give our students

a tiny piece of a big problem, not a small problem to solve by themselves.

This is one of the things that makes most courses on compilers -- one of the classic courses in the CS curriculum -- so wonderful. We usually give students a whole problem, albeit a small problem, not part of a big one. Whatever order we present the phases of the compiler, we present them all, and students build them all. And they build them as part of a single system, capable of compiling a complete language. We simplify the problem by choosing (or designing) a smallish source language, and sometimes by selecting a smallish target machine. But if we choose the right source and target languages, students must still grapple with ambiguity in the grammar. They must still grapple with design choices for which there is no clear answer. And they have to produce a system that satisfies a need.

Greenspun makes several claims with which I largely agree. One is this:

Engineers learn by doing progressively larger projects, not by doing what they're told in one-week homework assignments or doing small pieces of a big project.

Assigning lots of well-defined homework problems is a good way to graduate students who are really good at solving well-defined homework problems. The ones who can't learn this skill change majors -- even if they would make good software developers.

Here is another Greenspun claim. I think that it is likely even more controversial among CS faculty.

Everything that is part of a bachelor's in CS can be taught as part of a project that has all phases of the engineering cycle, e.g., teach physics and calculus by assigning students to build a flight simulator.

Many will disagree. I agree with Greenspun, but to act on this idea would, as Greenspun knows, require a massive change in how most university CS departments -- and faculty -- operate.

This idea of building all courses around projects is similar to an idea I have written about many times here, the value of teaching CS in the context of problems that matter, both to students and to the world. One could teach in the context of a problem domain that requires computing without designing either the course or the entire curriculum around a sequence of increasingly larger projects. But I think the two ideas have more in common than they differ, and problem-based instruction will probably benefit from considering projects as the centerpiece of its courses. I look forward to following the progress of Owen Astrachan's Problem Based Learning in Computer Science initiative to see what role projects will play in its results. Owen is a pragmatic guy, so I expect that some valuable pragmatic ideas will come out of it.

Finally, I think students and employers alike will agree with Greenspun's final conclusion:

A student who graduates with a portfolio of 10 systems, each of which he or she understands completely and can show off the documentation as well as the function (on the Web or on the student's laptop), is a student who will get a software development job.

Again, a curriculum that requires students to build such a portfolio will look quite different from most CS curricula these days. It will also require big changes in how CS faculty teach almost every topic.

Do read Greenspun's article. He is thought-provoking, as always. And read the comments; they contain some interesting claims, too, including the suggestion that we redesign CS education as a professional graduate degree program along the lines of medicine.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

October 16, 2007 6:45 AM

Some Thoughts on How to Increase CS Enrollments

The latest issue of Communications of the ACM (Volume 50, Number 10, pages 67-71) contains an article by Asli Yagmur Akbulut and Clayton Arlen Looney called Inspiring Students to Pursue Computing Degrees. With that title, how could I not jump to it and read it immediately?

Most of the paper describes a survey of the sort that business professors love to do but which I find quite dull. Still, both the ideas that motivate the survey and the recommendations the authors make at the end are worth thinking about.

First, Akbulut and Looney base their research on a model derived from social cognitive theory called the Major Choice Goals Model. In this model, a person's choice goal (such as the choice to major in computing) is influenced by her interest in the area, the rewards she expects to receive as a result of the choice, and her belief that she can succeed in the area, which is termed self-efficacy. Interest itself is influenced by expected rewards and self-efficacy, and expected rewards are influenced by self-efficacy.

Major Choice Goals Model

Their survey of business majors in an introductory business computing course found that choice goals were determined primarily by interest, and that the other links in the model also correlated significantly. If their findings generalize, then...

  • The key to increasing the number of computing majors lies in increasing their interest in the discipline.
  • Talking to students about the financial and other rewards of majoring in computing influences their choice to major in the discipline only indirectly, through increased interest.
  • Self-efficacy -- a student's judgment of her capability to perform effectively in the major -- strongly affects both interest and outcome expectations.

I don't suppose that these results are all that earth-shaking, but they do give us clues on where we might best focus our efforts to recruit more majors.

First, we need to foster "a robust sense of self-efficacy" in potential students. This is most effective when we work with people who have little or no direct experience. We should strive to help these folks have successful, positive first experiences. When we encounter people who have had bad past experiences with computing, we need to work extra hard to overcome these with positive exposure.

Second, we need to enhance students' outcome expectations in a broader set of outcomes than just the lure of high salaries and plentiful jobs. Most of us have been looking for opportunities to share salary and employment data with students. But outcome expectations seem to affect a student's choice of majoring in computing mostly through increased interest in the discipline, and financial reward is only one, rather narrow, avenue to interest. We should communicate as many different kinds of rewards as possible, via as many different routes as possible, including different kinds of people who have reaped these benefits such as peer groups, alumni, and various IT professionals.

Third, we can seek to increase interest more directly. Again, this is something that most people in CS and IT have already been doing. I think the value Akbulut and Looney add here is in looking to the learning literature for influences on interest. These include the effective use of "novelty, complexity, conflict, and uncertainty". They remind us that "As technologies continue to rapidly evolve, it is important to deliver course content that is fresh, current, and aligned with students' interests". Our students are looking for ideas that they can apply to their own experiences and to open problems in the world.

The authors also make a suggestion that is controversial with many CS faculty but basic knowledge to others: In order to build self-efficacy and interest in students, we need to be sure that

... the most appropriate faculty are assigned to introductory computing courses. Instructors who are personable, fair, innovative, engaging, and can serve as role models would be more likely to attract larger pools of students.

This isn't pandering; it is simply how the world works. As someone who now has a role in assigning faculty to teach courses, I know that this can be a difficult task, both in making the choices and in working with faculty who would prefer different assignments.

When I first dug into the paper, I had some reservations. I'm not a big fan of this kind of research, because it seems too contingent on too many external factors to be convincing on its own. This particular study looked at business students and a very soft sort of computing course (Introduction to Information Systems) that all business students have to take at many universities. Do the findings apply to CS students more generally, or students who might be interested in a more technical sense of computing? In the end, though, this paper gave me a different spin on a couple of issues with which we have been grappling, in particular on students' sense that they can succeed in computing and on the indirect relationship between expected rewards and choice of major. This perspective gives me something useful to work with.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 13, 2007 6:01 PM

More on Forth and a New Compilers Course

Remember this as a reader--whatever you are reading
is only a snapshot of an ongoing process of learning
on the part of the author.
-- Kent Beck

Sometimes, learning opportunities on a particular topic seem to come in bunches. I wrote recently about revisiting Forth and then this week ran across an article on Lambda the Ultimate called Minimal FORTH compiler and tutorial. The referenced compiler and tutorial are an unusually nice resource: a "literate code" file that teaches you as it builds. But then you also get the discussion that follows, which points out what makes Forth special, some implementation tricks, and links to several other implementations and articles that will likely keep me busy for a while.
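For readers who have never seen Forth, a few lines of Python can sketch the flavor of its stack-based evaluation. This is my own toy illustration, not code from the tutorial; the handful of words defined here are just a tiny sample of a real Forth dictionary.

```python
# A toy Forth-style evaluator: a data stack plus a dictionary of "words".
# Each word pops its arguments from the stack and pushes its results.

def forth_eval(source, stack=None):
    stack = [] if stack is None else stack
    words = {
        "+":    lambda s: s.append(s.pop() + s.pop()),
        "*":    lambda s: s.append(s.pop() * s.pop()),
        "dup":  lambda s: s.append(s[-1]),          # duplicate top of stack
        "swap": lambda s: s.extend([s.pop(), s.pop()]),
    }
    for token in source.split():
        if token in words:
            words[token](stack)
        else:
            stack.append(int(token))   # anything else is a number literal
    return stack

# "3 4 + dup *" computes (3 + 4) squared, leaving 49 on the stack.
print(forth_eval("3 4 + dup *"))   # [49]
```

The charm of the real thing is that the dictionary is extensible from within the language itself: a Forth program defines new words in terms of old ones, which is what makes a "literate" Forth compiler such a natural teaching artifact.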

Perhaps because I am teaching a compilers course right now, the idea that most grabbed my attention came in a discussion near the end of the thread (as of yesterday) on teaching language. Dave Herman wrote:

This also makes me think of how compilers are traditionally taught: lexing → parsing → type checking → intermediate language → register allocation → codegen. I've known some teachers to have success in going the other way around: start at the back end and move your way forward. This avoids giving the students the impression that software engineering involves building a product, waterfall-style, that you hope will actually do something at the very, *very* end -- and in my experience, most courses don't even get there by the end of the semester.

I have similar concerns. My students will be submitting their parsers on Monday, and we are just past the midpoint of our semester. Fortunately, type checking won't take long, and we'll be on to the run-time system and target code soon. I think students do feel satisfaction at having accomplished something at the end of the early phases. The parser even gives two points of satisfaction: when it can recognize a legal program (and reject an illegal one), and then when it produces an abstract syntax tree for a legal program. But those aren't the same as the feeling of having compiled a program from end to end.

The last time I debriefed teaching this course, I toyed with the idea of making several end-to-end passes through the compilation process, inspired by a paper on 15 compilers in 15 weeks. I've been a little reluctant to mess with the traditional structure of this course, which has so much great history. While I don't want to be one of those professors who teaches a course "the way it's always been done" just for tradition's sake, I also would like to have a strong sense that a different approach will give students a full experience. Teaching compilers only every third semester makes each course offering a scarce and thus valuable opportunity.

I suppose that there are lots of options... With a solid framework and reference implementation, we could cover the phases of the compiler in any order we like, working on each module independently and plugging them into the system as we go. But I think that there needs to be some unifying theme to the course's approach to the overall system, and I also think that students learn something valuable about the next phase in the pipeline when we build them in sequence. For instance, seeing the limitations of the scanner helps to motivate a different approach to the parser, and learning how to construct the abstract syntax tree sets students up well for type checking and conversion to an intermediate rep. I imagine that similar benefits might accrue when going backwards.
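The modular structure I have in mind can be sketched as a pipeline of phases with narrow contracts between them. The sketch below is hypothetical Python of my own, not any particular course framework; with a reference implementation of each phase in hand, students could build the phases in any order and plug their versions in as they go.

```python
# Sketch of a compiler as a pipeline of independently replaceable phases.
# Source text -> tokens -> AST -> stack-machine code, for + and * expressions.

import re

def lex(source):
    """Source text -> list of tokens."""
    return re.findall(r"\d+|[+*()]", source)

def parse(tokens):
    """Tokens -> AST as nested tuples, with * binding tighter than +."""
    def expr(i):
        node, i = term(i)
        while i < len(tokens) and tokens[i] == "+":
            right, i = term(i + 1)
            node = ("+", node, right)
        return node, i
    def term(i):
        node, i = factor(i)
        while i < len(tokens) and tokens[i] == "*":
            right, i = factor(i + 1)
            node = ("*", node, right)
        return node, i
    def factor(i):
        if tokens[i] == "(":
            node, i = expr(i + 1)
            return node, i + 1          # skip the ")"
        return ("num", int(tokens[i])), i + 1
    node, _ = expr(0)
    return node

def codegen(ast):
    """AST -> instructions for a little stack machine."""
    if ast[0] == "num":
        return [("push", ast[1])]
    op, left, right = ast
    return codegen(left) + codegen(right) + [(op,)]

def compile_expr(source):
    return codegen(parse(lex(source)))

# Each phase's output can be inspected -- and graded -- on its own.
print(compile_expr("2 + 3 * 4"))
```

Because each phase is just a function from one representation to the next, either teaching order works: going forward motivates each phase by the limits of the previous one, while going backward (codegen first, against reference ASTs) gives students a working back end from day one.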

I think I'll ask Dave for some pointers and see what a backwards compiler course might look like. And I'd still like to play more with the agile idea of growing a working compiler via several short iterations. (That sounds like an excellent research project for a student.)

Oh, and the quote at the top of this entry is from Kent's addendum to his forthcoming book, Implementation Patterns. I expect that this book will be part of my ongoing process of learning, too.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

October 05, 2007 4:45 PM

Fear and Loathing in the Computer Lab

I occasionally write about how students these days don't want to program. Not only don't they want to do it for a living, they don't even want to learn how. I have seen this manifested in a virtual disappearance of non-majors from our intro courses, and I have heard it expressed by many prospective CS majors, especially students interested in our networking and system administration majors.

First of all, let me clarify something. When I talk about students not wanting to program, one of my colleagues chafes, because he thinks I mean that this is an unchangeable condition of the universe. I don't. I think that the world could change in a way that kids grow up wanting to program again, the way some kids in my generation did. Furthermore, I think that we in computer science can and should help try to create this change. But the simple fact is that nearly all the students who come to the university these days do not want to write programs, or learn how to do so.

If you are interested in this issue, you should definitely read Mark Guzdial's blog. Actually, you should read it in any case -- it's quite good. But he has written passionately about this particular phenomenon on several occasions. I first read his ideas on this topic in last year's entry Students find programming distasteful, which described experiences with non-majors working in disciplines where computational modeling is essential to future advances.

This isn't about not liking programming as a job choice -- this isn't about avoiding facing a cubicle engaged in long, asocial hours hacking. This is about using programming as a useful tool in a non-CS course. It's unlikely that most of the students in the Physics class have even had any programming, and yet they're willing to drop a required course to avoid it.

In two recent posts [ 1 | 2 ], Mark speculates that the part of the problem involving CS majors may derive from our emphasis on software engineering principles, even early in the curriculum. One result is an impression that computer science is "serious":

We lead students to being able to create well-engineered code, not necessarily particularly interesting code.

One result of that result is that students speak of becoming a programmer as if this noble profession has its own chamber in one of the middle circles in Dante's hell.

I understand the need for treating software development seriously. We want the software we use and depend upon every day to work. We want much of it to work all the time. That sounds serious. Companies will hire our graduates, and they want the software that our graduates write to work -- all the time, or at least better than the software of their competitors. That sounds serious, too.

Mark points out that, while this requirement on our majors calls for students to master engineering practice, it does "not necessarily mesh with good science practice".

In general, code that is about great ideas is not typically neat and clean. Instead, the code for the great programs and for solving scientific problems is brilliant.

And -- here is the key -- our students want to be creative, not mundane.

Don't get me wrong here. I recently wrote on the software engineering metaphor as mythology, and now I am taking a position that could be viewed as blaming software engineering for the decline of computer science. I'm not. I do understand the realities of the world our grads will live in, and I do understand the need for serious software developers. I have supported our software engineering faculty and their curriculum proposals, including a new program in software testing. I even went to the wall for an unsuccessful curriculum proposal that created some bad political feelings with a sister institution.

I just don't want us to close the door to our students' desire to be brilliant. I don't want to close the door on what excites me about programming. And I don't want to present a face of computing that turns off students -- whether they might want to be computer scientists, or whether they will be the future scientists, economists, and humanists who use our medium to change the world in the ways of those disciplines.

Thinking cool ideas -- ideas that are cool to the thinker -- and making them happen is intellectually rewarding. Computer programming is a new medium that empowers people to realize their ideas in a way that has never been available to humankind before.

As Mark notes in his most recent article on this topic, realizing one's own designs also motivates students to want to learn, and to work to do it. We can use the power of our own discipline to motivate people to sample it, either taking what they need with them to other pastures or staying and helping us advance the discipline. But in so many ways we shoot ourselves in the foot:

Spending more time on comments, assertions, preconditions, and postconditions than on the code itself is an embarrassment to our field.

Amen, Brother Mark.

I need to do more to advance this vision. I'm moving slowly, but I'm on the track. And I'm following good leaders.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

October 03, 2007 5:24 PM

Walk the Wall, Seeger

Foley breaks down Mayo

There is a great scene toward the end of one of my favorite movies, An Officer and a Gentleman. The self-centered and childlike protagonist, Zach Mayo, has been broken down by Drill Instructor Foley. He is now maturing under Foley's tough hand. The basic training cohort is running the obstacle course for its last time. Mayo is en route to a course record, and his classmates are urging him on. But as he passes one of his classmates on the course, he suddenly stops. Casey Seeger has been struggling with the wall for the whole movie, and it looks like she still isn't going to make it. But if she doesn't, she won't graduate. Mayo sets aside his record and stands with Seeger, cheering her and coaching her over the wall. Ultimately, she makes it over -- barely -- and the whole class gathers to cheer as Mayo and Seeger finish the run together. This is one of the triumphant scenes of the film.

I thought of this scene while running mile repeats on the track this morning. Three young women in the ROTC program were on the track, with two helping the third run sprints. The two ran alongside their friend, coaxing her and helping her continue when she clearly wanted to stop. If I recall correctly from my sister's time in ROTC, morning PT (physical training) is a big challenge for many student candidates and, as in An Officer and a Gentleman, they must meet certain fitness thresholds in order to proceed with the program -- even if they are in non-combat roles, such as nurses.

It was refreshing to see that sort of teamwork, and friendship, among students on the track.

It is great when this happens in one of our classes. But when it does, it is generally an informal process that grows among students who were already friends when they came to class. It is not a part of our school culture, especially in computer science.

Some places, it is part of the culture. A professor here recently related a story from his time teaching in Taiwan. In his courses there, the students in the class identified a leader, and then they worked together to make sure that everyone in the class succeeded. This was something that students expected of themselves, not something the faculty required.

I have seen this sort of collectivism imposed from above by CS professors, particularly in project courses that require teamwork. In my experience, it rarely works well when foisted on students. The better students resent having their grade tied to a weaker student's, or a lazier one's. (Hey, it's all about the grade, right?) The weaker students resent being made someone else's burden. Maybe this is a symptom of the Rugged Individualism that defines the West, but working collectively is generally just not part of our culture.

And I understand how the students feel. When I found myself in situations like this as a student, I played along, because I did what my instructors asked me to do. And I could be helpful. But I don't think it ever felt natural to me; it was an external requirement.

Recently I found myself discussing pair programming in CS1 with a former student who now teaches for us. He is considering pairing students in the lab portion of his non-majors course. Even after a decade, he remembers (fondly, I think) working with a different student each week in my CS1 lab. But the lab constituted only a quarter of the course grade, and the lab exercises did not require long-term commitment to helping the weakest members of the class succeed. Even still, I had students express dissatisfaction at "wasting their time".

This is one of the things that I like about the agile software methods: they promote a culture of unity and of teamwork. Pair programming is one practice that supports this culture, but so are collective ownership, continuous integration, and coding standards. Some students and programmers, including some of the best, balk at being forced into a "team". Whatever the psychological, social, and political issues, and whatever my personal preferences as a programmer, there seems to be something attractive about a team working together to get better, both as a team and as individuals.

I wish the young women I saw this morning well. I hope they succeed, as a team and as individuals. They can make it over the wall.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

September 28, 2007 8:42 AM

Invent. Tell the Story.

I recently mentioned again Seth Godin's All Marketers are Liars in the context of teachers as liars. One last mention -- this time, for researchers and students.

As I read pages 29 and 30 of the book, I was struck by how much Godin's advice for marketers matches my experience as a researcher, first as a graduate student, then as a young faculty member, and now as a grizzled veteran. Consider:

There are only two things that separate success from failure in most organizations today:
  1. Invent stuff worth talking about.
  2. Tell stories about what you've invented.

That is the life of the academic researcher: invent cool stuff, and talk about the inventions. Some of my best professors were people who invented cool stuff and loved to talk about their inventions. They relished being in the lab, creating, and then crafting a story that shared their excitement. As a student, undergrad and grad alike, I was drawn to these profs, even when they worked in areas that didn't interest me much. When they did -- wow.

Many people get into research because we want to do #1, and #2 is just part of the deal. Whether the young researcher wants to or not, telling the stories is essential. It is how we spread our ideas and get the feedback that helps us to improve them. But on a more mercenary level it's also how we get folks interested in offering us tenure-track positions, and then offering us tenure.

Over the course of my career, I have come to realize how many people go into research because they want to do #2. As strange as it might sound, getting a Ph.D. is one of the more attractive routes to becoming a professional story-teller, because it is the de facto credential for teaching at universities. Sometimes these folks continue to invent cool stuff to talk about. But some ultimately fall away from the research game. They want to tell stories, but without the external pressure to do #1. Maybe they lose the drive to invent, or never really had it in the first place. These folks often become great teachers, too, whether as instructors at research schools or as faculty at so-called "teaching universities". Many of those folks still have a passion for something like #1, but it tends toward learning about the new stuff that others create, synthesizing it, and preparing it for a wider audience. Then they tell the stories to their students and to the general public.

As I've written before, CS needs its own popular story teller, working outside the classroom, to share the thrill... I don't think that has to be an active researcher -- think about the remarkable effect that Martin Gardner had on the world by sharing real math with us in ways that made us want to do mathematics -- and even computer science! But having someone who continues to invent be that person would work just fine. Thank you, Mr. Feynman.

So, to my grad students and to graduate students everywhere, this is my advice to you: Invent stuff worth talking about, and then tell stories about what you've invented.

But this advice is not just for graduate students. Consider this passage from Godin, which I also endorse wholeheartedly:

On a personal level, your resume should be about inventing remarkable things and telling stories that register--not about how good you are at meeting specs. Organizations that are going to be around tomorrow will be those that stop spending all their time dealing with the day-to-day crises of shipping stuff out the door or reacting to emergencies. Instead the new way of marketing will separate winners from losers.

This is where the excitement and future of computer science in industry lie, too. Students who can (only) meet specs are plentiful and not always all that valuable. The real value comes in creating and integrating ideas. This is advice that I've been sharing with entrepreneurially-minded students for a while, and I think as time goes by it will apply to more and more students. Paul Graham has spent a lot of time spreading this message, in articles such as What You'll Wish You'd Known, and I've written about Graham's message here as well. The future belongs to people who are asking questions, not people who can deliver answers to other peoples' questions.

So, this advice is not just for students. It is for everyone.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

September 26, 2007 6:42 PM

Updates, Courtesy of My Readers

I love to hear from readers who have enjoyed an article. Often, folks have links or passages to share from their own study of the same issue. Sometimes, I feel a need to share those links with everyone. Here are three, in blog-like reverse chronological order:

On Devil's Advocate for Types

Geoff Wozniak pointed me in the direction of Gilad Bracha's work on pluggable type systems. I had heard of this idea but not read much about it. Bracha argues that a type system should be a wrapper around the dynamically typed core of a program. This makes it possible to expose different views of a program to different readers, based on their needs and preferences. More thinking to do...

On Robert Lucky and Math Blues

Chris Johnson, a former student of mine, is also a fan of Bob Lucky's. As a graduate student in CS at Tennessee, though, he qualifies for a relatively inexpensive IEEE student membership and so can get his fix of Lucky each month in Spectrum. Chris took pity on his old prof and sent me a link to Lucky's Reflections on-line. Thank you, thank you! More reading to do...

On Would I Lie to You?

Seth Godin's thesis is that all good marketers "lie" because they tell a story tailored to their audience -- not "the truth, the whole truth, and nothing but the truth". I applied his thesis to CS professors and found it fitting.

As old OOPSLA friend and colleague Michael Berman reminds us, this is not a new idea:

Another noteworthy characteristic of this manual is that it doesn't always tell the truth.... The author feels that this technique of deliberate lying will actually make it easier for you to learn the ideas.

That passage was written by Donald Knuth in the preface to The TeXbook, pages vi-vii. Pretty good company to be in, I'd say, even if he is an admitted liar.

Keep those suggestions coming, folks!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

September 26, 2007 9:00 AM

Program, Teach, Sleep...

... choose any two.

In a recent blog entry, JRuby developer Charles Nutter claimed that, in general, "[g]ood authors do not have time to be good developers". This post has engendered a fair amount of discussion, but I'm not sure why. It shouldn't surprise anyone that staying on top of your game in one time-consuming discipline makes it hard, if not impossible, to stay on top of your game in a second time-consuming discipline. There are only so many hours in a day, and only so many brain cycles to spend learning and doing.

I face this trade-off, but between trying to be a good teacher and trying to be a good developer. Like authors, teachers are in a bind: To teach a topic well, we should do it well. To do it well takes time. But the time we spend learning and doing it well is time that we can't spend teaching well. The only chance we have to do both well is to spend most of our time doing only these two activities, but that can mean living a life out of balance.

If I had not run for 3-1/2 hours last Sunday, I suppose that I could have developed another example for my class or written some Ruby. But would I have been better off? Would my students? I don't know.

While I have considered the prospect of writing a textbook, I've not yet found a project that made me want to give up time from my teaching, my programming, my running, and my family. Like Nutter, I like to write code. This blog gives me an opportunity to write prose and to reach readers with my ideas, while leaving me the rest of the day to teach, and perhaps to help others learn to like to write code by my example.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

September 18, 2007 6:06 PM

A Great Feeling

A good friend sent me a note today that ended like this:

I feel like this has opened up a whole new world, a whole new way of thinking about programming. In fact, I haven't had such a feeling ... since my early days of first learning Pascal and the days of discovering data structures, for loops, recursion, etc...

I know this sensation and hope every person -- student and professional -- gets to feel it every so often. Students, seek it out by taking courses that promise more than just more of the same. Professionals, seek it out by learning a new kind of language, a new kind of web framework, a new kind of programming. This feeling is worth the effort, and makes your life better as a result.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

September 13, 2007 8:23 PM

How We Share Teaching Practice

... or When the Seventh Commandment Doesn't Apply

I have had Warren's Question, a paper by Sally Fincher and Josh Tenenberg, in my to-read/ folder for a while, since soon after it hit the ICER 2007 conference web site. (The paper will be presented on Saturday.) I grabbed it on my way to get a sandwich yesterday, for a nice lunchtime read. I am glad I did, and that surprises me. I didn't expect to enjoy it as much as I did.

Why wouldn't I enjoy it? Well, it's a CS education paper. That sounds wrong, I know, but "education papers" usually turn me off before I can get to the value. Section titles such as "The political context of self-disclosure" make my technical self cringe. I am sympathetic to the goals of CS education research, but the jargon and methodological baggage usually overwhelm me.

By skimming through the paper I was able to find the basic story, and it drew me in, and I soon realized that the paper makes some good points and raises some interesting questions.

The story of Warren's question is one of teachers sharing their practical knowledge. Fincher and Tenenberg deconstruct several messages on a mailing list of CS instructors, starting with a couple of questions from Warren seeking help on teaching interactively and on fixing a broken lab grading procedure. He receives answers off-list to some questions, and some answers on-list that answer obliquely but offer advice of a more general nature.

This is how most teaching knowledge is shared: one-on-one, or in small groups of friends. Many people refer to this style of exchange as stealing. There is almost a badge of honor among some of my friends and colleagues to steal from as many accomplished, creative teachers as possible. I steal shamelessly from people such as Owen Astrachan, Robert Duvall, Joe Bergin, and Mike Clancy, and I've heard from others that they have stolen ideas from me. Good ideas spread through networks of friends and colleagues, sometimes jumping into new pools through journal publications and conference presentations.

Fincher and Tenenberg point out that this means of sharing knowledge has several shortcomings. It's not that stealing is wrong, or that most folks mind that their ideas have been lifted. I'm happy to hear that someone has found one of my ideas or techniques so useful that he has adapted it for his own use. My friends and I share ideas with the expectation that others might find them useful -- that's the point.

But informal sharing of this sort results in a loss of provenance. In the arts, a work's provenance is the record of the work's history, in particular its ownership. This "audit trail" offers some measure of confidence in the authenticity of the work, which reduces the risk of being defrauded in exchange. When it comes to teaching practice, provenance isn't really about the risk of being sold a fake; it's more about trusting the efficacy of the technique in practice:

If we do not know the history of the practice we examine, then we take it as if new. We cannot tell whether this is long-established and well-evolved, worked on by respected educators over time, or whether it was fresh-minted yesterday. Not only that, but we cannot know why any adaptations, or changes, have been made.

If I knew a technique originated with Owen Astrachan, has been adapted by Nick Parlante for use in an object-oriented course, and has been "ported" to interactive classes by Robert Duvall, then I'll feel a lot more confident about using the technique in my interactive OOP classroom than if the technique comes to me blind, from a source I don't know -- and so, out of safety, trust less. In a network of friends, provenance is part of the group's collective memory. But groups evolve over time, and ideas leak out into groups who do not share the memory.

This loss of provenance leads to a "rootlessness and reinvention of practice" that hampers the improvement of teaching everywhere. We all keep reinventing the wheel.

How does this differ from the research side of academia? In the research culture, stealing from others is strictly forbidden. One must cite the work of others and acknowledge the source of ideas. This plays several important roles in the culture, among them giving credit for the development of theories, authenticating the trail of ideas, differentiating work done in different contexts, and evaluating the efficacy of theories.

When I speak of lifting ideas as standard practice, I am exaggerating a bit. Stealing isn't really okay. Whenever I see someone using an idea I think I developed, I feel violated, much as I do when someone steals my treasure. When we appropriate an idea, we should share credit. I may steal shamelessly, but I tell everyone where I got the candy. But when we share credit informally, we still lose the provenance of the idea. Publication and citing the work of others is still the primary way to document teaching practice formally. When there is no archival publication to cite, the author becomes the first line of history and has to document her sources in the new paper.

Documenting teaching practice, beyond documenting research results, would help us to teach computer science better, and that would help us to improve computer science and help the CS community serve the world better. This is one of the insights of the software patterns community: that documenting software development practice from the trenches can improve the state of software development. The writers' workshops of PLoP both help to share practice and to help practitioners communicate their practice more effectively.

The patterns culture came to the teaching world in the form of pedagogical patterns, which draw their inspiration from software patterns but aim to record the practice of experienced instructors. This movement started in the domains of software development and computer science, but that is an artifact of history. The techniques so far documented as pedagogical patterns are more general.

The CS education community has non-pattern venues for sharing teaching practice. In addition to refereed papers, SIGCSE has for several years offered nifty assignments sessions that open what once was a closed network of colleagues to the broader community, and that now offer a platform to a wider set of instructors. Likewise, OOPSLA once hosted workshops on teaching OO design and has recently offered a series of "killer examples" workshops.

Fincher and Tenenberg argue that, even if we had a more complete literature of teaching practice, the task of finding what one needs would be difficult. It is hard to specify all of the variables in play within most teaching scenarios, which makes it hard to index resources and then traverse the literature. In the language of patterns, teachers have a hard time characterizing the problem that a technique solves and the context in which the technique applies. I experienced this difficulty when trying to write pedagogical patterns, and observed it when I read pedagogical patterns written by others. How does one define the context and problem concretely enough to be useful? Without such boundaries, a pattern has more the character of a sweet sentiment than a teaching technique.

Researchers engage in abstraction and instantiation. So do teachers. They tell stories. Together, the story teller and the hearer generalize the story out of the teller's context and adapt it to work in other contexts.

I gain from being a part of networks of colleagues who discuss and share practice. I'm on a couple of mailing lists like the one described in "Warren's Question", and they all enrich my teaching, as well as my appreciation for computer science. I play a slightly different role on each list and enjoy a different status. On one list in particular, I am a peripheral member of a core group of friends who are master CS teachers. Over the years, I have become friends with several of them through various ways and thus came to be attached to the fringe of the group. On another, I am a core member and a leader. Whatever my role, I find that I learn much from my interactions with the group, as we share ideas and their results.

That's one of the reasons I like to blog. It gives me a venue in which to tell an occasional story about what I do and how it turns out in the trenches with my students. Many of those ideas are "stolen" -- from friends one-on-one, from other blogs, and even from other contexts. For example, I love to make connections between software development and teaching, and between software development and running. I like to look for ways to improve how I teach from writing, other arts, running and other sports, almost any source. But my blog gives me a way to give credit to the source of my ideas -- colleagues, authors, and whole other disciplines. A blog may not be as formal as archival publication, or as highly respected, but it does allow a more immediate exchange of ideas and a far wider conversation than I could ever have with just my friends.

As usual for me, reading "Warren's Question" is just a starting point. I have long known about Donald Schon's books on the reflective practitioner and heard how good they are. But I must confess... I have never read Schon. Given the applicability of his ideas to so many ideas that have occupied me over the years -- apprenticeship and studio instruction among them -- I really should be embarrassed. I was really intrigued by Fincher and Tenenberg's discussion of Schon's "hall of mirrors" and how one's ideas are reflected back when adopted and modified by others, and so I think I'll be making a short trip to the library soon!


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 05, 2007 7:48 PM

Math Blues

I used to be a member of the IEEE Computer Society. A few years ago, a combination of factors (including rising dues, a desire to cut back on paper journal subscriptions, and a lack of time to read all the journals I was receiving) led me to drop my membership. In some ways, I miss receiving the institute's flagship publication, Spectrum. It was aimed more at "real" engineers than software types, but it was a great source of general information across a variety of engineering disciplines. My favorite column in Spectrum was Robert Lucky's "Reflections". It is written in a blog-like fashion, covering whatever neat ideas he has been thinking about lately in a conversational tone.

For some reason, this week I received a complimentary issue of Spectrum, and I immediately turned to "Reflections", which graced the last page. In this installment, Lucky writes about how math is disappearing from the practice of engineering, and how this makes him sad. In the old days engineers did more of their own math, while these days they tend to focus on creating and using software to do those computations for them. But he misses the math, both doing it and thinking about it. He had come to appreciate the beauty in the mathematics that underlies his corner of engineering, and now it is "as if my profession had slipped away while I wasn't looking". Thus his title, "Math Blues".

I appreciate how he must feel, because a lot of what used to be "fundamental" in computer science seems almost quaint these days. I especially feel for folks who seem more attached to the old fundamentals, because today's world must seem utterly foreign to them. Of course, we can always try to keep our curriculum focused on those fundamentals, though students sometimes realize that we are living in the past.

I felt some math blues as I read Lucky's column, too, but of a different sort. Here is the passage that made me saddest:

I remember well the day in high school algebra class when I was first introduced to imaginary numbers. The teacher said that because the square root of a negative number didn't actually exist, it was called imaginary. That bothered me a lot. I asked, If it didn't exist, why give it a name and study it? Unfortunately, the teacher had no answers for these questions.

What great questions young Bob asked, and what an opportunity for his teacher to open the student to a world of possibilities. But it was a missed opportunity. Maybe the teacher just did not know how to explain such an advanced idea in a way that his young student could grasp. But I think it is likely that the teacher didn't understand, appreciate, or perhaps even know about that world.
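What might that teacher have said? One classic answer -- my own aside here, not anything from Lucky's column -- is that imaginary numbers earn their keep even when both the question and the answer are real. Bombelli's cubic is the standard illustration:

```latex
% A real cubic equation with the perfectly real root x = 4.
% Cardano's formula nevertheless forces a detour through the
% square root of a negative number:
\[
  x^3 = 15x + 4
  \quad\Longrightarrow\quad
  x = \sqrt[3]{2 + \sqrt{-121}} + \sqrt[3]{2 - \sqrt{-121}}
\]
% Treating \sqrt{-121} = 11i as a legitimate object, one finds
% (2 + i)^3 = 2 + 11i and (2 - i)^3 = 2 - 11i, so
\[
  x = (2 + i) + (2 - i) = 4,
\]
% a real root of a real equation, reachable only through the
% "imaginary" numbers.
```

The "nonexistent" numbers turn out to be the only road between two real places -- exactly the kind of deep and beautiful idea a curious student deserves to hear.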

Maybe you don't have to be a mathematician, engineer, or scientist to be able to teach math well. But one thing is for certain: not knowing mathematics at the level those professionals need creates a potential for shallowness that is hard to overcome. How much more attractive would a college major in science, math, or engineering look to high school students if they encountered deep and beautiful ideas in their courses -- even ideas that matter when we try to solve real problems?

Outreach from the university into our school systems can help. Many teachers want to do more and just need more time and other resources to make it happen. I think, though, that a systemic change in how we run our schools and in what we expect of our teaching candidates would go even farther.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

August 31, 2007 5:39 PM

Good Writing, Good Programming

This week I ran across a link to an old essay called Good Writing, by Marc Raibert. By now I shouldn't be so happy to be reminded how much good programming practice is similar to good writing in general. But I usually am. The details of Raibert's essay are less important to me than some of his big themes.

Raibert starts with something that people often forget. To write well, one must ordinarily want to write well and believe that one can. Anyone who teaches introductory computer science knows how critical motivation and confidence are. Many innovations in CS1 instruction over the years have been aimed at helping students to develop confidence in the face of what appear to be daunting obstacles, such as syntax, rigor, and formalism. Much wailing and gnashing of teeth has followed the slowly dawning realization that students these days are much less motivated to write computer programs than they have been over most of the last thirty years. Again, many innovations and proposals in recent years have aimed at motivating students -- more engaging problems, media computation, context in the sciences and social sciences, and so on. These efforts to increase motivation and confidence are corporate efforts, but Raibert reminds us that, ultimately, the individual who would be a writer must hold these beliefs.

After stating these preconditions, Raibert offers several pieces of advice that apply directly to computing. Not surprisingly, my favorite was his first: Good writing is bad writing that was rewritten. This fits nicely in the agile developer's playbook. I think that few developers or CS teachers are willing to say that it's okay to write bad code and then rewrite. Usually, when folks speak in terms of do the simplest thing that will work and refactor mercilessly, they do not mean to imply that the initial code was bad, only that it doesn't worry inordinately about the future. But one of the primary triggers for refactoring is the sort of duplication that occurs when we do the simplest thing that will work without regard for the big picture of the program. Most will agree that most such duplication is a bad thing. In these cases, refactoring takes a worse program and creates a better one.

Allowing ourselves to write bad code empowers us, just as it empowers writers of text. We need not worry about writing the perfect program, which frees us to write code that just works. Then, after it does, we can worry about making the program better, both structurally and stylistically. But we can do so with the confidence that comes from knowing that the substance of our program is on track.

Of course, starting out with the freedom to write bad code obligates us to re-write, to refactor, just as it obligates writers of text to re-write. Take the time! That's how we produce good code reliably: write and re-write.

I wish more of my freshmen would heed this advice:

The first implication is that when you start a new [program], there is nothing wrong with using bad writing. Your goal when you start is to get your ideas down on paper in any form you can.

For the novice programmer, I do not recommend writing ungrammatical or "stream of consciousness" code, but I do encourage them to take the ideas they have after thinking about the problem and express them in code. The best way to find out if an idea is a good one is to see it run in code.

Raibert's other advice also applies. When I read Spill the beans fast, I think of making my code transparent. Don't hide its essence in subterfuge that makes me seem clever; push its essence out where all can see it and understand the code. Many of the programmers whose code I respect most, such as Ward Cunningham, write code that is clear, concise, and not at all clever. That's part of what makes it beautiful.

Don't get attached to your prose is essential when writing prose, and I think it applies to code as well. Just because you wrote a great little method or class yesterday doesn't mean that it should survive in your program of tomorrow. While programming, you discover more about your problem and solution than you knew yesterday. I love Raibert's idea of a PRIZE_WINNING_STUFF.TXT file. I have a directory labeled playground/ where I place all the little snippets I've built as simple learning experiments, and now I think I need to create a winners/ directory right next to it!

Raibert closes with the advice to get feedback and then to trust your readers. A couple of months back I had a couple of entries on learning from critics, with different perspectives from Jerry Weinberg and Alistair Cockburn. That discussion was about text, not code (at least on the surface). But one thing that we in computer science need to do is to develop a stronger culture of peer review of our code. The lack of one is one of the things that most differentiates us from other engineering disciplines, to which many in computing look for inspiration. I myself look more to the creative disciplines for inspiration than to engineering, but on this the creative and engineering disciplines agree: getting feedback, using it to improve the work, and using it to improve the person who made the work are essential. I think that finding ways to work the evaluation of code into computer science courses, from intro courses to senior project courses, is a path to improving CS education.

PLoP 2007

This last bit of advice from Raibert is also timely, if bittersweet for me... In just a few days, PLoP 2007 begins. I am quite bummed that I won't be able to attend this year, due to work and teaching obligations. PLoP is still one of my favorite conferences. Patterns and writers' workshops are both close to my heart and my professional mind. If you have never written a software pattern or been to PLoP, you really should try both. You'll grow, if only when you learn how a culture of review and trust can change how you think about writing.

OOPSLA 2007

The good news for me is that OOPSLA 2007, which I will be attending, features a Mini-PLoP. This track consists of a pattern writing bootcamp, a writers' workshop for papers in software development, and a follow-up poster. I hope to attend the writers' workshop session on Monday, even if only as a non-author who pledges to read papers and provide feedback to authors. It's no substitute for PLoP, but it's one way to get the feeling.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

August 22, 2007 8:08 PM

Seeing and Getting the Power

After six months of increased blogging output, if not blog quality, August has hit me head on. Preparing for vacation. Vacation. Digging out of backlog from vacation. Getting ready for fall semester. The start of the semester.

At least now I have some time and reason to spend energy on my compilers course!

The algorithm geeks among you might enjoy this frequently-referenced and very cool demo of a graphics technique called Content-Aware Image Resizing, presented by Ariel Shamir at SIGGRAPH earlier this month (in San Diego just before I arrived for the above-mentioned vacation). I think that this will make a neat example for my algorithms course the next time I teach it. (It's on tap for next spring.)

Algorithmically, the idea is cool but seems straightforward enough. To me, what is most impressive is having the idea in the first place. One of the things I love about following research in other areas of computer science -- and even in areas far beyond CS -- is seeing how thinkers who understand a domain deeply create and apply new ideas. This particular idea will be so useful on the web and in media of the future.
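For the curious, the algorithmic core can be sketched in a few lines. This is my own minimal reconstruction of the dynamic program at the heart of the technique, not code from the SIGGRAPH presentation; it operates on a small grid of precomputed "energy" values (in the real system, energy comes from image gradients) and finds a minimum-energy vertical seam to remove:

```python
def min_vertical_seam(energy):
    """Return the column index in each row of the minimum-energy
    vertical seam, found by dynamic programming over the grid."""
    rows, cols = len(energy), len(energy[0])
    # cost[r][c] = minimal total energy of any seam ending at (r, c)
    cost = [row[:] for row in energy]
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(c - 1, 0), min(c + 1, cols - 1)
            # a seam may continue from the cell above or its neighbors
            cost[r][c] += min(cost[r - 1][lo:hi + 1])
    # backtrack from the cheapest cell in the bottom row
    seam = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(c - 1, 0), min(c + 1, cols - 1)
        seam.append(min(range(lo, hi + 1), key=lambda c2: cost[r][c2]))
    seam.reverse()
    return seam

def remove_seam(grid, seam):
    """Delete one pixel per row along the seam, shrinking width by 1."""
    return [row[:seam[r]] + row[seam[r] + 1:] for r, row in enumerate(grid)]
```

Removing one seam narrows the image by a single pixel; repeating the process resizes the image while routing deletions around high-energy, visually important regions -- which is exactly what makes the demo so striking.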

The guy who sent the link said it well:

SIGGRAPH has a way of reminding me that in many ways I'm the dumb kid at the back of the classroom.

I feel the same way sometimes, but I usually don't mind. There is much to learn.

Still, I hope that by the end of this semester my compilers students don't feel this way about compilers. I'd like them to appreciate that a compiler is "just another program" -- that they can learn techniques which make building a compiler straightforward and even fun. What looks like magic becomes understanding, without losing its magical aura.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

August 16, 2007 3:51 PM

Helping Developers Feel the Win of Agile Practices

Sometimes the best way to help someone learn a new habit is to let him or her feel the action happening, or not happening, in a new way. Sometimes, the action seems more natural when this feeling strikes a new chord.

I proposed an example of this approach many posts ago, in an entry called I Feel Good. I Feel Great. I Feel Wonderful. It reported an agile development fantasy I had after watching the Bill Murray flick What About Bob?. In my fantasy, I might use Dr. Leo Marvin's "Death Therapy" in an agile development scenario: Walk in one morning unannounced, and pull the plug on the project. An agile team should have something pretty reasonable to deliver. But would my students stone me before I could exit the room?

I managed to catch the beginning of a thread on the XP mailing list for once, a thread titled Your favorite teaching tricks?, launched by William Pietri. Unfortunately, this thread lasted only two messages, but both were valuable.

Pietri tells a story that displays one of his tricks for getting developers to write tests:

Friday night, I hung out with a pal who has been learning TDD. He is naturally full of questions, and one exchange went like this:

"If I have tested all the low-level methods, I don't need to test the components together, do I?"

"Did you run it to see if it worked?"

"Sure! I tried a few different options, and carefully looked at the output to be sure it was ok."

"If you had to check it, then you weren't sure it worked. Ergo, you should have written a test."

"Doh!"

Most developers already test in this way, writing simple little main programs that exercise their code. (Heaven help those who don't do at least this much -- and the people who use their code!) Why not automate that test? It's a small step from a handcrafted main to an xUnit test. And in my experience, that little main grows into an unmanageable mess. The xUnit test can grow gracefully into a suite of tests. (That doesn't mean you can't create an unmanageable mess, but at least we have a good book to help us get things right.) And from writing our post facto tests in xUnit, it's a somewhat larger but still small step to writing the test first.
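As a concrete illustration -- my own toy example, not one from the thread -- here is the handcrafted "little main" style of checking, followed by the same check recast as an xUnit-style test:

```python
import unittest

def word_count(text):
    """The code under test -- a stand-in for any small function."""
    return len(text.split())

# The handcrafted check: run it, eyeball the output, hope it's right.
def handcrafted_main():
    print(word_count("the quick brown fox"))   # prints 4... looks ok?

# The same check, one small step later, as an automated xUnit test.
# Now the computer does the checking, and the suite can grow gracefully.
class WordCountTest(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("the quick brown fox"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)
```

Run it with `python -m unittest` (or the xUnit runner for your language of choice). The point is only that the ad-hoc check and the automated test are nearly the same code -- the step really is small.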

J. B. Rainsberger followed up with a trick of his own, for working with customers:

I have one to help customers describe customer tests. If they seem not to want to do it, or have trouble, then when they ask me to build the feature, I immediately say "Done!"

They look puzzled. "You are not."

"Sure I am. Done!"

"You haven't done anything!"

"How do you know?"

The next thing out of their mouth is usually something we can easily turn into a customer test.

Very nice. I can't wait for one of my students to pull this on me for one of my underspecified assignments! Rainsberger does warn that this trick works best "only if you have a good personal relationship with the person acting as customer". So perhaps only those students who have developed a relationship of trust with me will want to venture this.

Of all the runaway threads on the XP list, this is one that I would like to have seen run longer. I'd love to hear others share some of their tricks at helping developers become more agile, whether in testing, pairing, refactoring, or any other practice.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 14, 2007 7:54 PM

Would I Lie to You?

On the plane back from beautiful southern California, I read Seth Godin's 2005 book All Marketers Are Liars. A few months ago, I wrote a short essay in which I considered the role of persuasion in teaching, and using what we know about how the brain works to reach students. Viewed cynically, this becomes more marketing than substance. Godin confronts this sort of cynicism about the business of marketing head on and refuses to shrink from connotation. To him, marketing is not hype and spam and telemarketers reading from scripts; those are mere bastardizations of a universal truth about buying and selling and stories. This book presents Godin's theory and practice of effective and responsible marketing. He accepts the verb "to lie", with a twist from how we usually apply it to marketing.

As I wrote in that earlier essay, and in several more since on stories, I believe something that Bruce Schneier and Kathy Sierra have said, that it's okay -- even wise -- to use what we know about how people think to help us change what people think. Sierra boldly suggests that this applies to what teachers do -- even teachers of computer science. So Godin's book left me wondering...

If some of teaching is marketing, and some of marketing is lying, is some of teaching lying?

Yes, I think it is, in the sense that Godin uses the term.

A teacher is telling a story. For this story to be effective, it must be one that the listener already believes, or at least almost believes. A good teacher meets her students where they are, working within their existing world view, starting with what they already know and believe. She works to help them to "buy into" the story, to actively participate in the story, to be complicit in telling the story to themselves.

Godin goes farther. We all hope that teachers rise above the stereotypical used car salesman by speaking truth. Godin tells us that, while the best marketers may not speak the whole truth and nothing but the truth all of the time, even they must be authentic. They must live lives congruent with the story they tell; they must believe it; they must be consistent in their message. We sometimes call this "practicing what you preach". I think that a teacher must tell an authentic story, which includes practicing what we teach.

There is, of course, another sense in which teachers are liars. Computer science is a wide and deep discipline. At the beginning of their studies, most students are not ready to hear or understand the full breadth and depth of computing. So we simplify the story, telling just the part of the truth that students are ready to understand, laying the groundwork that prepares them to go wider and deeper. Sometimes, we have to oversimplify, telling a story that is not strictly true, so that students will learn what they need to know now, preparing them for a more accurate story later.

Anyone who has taught any real programming language to CS1 students -- C++ and Java have been the most popular over the last 10-12 years -- knows what I mean. If I try to tell a 100% accurate story about Java constructors or access modifiers to my CS1 students, most will learn nothing at all. The story is too complex for their world view of programs. So I tell a circumscribed, simplified, and technically inaccurate story, as a means to help them learn.

We call these stories "simplifications", maybe even "oversimplifications", or more grandiosely "abstractions" -- but rarely do we call them lies. Yet they are incomplete stories that often fail when they are examined closely or stretched beyond their limits. They serve a purpose, and soon the student is ready for a more complete story, which we then tell. The student's world view grows. What she knows and believes comes a bit more into line with what I (and perhaps the textbook author) know and believe about the world, which usually reflects what the community of computer science scholars believes to be the most accurate story we can tell.

Of course, I don't want to leave students only with my view of the world; I want to help them develop the faculties they need to create their own viewpoint, to analyze new data and new ideas critically. But that is a long-term project, not something that is likely to happen in their earliest courses.

Note that this way of thinking makes the teacher's own authenticity even more important. Students soon realize that they are learning simplifications and abstractions. When we are authentic and consistent, when we practice what we teach, students can trust us and believe that our simplifications are worthwhile stepping stones. Then our stream of simplifications looks like unfolding truth, not just convenient falsehoods meant to fill class time.

Godin doesn't directly address instructors, but many of his lessons sound as if he knows our plight. Consider:

Some marketers focus so hard on the facts of their offering that they forget to tell a story at all, and then wonder why they fail.

And:

... there's a lot of teaching for marketers to do. Alas, there is no time to do it. [There are too many facts to communicate, and too many competitors for your audience's attention.] As a result, people pick and choose. Everyone will not listen to everything.

All Marketers Are Liars is another short and sweet book, under 180 pages. It is better written than Let's Kill Dick and Jane. Godin practices what he teaches, and that makes for a good story.

Or is that just the lie I tell myself after Godin tells me his story?


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

August 08, 2007 3:40 PM

Let's Kill Dick and Jane

No, I've not become homicidal. That is the title of a recent book about the Open Court Publishing Company, which according to its subtitle "fought the culture of American education" by trying to change how our schools teach reading and mathematics. Blouke Carus, the creator of Open Court's reading program, sought to achieve an enviable goal -- engagement with and success in the world of great ideas for all students -- in a way that was beholden to neither the traditionalist "back to basics" agenda nor the progressivist "child-centered" agenda. Since then, the reading series has been sold to McGraw-Hill.

Thanks to the creator of the TeachScheme! project, Matthias Felleisen, I can add this book to my list of millions. He calls Let's Kill Dick and Jane "TeachScheme! writ large". Certainly there are differences between the K-8 education culture and the university computer science culture, but they share enough commonalities to make reform efforts similarly difficult to execute. As I have noted before, universities and their faculty are a remarkably conservative lot. TeachScheme! is a principled, comprehensive redefinition of introductory programming education. In arguing for his formulation, Felleisen goes so far as to explain why Structure and Interpretation of Computer Programs -- touted by many, including me, as the best CS book ever written -- is not suitable for CS 1. (As much as I like SICP, Matthias is right.)

But TeachScheme! has not succeeded in the grand way its implementors might have hoped, for many of the reasons that Open Court's efforts have come up short of its founders' goals. Some of the reasons are cultural, some are historical, and some are probably strategic.

The story of Open Court is of immediate interest to me because of our state's effort to change K-12 math and science education in a fundamental way, a reform effort in which my university plays a leading role and in which my department and I have a direct stake. We believe in the need for more and better computer scientists and software developers, but university CS enrollments remain sluggish. Students who are turned off to science, math, and intellectual ideas in grade school aren't likely to select CS as a major in college... Besides, like Carus, I have a great interest in raising the level of science and math understanding across the whole population.

This book taught me a lot about what I had understood only incompletely as an observer of our education system. And I appreciated that it avoided the typical -- and wrong -- conservative/liberal dichotomy between the traditional and progressive approaches. America's education culture is a beast all its own, essentially anti-intellectual and exhibiting an inertia born of expectations, habit, and a lack of will and time to change. Changing the system will take a much more sophisticated and patient approach than most people usually contemplate.

Though I have never developed a complete curriculum for CS 1 as Felleisen has, I have long aspired to teaching intro CS in more holistic way, integrating the learning of essential tools with higher-level design skills, built on the concept of a pattern language. So Open Court's goals, methods, and results all intrigue me. Here are some of the ideas that caught my attention as I read the story of Open Court:

  • On variety in approach:

    A teacher must dare to be different! She must pull away from monotonous repetition of, 'Today we are going to write a story.' Most children view that announcement with exactly what it deserves, and there are few teachers who are not aware of what the reactions are.

    s/story/program/g and s/children/students/g to get a truth for CS instructors such as me.

  • A student is motivated to learn when her activities scratch her own itch.

  • Teaching techniques that may well transfer to my programming classroom: sentence lifting, writing for a real audience, proofreading and revising programs, and reading good literature from the beginning.

  • On a curriculum as a system of instruction:

    The quality of the Open Court program was a substantive strength and a marketing weakness. It required teachers to be conversant with a variety of methods. And the program worked best when used as a system... Teachers accustomed to trying a little of this and a little of that were likely to be put off by an approach that did not lend itself to tinkering.

    I guess I'm not the only person who has trouble sticking to the textbook. To be fair to us tinkerers, systematic integrated instructional design is so rare as to make tinkering a reasonable default stance.

  • On the real problem with new ideas for instruction:

    Thus Open Court's usual problem was not that it contradicted teachers' ideology, but that it violated their routine.

    Old habits die hard, if at all.

  • In the study of how children learn to comprehend text, schema theory embodied the "idea that people understand what they read by fitting it into patterns they already know". I believe that this is largely true for learning to understand programs, too.

  • Exercising existing skills does not constitute teaching.

  • Quoting a paper by reading scholars Bereiter and Scardamalia, the best alternative model for a school is

    ... the learned professions ...[in which] adaptation to the expectations of one's peers requires continual growth in knowledge and competence.

    In the professions, we focus not only on level of knowledge but also on the process of continuously getting better.

  • When we despair of revolutionary change:

    There are circumstances in which it is best to package our revolutionary aspirations in harmless-looking exteriors.... We should swallow our pride and let [teachers] think we are bland and acceptable. We should fool them and play to their complacent pieties, But we should never for a moment fool ourselves.

    Be careful what you pretend to be.

  • Too often, radical new techniques are converted into "subject matter" to be added to the curriculum. But in that form they usually become overgeneralized rules that fail too often to be compelling. More insidious is that this "makes it possible to incorporate new elements into a curriculum without changing the basic goals and strategies of the curriculum". Voilà -- change with no change.

    Then we obsess about covering all the material, made harder by the inclusion of all this new material.

    I've seen this happen to software patterns in CS classrooms. It was sad. I usually think that we elementary patterns folks haven't done a good enough job yet, but Open Court's and TeachScheme!'s experiences do not encourage me to think that it will be easy to do well enough.

  • On the value of a broad education: This book tells of how the people at Open Court who were focused exclusively on curriculum found themselves falling prey to invalid plans from marketers.

    In an academic metaphor, [Open Court's] people had been the liberal-arts students looking down on the business school. But now that they needed business-school expertise, they were unable to judge it critically.

    Maybe a so-called "liberal education" isn't broad enough if it leaves the recipient unable to think deeply in an essential new context. In today's world, both business and technology are essential components of a broadly applicable education.

  • On the need for designing custom literature suitable for instruction of complete novices:

    Teaching any skill requires repetition, and few great works of literature concentrate on long "e".

    My greatest inspiration in this vein is the Suzuki literature developed for teaching violin and later piano and other instruments. I've experienced the piano literature first hand and watched my daughters work deep into the violin literature. At the outset, both use existing works (folk tunes, primarily) whenever appropriate, but they also include carefully designed pieces that echo the great literature we aspire to for our students. As the student develops technical skill, the teaching literature moves wholly into the realm of works with connection to the broader culture. My daughters now play real violin compositions from real composers.

  • But designing a literature and curriculum for students is not enough. Ordinary teachers can implement more aggressive and challenging curricula only with help: help learning the new ideas, help implementing the ideas in classroom settings, and help breaking the habits and structures of standing practice. Let's Kill Dick and Jane is full of calls from teachers for more help in understanding Open Court's new ideas and in implementing them well in the face of teachers' lack of experience and understanding. Despite its best attempts, Open Court never quite did this well enough.

    TeachScheme! is a worthy model in this regard. It has worked hard to train teachers in its approach, to provide tangible support, and to build a community of teachers.

    In my own work on elementary patterns, my colleague and friend Robert Duvall continually reminds us all of the need for providing practical support to instructors who might adopt our ideas -- if only they had the instructional tools to make a big change in how they teach introductory programming.

  • ... which leads me to a closing observation. One of the big lessons I take from the book is that to effect change, one must understand and confront issues that exist in the world, not the issues that exist only in our idealized images of the world or in what are basically the political tracts of activists.

    In an odd way, this reminds me of Drawing on the Right Side of the Brain, with its precept that we should draw what we see, not what we think we see.

Let's Kill Dick and Jane is a slim volume, a mere 155 pages, and easy to read. It's not perfect, neither in its writing nor in its analysis, but it tells an important story well. I recommend it to anyone with aspirations of changing how we teach computer science to students in the university or high school. I also recommend it to anyone who is at all interested in the quality of our educational system. In a democracy such as the U.S., that should be everyone.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

July 19, 2007 9:27 AM

Copying the Masters

Ludwig van Beethoven, on copying Beethoven, in Copying Beethoven:

The world doesn't need another Beethoven. But it may need you.

Whether you are a composer copying Beethoven, a programmer copying Ward, or a pattern writer copying Alexander, learn from the masters, and then be yourself.

(On the movie itself: It is a highly fictionalized account of the last year of Beethoven's life. It is a good story that could have been a better movie, but Ed Harris is convincing as the master. Worth a couple of hours.)


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

July 17, 2007 8:17 PM

Mathematics, Problems, and Teaching

I'm surprised by how often I encounter the same topic in two different locations on the same day. The sources may be from disparate times, but they show up on my radar nearly coincidentally. Coincidence, perhaps.

Yesterday I ran across a link to an old article at Edge called What Kind of a Thing is a Number?. This is an interview with Reuben Hersh, a mathematician with a "cultural" view of mathematics. More on that notion later, but what caught my eye was Hersh's idea of how to teach math right:

A good math teacher starts with examples. He first asks the question and then gives the answer, instead of giving the answer without mentioning what the question was. He is alert to the body language and eye movements of the class. If they start rolling their eyes or leaning back, he will stop his proof or his calculation and force them somehow to respond, even to say "I don't get it." No math class is totally bad if the students are speaking up. And no math lecture is really good, no matter how beautiful, if it lets the audience become simply passive. Some of this applies to any kind of teaching, but math unfortunately is conducive to bad teaching.

Computer science isn't mathematics, but it, too, seems conducive to a style of teaching in which students are disengaged from the material. Telling students how to write a program is usually not all that enlightening for students; they need to do it to understand. Showing students how to write a program may be a step forward, because at least then they see a process in action and may have the gumption to ask "what?" and "why?" at the moments they don't understand. But I find that students often tune out when I demonstrate for too long how to do something. It's too easy for me to run ahead of what they know and can do, and besides, how I do something may not click in the right way with how they think and do.

The key for Hersh is "interaction, communication". But this creates a new sort of demand on instructors: they have to be willing to shut up, maybe for a while. This is uncomfortable for most faculty, who learned in classrooms where professors spoke and students took notes. Hersh tells a story in which he had to wait and wait, and then sit down and wait some more.

It turned out to be a very good class. The key was that I was willing to shut up. The easy thing, which I had done hundreds of times, would have been to say, "Okay, I'll show it to you." That's perhaps the biggest difficulty for most, nearly all, teachers--not to talk so much. Be quiet. Don't think the world's coming to an end if there's silence for two or three minutes.

This strategy presumes that students have something to say, and just need encouragement and time to say it. In a course like Hersh's, on problem solving for teachers, every student has a strategy for solving problems, and if the instructor's goal is to bring out into the open different strategies in order to talk about them, that works great. But what about, say, my compilers course? This isn't touchy-feely; it has well-defined techniques to learn and implement. Students have to understand regular expressions and finite-state machines and context-free grammars and automata and register allocation algorithms... Do I have time to explore the students' different approaches, or even care what they are?

I agree with Hersh: If I want my students actually to learn how to write a compiler, then yes, I probably want to know how they are thinking, so that I can help them learn what they need to know. How I engage them may be different than sending them to the board to offer their home-brew approach to a problem, but engagement in the problems they face and with the techniques I'd like them to learn is essential.

This sort of teaching also places a new demand on students. They have to engage the material before they come to class. They have to read the assigned material and do their homework. Then, they have to come to class prepared to be involved, not just lean against a wall with a Big Gulp in their hands and their eyes on the clock. Fortunately, I have found that most of our students are willing to get involved in their courses and do their part. It may be a cultural shift for them, but they can make it with a little help. And that's part of the instructor's job, yes -- to help students move in the right direction?

That was one article. Later the same afternoon, I received ACM's regular mailing on news items and found a link to this article, on an NSF award received by my friend Owen Astrachan to design a new curriculum for CS based on... problems. Owen's proposal echoes Hersh's emphasis on problem-before-solution:

Instead of teaching students a lot of facts and then giving them a problem to solve, this method starts out by giving them a problem.... Then they have to go figure out what facts they need to learn to solve it.

This approach allows students to engage a real problem and learn what they need to know in a context that matters to them, to solve something. In the article, Owen even echoes the new demand made of instructors, being quiet:

With problem-based learning, the faculty person often stands back while you try to figure it out, though the instructor may give you a nudge if you're going the wrong way.

... and the new demand made of students, to actively engage the material:

And [the student] might spend a couple of weeks on a problem outside of class.... So you have to do more work as a student. It's kind of a different way of learning.

The burden on Astrachan and his colleagues on this grant is to find, refine, and present problems that engage students. There are lots of cool problems that might excite us as instructors -- from the sciences, from business, from the social sciences, from gaming and social networking and media, and so on -- but finding something that works for a large body of students over a long term is not easy. I think Owen understands this; this is something he has been thinking about for a long time. He and I have discussed it a few times over the years, and his interest in redesigning how we teach undergraduate CS is one of the reasons I asked him to lead a panel at the OOPSLA 2004 Educators' Symposium.

Frank Oppenheimer's Exploratorium

This is also a topic I've been writing about for at least that long, including entries here on how Problems Are The Thing and before that on Alan Kay's talks ... at OOPSLA 2004! I think that ultimately Kay has the right idea in invoking Frank Oppenheimer's Exploratorium as inspiration: a wide-ranging set of problems that appeal to the wide-ranging interests of our students while at the same time bringing them "face to face with the first simple idea of science: The world is not always as it seems." This is a tall challenge, one better suited to a community working together (if one by one) than to a single researcher or small group alone. The ChiliPLoP project is one that my colleagues and I have been chipping away at slowly on the fringes. I am looking forward to the pedagogical infrastructure and ideas that come from Owen's NSF project. If anyone can lay a proper foundation for problems as the centerpiece of undergrad CS, he and his crew can.

Good coincidences. Challenging coincidences.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

July 12, 2007 7:32 PM

Stay Focused

A former student recently wrote:

I am periodically reminded of a saying that is usually applied to fathers but fits teachers well -- when you are young it's amazing how little they know, but they get much smarter as I get older.

For a teacher, this sort of unsolicited comment is remarkably gratifying. It is also humbling. What I do matters. I have to stay on top of my game.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

July 11, 2007 7:02 PM

Heard at a Summit on K-12 Education

As I mentioned last month, the Board of Regents in my state has charged the public universities with a major reform effort aimed primarily at K-12 science and math education. My university, as the state's old teachers' college, is leading the effort. Today, we hosted a "summit" attended by researchers and practitioners at all three public universities, folks from other four-year colleges, K-12 science and math teachers, legislators, representatives from the state Department of Education, and many other folks. It was a big crowd.

Here are some of the things I heard in my break-out group of twelve:

  1. Education researchers talk a lot about students and teachers "talking about thinking" and "talking about learning".

  2. The teachers on the front lines face many of the same problems we face in the university, including students who would rather spend their time at their part-time jobs than doing homework in a challenging course.

  3. "Context matters."

  4. "How do we measure the quality of teaching?" Too often the answer from the Establishment is that it's a really hard problem to quantify, or that we can't quantify it at all. Unfortunately, when we can't measure our "output" we are going to have a hard time knowing when we have succeeded or failed.

  5. "You don't need to be a physicist to teach physics." No, but you need to know some physics -- more and at a deeper level than what you hope to teach. And you need to be able to think like a physicist, and be able to do a little physics when the situation calls for it.

I can see now why this problem is so hard to solve. We can't specify our target very clearly, which makes it hard to agree on what to do to solve it, or to know if we have. There are so many different stakeholders with so many different ideas at stake. It's pretty daunting. I can see why some folks want to "start over" in the form of charter schools that can implement a particular set of ideas relatively free of the constraints of the education establishment and various other institutional and personal agendas.

My initial thought is that the best way to start is to start small. Pick a small target that you can measure, try an idea, get feedback, and then improve. Each time you meet a target, grow your ambitions by adding another "requirement" to the system. Do all of this in close consultation with parents and other "customers". This sounds "agile" in the agile software sense, but in a way it's just the scientific method at work. It will be slow, but it could make progress, whereas trying to wrestle the whole beast to the ground at once seems foolhardy and a waste of time. Starting from scratch in a new school (greenfield development) also seems a lot easier than working in an existing school (legacy development).


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

July 09, 2007 7:28 PM

Preparing for Fall Compilers Course (Almost)

Summer is more than half over. I had planned by now to be deep in planning for my fall compilers course, but the other work has kept me busy. I have to admit also to suffering from a bout of intellectual hooky. Summer is a good time for a little of that.

Compilers is a great course, in so many ways. It is one of the few courses of an undergraduate's curriculum in which students live long enough with code that is big enough to come face-to-face with technical debt. Design matters, implementation matters, efficiency matters. Refactoring matters. The course brings together all of the strands of the curriculum into a real project that requires knowledge from the metal up to the abstraction of language.

In the last few weeks I've run across several comments from professional developers extolling the virtues of taking a compilers course, and often lamenting that too many schools no longer require compilers for graduation. We are one such school; compilers is a project option competing with several others. Most of the others are perceived to be easier, and they probably are. But few of the others offer anything close to the sort of capstone experience that compilers does.

In a comment on this post titled Three Things I Learned About Software in College, Robert Blum writes:

Building OSes and building compilers are the two ends of the spectrum of applied CS. Learn about both, and you'll be able to solve most problems coming your way.

I agree, but a compilers course can also illuminate theoretical CS in ways that other courses don't. Many of the neat ideas that undergrads learn in an intro theory course show up in the first half of compilers, where we examine grammars and build scanners and parsers.
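The link between those theory-course ideas and a working compiler front end is direct enough to show in a few lines. Here is a minimal recursive-descent parser and evaluator for a toy expression grammar -- my own illustrative sketch, not the course's project language -- in which each nonterminal of the grammar becomes one function:

```python
# A context-free grammar made executable. The toy grammar:
#
#   expr   -> term (('+' | '-') term)*
#   term   -> factor (('*' | '/') factor)*
#   factor -> NUMBER | '(' expr ')'
#
# Each nonterminal maps to one function; the scanner is a one-line
# regular expression, echoing the scanner/parser split taught in class.
import re

def tokenize(src):
    return re.findall(r"\d+|[()+\-*/]", src)

def parse(src):
    toks = tokenize(src)
    pos = 0

    def peek():
        return toks[pos] if pos < len(toks) else None

    def eat(expected=None):
        nonlocal pos
        t = toks[pos]
        if expected is not None and t != expected:
            raise SyntaxError(f"expected {expected}, got {t}")
        pos += 1
        return t

    def expr():
        value = term()
        while peek() in ('+', '-'):
            if eat() == '+':
                value += term()
            else:
                value -= term()
        return value

    def term():
        value = factor()
        while peek() in ('*', '/'):
            if eat() == '*':
                value *= factor()
            else:
                value //= factor()
        return value

    def factor():
        if peek() == '(':
            eat('(')
            value = expr()
            eat(')')
            return value
        return int(eat())

    result = expr()
    if pos != len(toks):
        raise SyntaxError("trailing input")
    return result

print(parse("2 + 3 * (4 - 1)"))  # 11
```

The point a student sees by writing this: the grammar is not just notation from the theory course; it is the design document for the code.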

My favorite recent piece on compilers is ultra-cool Steve Yegge's Rich Programmer Food. You have to read this one -- promise me! -- but I will tease you with Yegge's own precis:

Gentle, yet insistent executive summary: If you don't know how compilers work, then you don't know how computers work. If you're not 100% sure whether you know how compilers work, then you don't know how they work.

Yegge's article is long but well worth the read.

As for my particular course, I face many of the same issues I faced the last time I taught it: choosing a good textbook, choosing a good source language, and deciding whether to use a parser generator for the main project are three big ones. If you have any suggestions, I'd love to hear from you. I'd like to build a larger, more complete compiler for my students to have as a reading example, and writing one would be the most fun I could have getting ready for the course.

I do think that I'll pay more explicit attention in class to refactoring and other practical ideas for writing a big program this semester. The extreme-agile idea of 15 compilers in 15 days, or something similar, still holds me in fascination, but at this point I'm in love more with the idea than with the execution, because I'm not sure I'm ready to do it well. And if I can't do it well, I don't want to do it at all. This course is too valuable -- and too much fun -- to risk on an experiment in whose outcome I don't have enough confidence.

I'm also as excited about teaching the course as the last time I taught it. On a real project of this depth and breadth, students have a chance to take what they have learned to a new level:

How lasts about five years, but why is forever.

(I first saw that line in Jeff Atwood's article Why Is Forever. I'm not sure I believe that understanding why is a right-brain attribute, but I do believe in the spirit of this assertion.)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

July 07, 2007 7:10 AM

Quick Hits, Saturday Edition

Don't believe me about computational processes occurring in nature? Check out Clocking In And Out Of Gene Expression, via The Geomblog. Some genes turn other genes on and off. To mark time, they maintain a clock by adding ubiquitin molecules to a chain; when the chain reaches a length of five, the protein is destroyed. That sounds a lot like a Turing machine using a separate tape as a counter...
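The counting mechanism described in the article is simple enough to sketch as code. This toy model -- my own illustration, with invented names, not real molecular biology -- treats the ubiquitin chain as a bounded counter, much like a Turing machine ticking marks onto an auxiliary tape:

```python
# Toy model of the ubiquitin "clock": each tick adds one ubiquitin to a
# chain, and once the chain reaches length five the protein is marked
# for destruction. The class and constant names are illustrative only.

class TimedProtein:
    CHAIN_LIMIT = 5  # chain length that triggers destruction

    def __init__(self):
        self.chain = 0
        self.destroyed = False

    def tick(self):
        """Add one ubiquitin; destroy the protein at the limit."""
        if self.destroyed:
            return
        self.chain += 1
        if self.chain >= self.CHAIN_LIMIT:
            self.destroyed = True

p = TimedProtein()
for _ in range(4):
    p.tick()
print(p.destroyed)  # False: four ticks, still alive
p.tick()
print(p.destroyed)  # True: the fifth tick destroys it
```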

Becky Hirta learned something that should make all of us feel either better or worse: basic math skills are weak everywhere. We can feel better because it's not just our students, or we can feel worse because almost no one can do basic math. One need not be able to solve linear equations to learn how to write most software, but an inability to learn how to solve linear equations doesn't bode well.

Hey, I made the latest Carnival of the Agilists. The Carnival dubs itself "the bi-weekly blogroll that takes a sideways slice through the agile blogosphere". It's a nice way for me to find links to articles on agile software development that I might otherwise have missed.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

July 06, 2007 12:00 PM

Independence Day Reading

While watching a little Wimbledon on television the other day, I read a couple of items on my daunting pile of papers to read. Among the first was the on-line excerpt of Scott Rosenberg's recent book Dreaming in Code, about the struggles of the team developing the open-source "interpersonal information manager" Chandler. In the introduction, Rosenberg says about software:

Never in history have we depended so completely on a product that so few know how to make well.

In context, the emphasis is on "know how to make well". He was speaking of the software engineering problem, the knowledge of how to make software. But my thoughts turned immediately to what is perhaps a more important problem: "so few". The world depends so much on computer software, yet we can't seem to attract students to study computer science or (for those who think that is unnecessary) to want to learn how to make software. Many young people think that the idea of making software is beyond them -- too hard. But most don't think much about programming at all. Software is mundane, too ordinary. Where is the excitement?

Later I was reading Space for improvement, on "re-engaging the public with the greatest adventure of our time": space travel. Now space travel still seems pretty cool to me, one of the great scientific exercises of my lifetime, but polls show that most Americans, while liking the idea in the abstract, don't care all that much about space travel when it comes down to small matters of paying the bills.

The focus of the article is on the shortcomings of how NASA and others communicate the value and excitement of space travel to the public. It identifies three problems. The first is an "unrelenting positiveness" in PR, which may keep risk-averse legislators happy but gives the public the impression that space travel is routine. The second is a lack of essential information from Mission Control during launches and flights, information that would allow the PR folks to tell a more grounded story. But author Bob Mahoney thinks that the third and most important obstacle in the past has been a presumption that has run through NASA PR for many years:

The presumption? That the public can't understand or won't appreciate the deeper technical issues of spaceflight. By assuming a disinterested and unintelligent public, PAO [NASA's Public Affairs Office] and the mainstream media have missed out completely on letting the public share in the true drama inherent in space exploration.

If you presume a disinterested and unintelligent public, then you won't -- can't -- tell an authentic story. And in the case of space travel, the authentic story, replete with scientific details and human drama, might well snag the attention of the voting public.

I can't claim that software development is "the greatest adventure of our time", but I think we in computing can learn a couple of things from reading this article. First, tell people the straight story. Trust them to understand their world and to care about things that matter. If the public needs to know more math and science to understand, teach them more. Second, I think that we should tell this story not just to adults, but to our children. The only way we can expect students to want to learn how to make software or to learn computer science is if they understand why these things matter and if they believe that they can contribute. Children are in some ways a tougher audience. They still have big imaginations and so are looking for dreams that can match their imagination, and they are pretty savvy when it comes to recognizing counterfeit stories.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

July 04, 2007 9:19 PM

Recursion, Natural Language, and Culture

M.C. Escher, 'Hands'

It's not often that one can be reading a popular magazine, even one aimed at an educated audience, and run across a serious discussion of recursion. Thanks to my friend Joe Bergin for pointing me to The Interpreter, a recent article in The New Yorker by Reporter at Large John Colapinto. The article tells the story of the Pirahã, a native tribe in Brazil with a most peculiar culture and a correspondingly unusual language. You see, while we often observe recursion in nature, one of the places we expect to see it is in natural language -- in the embedding of sentence-like structures within other sentences. But the Pirahã don't use recursion in their language, because their world view makes abstract structure meaningless.

Though recursion plays a critical role in Colapinto's article, it is not really about recursion; it is about a possible crack in Chomsky's universal grammar hypothesis about language, and some of the personalities and technical issues involved. Dan Everett is a linguist who has been working with the Pirahã since the 1970s. He wrote his doctoral dissertation on how the Pirahã language fit into the Chomskyan framework, but upon further study and a new insight now "believes that Pirahã undermines Noam Chomsky's idea of a universal grammar." As you might imagine, Chomsky and his disciples disagree.

What little I learned about the Pirahã language makes me wonder at what it must be like to learn it -- or try to. On the one hand, it's a small language, with only eight consonants and three vowels. But that's just the beginning of its simplicity:

The Pirahã, Everett wrote, have no numbers, no fixed color terms, no perfect tense, no deep memory, no tradition of art or drawing, and no words for 'all', 'each', 'every', 'most', or 'few' -- terms of quantification believed by some linguists to be among the common building blocks of human cognition. Everett's most explosive claim, however, was that Pirahã displays no evidence of recursion, a linguistic operation that consists of inserting one phrase inside another of the same type...

This language makes Scheme look like Ada! Of course, Scheme is built on recursion, and Everett's claim that the Pirahã don't use it -- can't, culturally -- is what rankles many linguists the most. Chomsky has built the most widely accepted model of language understanding on the premise that "To come to know a human language would be an extraordinary intellectual achievement for a creature not specifically designed to accomplish this task." And at the center of this model is "the capacity to generate unlimited meaning by placing one thought inside another", what Chomsky calls "the infinite use of finite means", after the nineteenth-century German linguist Wilhelm von Humboldt.

According to Everett, however, the Pirahã do not use recursion to insert phrases one inside another. Instead, they state thoughts in discrete units. When I asked Everett if the Pirahã could say, in their language, "I saw the dog that was down by the river get bitten by a snake", he said, "No. They would have to say, 'I saw the dog. The dog was at the beach. A snake bit the dog.'" Everett explained that because the Pirahã accept as real only that which they observe, their speech consists only of direct assertions ("The dog was at the beach."), and he maintains that embedded clauses ("that was down by the river") are not assertions but supporting, quantifying, or qualifying information -- in other words, abstractions.

The notion of recursion as abstraction is natural to us programmers, because inductive definitions are by their nature abstractions over the sets they describe. But I had never before thought of recursion as a form of qualification. When presented in the form of an English sentence such as "I saw the dog that was down by the river get bitten by a snake", it makes perfect sense. I'll need to think about whether it makes sense in a useful way for my programs.
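A programmer can restate Everett's contrast directly: a sentence with an embedded relative clause is a nested, recursive structure, while the Pirahã rendering is a flat sequence of discrete assertions. This toy sketch of mine -- an illustration, not a linguistic model -- "un-embeds" a nested clause tree into separate sentences:

```python
# A clause is a dict with an "assertion" and, optionally, "embedded"
# sub-clauses of the same shape. flatten() walks the tree recursively
# and returns the discrete, Pirahã-style assertions in order.

def flatten(clause):
    """Turn a clause with embedded sub-clauses into a flat list of assertions."""
    sentences = [clause["assertion"]]
    for sub in clause.get("embedded", []):
        sentences.extend(flatten(sub))  # recursion removes the nesting
    return sentences

nested = {
    "assertion": "I saw the dog get bitten by a snake.",
    "embedded": [{"assertion": "The dog was down by the river."}],
}

for s in flatten(nested):
    print(s)
# I saw the dog get bitten by a snake.
# The dog was down by the river.
```

Note that the recursion lives in the program that dismantles the embedding; the output itself contains none, which is roughly the relationship Everett describes between Pirahã cognition and Pirahã syntax.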

Here is one more extended passage from the article, which discusses an idea from Herb Simon that appears in the latest edition of the Simon book I mentioned in my last entry:

In his article, Everett argued that recursion is primarily a cognitive, not a linguistic, trait. He cited an influential 1962 article, "The Architecture of Complexity," by Herbert Simon, a Nobel Prize-winning economist, cognitive psychologist, and computer scientist, who asserted that embedding entities within like entities (in a recursive tree structure of the type central to Chomskyan linguistics) is simply how people naturally organize information. ... Simon argues that this is essential to the way humans organize information and is found in all human intelligence systems. "If Simon is correct, there doesn't need to be any specific linguistic principle for this because it's just general cognition." Or, as Everett sometimes likes to put it: "The ability to put thoughts inside other thoughts is just the way humans are, because we're smarter than other species." Everett says that the Pirahã have this cognitive trait but that it is absent from their syntax because of cultural constraints.

This seems to be a crux in Everett's disagreement with the Chomsky school: Is it sufficient -- even possible -- for the Pirahã to have recursion as a cognitive trait but not as a linguistic trait? For many armchair linguists, the idea that language and thought go hand in hand is almost an axiom. I can certainly think recursively even when my programming language doesn't let me speak recursively. Maybe the Pirahã have an ability to organize their understanding of the world using nested structures (as Simon says they must) without having the syntactic tools for conceiving such structures linguistically (as Everett says they cannot).

I found this to be a neat article for more reasons than just its references to recursion. Here are few other ideas that occurred as I read.

Science and Faith Experience

At UNICAMP (State Univ. of Campinas in Brazil), in the fall of 1978, Everett discovered Chomsky's theories. "For me, it was another conversion experience," he said.

Everett's first conversion experience happened when he became a Christian in the late 1960s, after meeting his wife-to-be. It was this first conversion that led him to learn linguistics in the first place and work with the Pirahã under the auspices of the Summer Institute of Linguistics, an evangelical organization. He eventually fell away from his faith but remained a linguist.

Some scientists might balk at Everett likening his discovery of Chomsky to a religious conversion, but I think he is right on the mark. I know what it's like as a scholar to come upon a new model for viewing the world and feeling as if I am seeing a new world entirely. In grad school, for me it was the generic task theory of Chandrasekaran, which changed how I viewed knowledge systems and foreshadowed my later move into the area of software patterns.

It was interesting to read, even briefly, the perspective of someone who had undergone both a religious conversion and a scientific conversion -- and fallen out of both, as his personal experiences created doubts for which his faiths had no answers.

Science as Objective

Obvious, right? No. Everett has reinterpreted data from his doctoral dissertation now that he has shaken the hold of his Chomskyan conversion. Defenders of Chomsky's theory say that Everett's current conclusions are in error, but he now says that

Chomsky's theory necessarily colored his [original] data-gathering and analysis. "'Descriptive work' apart from theory does not exist. We ask the questions that our theories tell us to ask."

Yes. When you want to build generic task models of intelligent behavior, you see the outlines of generic tasks wherever you look. You can tell yourself to remain skeptical, and to use an objective eye, but the mind has its own eye.

Science is a descriptive exercise, and how we think shapes what we see and how we describe. Do you see objects or higher-order procedures when you look at a problem to describe or when you conceive a solution? Our brains are remarkable pattern machines and can fall into the spell of a pattern easily. This is true even in a benign or helpful sense, such as what I experienced after reading an article by Bruce Schneier and seeing his ideas in so many places for a week or so. My first post in that thread is here, and the theme spread throughout this blog for at least two weeks thereafter.

Intellectually Intimidating Characters

Everett occupied an office next to Chomsky's; he found the famed professor brilliant but withering. "Whenever you try out a theory on someone, there's always some question that you hope they won't ask," Everett said. "That was always the first thing Chomsky would ask."

That is not a fun feeling, and not the best way for a great mind to help other minds grow -- unless used sparingly and skillfully. I've been lucky that most of the intensely bright people I've met have had more respect and politeness -- and skill -- to help me come along on the journey, rather than to torch me with their brilliance at every opportunity.

Culture Driving Language

One of the key lessons we see from the Pirahã is that culture is a powerful force, especially a culture so long isolated from the world and now so closely held. But you can see this phenomenon even in relatively short-term educational and professional habits such as programming styles. I see it when I teach OO to imperative programmers, and when I teach functional programming to imperative OO programmers. (In a functional programming course, the procedural and OO programmers realize just how similar their imperative roots are!) Their culture has trained them not to use the muscles in their minds that rely on the new concepts. But those muscles are there; we just need to exercise them, and build them up so they are as strong as the well-practiced muscles.

What Is Really Universal?

Hollywood blockbusters, apparently:

That evening, Everett invited the Pirahã to come to his home to watch a movie: Peter Jackson's remake of "King Kong". (Everett had discovered that the tribe loves movies that feature animals.) After nightfall, to the grinding sound of the generator, a crowd of thirty or so Pirahã assembled on benches and on the wooden floor of Everett's [house]. Everett had made popcorn, which he distributed in a large bowl. Then he started the movie, clicking ahead to the scene in which Naomi Watts, reprising Fay Wray's role, is offered as a sacrifice by the tribal people of an unspecified South Seas island. The Pirahã shouted with delight, fear, laughter, and surprise -- and when Kong himself arrived, smashing through the palm trees, pandemonium ensued. Small children, who had been sitting close to the screen, jumped up and scurried into their mothers' laps; the adults laughed and yelled at the screen.

The Pirahã enjoy movies even when the technological setting is outside their direct experience -- and for them, what is outside their direct experience seems outside their imagination. The story reaches home. From their comments, the Pirahã seemed to understand King Kong in much the way we did, and they picked up on cultural clues that did fit into their experience. A good story can do that.

Eugene sez: The Interpreter is worth a read.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

June 29, 2007 11:39 PM

Research, Prestige, and an Undergraduate Education

Philip Greenspun recently posted a provocative blog entry called Why do high school kids keep signing up to be undergrads at research universities? If you've never read any of Philip's stuff, this might seem like an odd and perhaps even naive piece. His claim is pretty straightforward: "Research universities do not bother to disguise the fact that promotion, status, salary, and tenure for faculty are all based on research accomplishments," so why don't our brightest, most ambitious high school students figure out that these institutions aren't really about teaching undergraduates? This claim might seem odd considering that Philip himself went to MIT and now teaches as an adjunct prof there. But he has an established track record of writing about how schools like Harvard, MIT, the Ivies, and their ilk could do a better job of educating undergrads, and at a lower cost.

My thoughts on this issue are mixed, though at a certain level I agree with his premise. More on how I agree below.

As an undergraduate, I went to a so-called regional university, one that grants Ph.D.s in many fields but which is not typical of the big research schools Philip considers. I chose the school for its relatively strong architecture school, which ranked in the top 15 or 20 programs nationally despite being at a school that overall catered largely to a regional student population. There I was part of a good honors college and was able to work closely with published scholars in a way that seems unlikely at a Research U. However, I eventually changed my major and studied computer science and accounting. The accounting program had a good reputation, but the computer science department was average at best. It had a standard curriculum, and I was a good enough student and had enough good profs that I was able to receive a decent education and to have my mind opened to the excitement of doing computer science as an academic career. But when I arrived at grad school, I was probably behind most of my peers in terms of academic preparation.

I went to a research school for my graduate study, though not one in the top tier of CS schools. It was at that time, I think, making an effort to broaden, deepen, and strengthen its CS program (something I think it has done). The department gave me great financial support and opportunities to teach several courses and do research with a couple of different groups. The undergrad students I taught and TAed sometimes commented that they felt like they were getting a better deal out of my courses than they got out of other courses at the university, but I was often surprised by how committed some of the very best researchers in the department were to their undergrad courses. Some of the more ambitious undergrads worked in labs with the grad students and got to know the research profs pretty well. At least one of those students is now a tenured prof in a strong CS program down south.

Now I teach at a so-called comprehensive university, one of those medium-sized state schools that offers neither the prestige of the big research school nor the prestige of an elite liberal arts school. We are in a no-man's land in other ways as well -- our faculty are expected to do research, but our teaching expectations and resources place an upper bound on what most faculty can do; our admissions standards grant access to a wider variety of students, but such folks tend to require a more active, more personal teaching effort.

What Greenspun says holds the essence of truth in a couple of ways. The first is that a lot of our best students think that they can only get a good education at one of the big research schools. That is almost certainly not true. The variation in quality among the programs at the less elite schools is greater, which requires students and their parents to be perhaps more careful in selecting programs. It also requires the schools themselves to do a better job communicating where their quality programs lie, because otherwise people won't know.

But a university such as mine can assemble a faculty that is current in the discipline, does research that contributes value (even basic knowledge), and cares enough about its mission to teach to devote serious energy to the classroom. I don't think that a comprehensive's teaching mission in any way speaks ill of a research school faculty's desire to teach well, but, as Greenspun points out, those faculty face strong institutional pressure to excel in other areas. The comprehensive school's lower admission standards mean that weaker students have a chance that they couldn't get elsewhere. Its faculty's orientation means that stronger students have a chance to excel in collaboration with faculty who combine interest and perhaps talent in both teaching and research.

If the MITs and Harvards don't excel in teaching undergrads, what value do they offer to bright, ambitious high school students? Commenters on the article answered in a way that sometimes struck me as cynical or mercenary, but I finally realized that perhaps they were simply being practical. Going to Research U. or Ivy C. buys you connections. For example:

Seems pretty plain that he's not looking to buy the educational experience, he's looking to buy the peers and the prestige of the university.

And in my experience of what school is good for, he's making the right decision.

You wanna learn? Set up a book budget and talk your way into or build your own facilities to play with the subject you're interested in. Lectures are a lousy way to learn anyway.

But you don't go to college to learn, you go to college to make the friends who are going to be on a similar arc as you go through your own career, and to build your reputation by association....

And:

You will meet and make friends with rich kids with good manners who will provide critical angel funding and business connections for your startups.

Who cares if the undergrad instruction is subpar? Students admitted to these schools are strong academically and likely capable of fending for themselves when it comes to content. What these students really need is a frat brother who will soon be an investment banker in a major NYC brokerage.

It's really unfair to focus on this side of the connection connection. As many commenters also pointed out, these schools attract lots of smart people, from undergrads to grad students to research staff to faculty. And the assiduous undergrad gets to hang around with them, learning from them all. Paul Graham would say that these folks make a great pool of candidates to be partners in the start-up that will make you wealthy. And if a strong undergrad can fend for him- or herself, why not do it at Harvard or MIT, in a more intellectual climate? Good points.

But Greenspun offers one potential obstacle, one that seems to grow each year: price. Is the education an undergrad receives at an Ivy League or research school, intellectual and business connections included, really worth $200,000? In one of his own comments, he writes:

Economists who've studied the question of whether or not an Ivy League education is worth it generally have concluded that students who were accepted to Ivy League schools and chose not to attend (saving money by going to a state university, for example) ended up with the same lifetime income. Being the kind of person who gets admitted to Harvard has a lot of economic value. Attending Harvard turned out not to have any economic value.

I'm guessing, though, that most of these students went to a state research university, not to a comprehensive. I'd be curious to see how the few students who did opt for the less prestigious but more teaching-oriented school fared. I'm guessing that most still managed to excel in their careers and amass comparable wealth -- at least wealth enough to live comfortably.

I'm not sure Greenspun thinks that everyone should agree with his answer so much as that they should at least be asking themselves the question, and not just assuming that prestige trumps educational experience.

This whole discussion leads me to want to borrow a phrase from Richard Gabriel that he applies to talent and performance as a writer: the perceived quality of your undergraduate institution does not determine how good you can get, only how fast you can get good.

I read Greenspun's article just as I was finishing reading the book Teaching at the People's University, by Bruce Henderson. This book describes the history and culture of the state comprehensive universities, paying special attention to the competing forces that on the one hand push their faculty to teach and serve an academically diverse student body and on the other expect research and the other trappings of the more prestigious research schools. Having taught at a comprehensive for fifteen years now, I can't say that the book has taught me much I didn't already know about the conflicting culture of these schools, but it paints a reasonably accurate picture of what the culture is like. It can be a difficult environment in which to balance the desire to pursue basic research that has a significant effect in the world and the desire to teach a broad variety of students well.

There is no doubt that many of the students who enroll in this sort of school are served well, because otherwise they would have little opportunity to receive a solid university education; the major research schools and elite liberal arts schools wouldn't admit them. That's a noble motivation, and it provides a valuable service to the state, but what about the better students who choose a comprehensive? And what of the aspirations of faculty who are trained in a research-school environment to value their careers by the intellectual contribution they make to their discipline? Henderson does a nice job laying these issues out for people to consider explicitly, rather than backing into them when their expectations are unmet. This is not unlike what Greenspun does in his blog entry, putting on the line an important question that too often goes unasked until it is too late for the answer to matter.

All this said, I'm not sure that Greenspun was thinking of the comprehensives at all when he wrote his article. The only school he mentions as an alternative to MIT, Harvard, and the other Ivies is the Olin College of Engineering, which is a much different sort of institution than a mid-level state school. I wonder whether he would suggest that his young relative attend one of the many teacher-oriented schools in his home state of Massachusetts?

After having experienced two or three different kinds of university, would I choose a different path for myself in retrospect? This sort of guessing game is always difficult to play, because I have experienced them all under different conditions, and they have all shaped me in different ways. I sometimes think of the undergraduates who worked in our research lab while I was in grad school; they certainly had broader and deeper intellectual experiences than I had as an undergraduate. But as a first-generation university attendee I grew quite a bit as an undergraduate and had a lot of fun doing it. Had I been destined for a high-flying academic research career, I think I would have had one. Some of my undergrad friends have done well on that path. My ambition, goals, and inclinations are well suited for where I've landed; that's the best explanation for why I've landed here. Would my effect on the world have been greater had I started at a Harvard? That's hard to say, but I see lots of opportunities to contribute to the world from this perch. Would I be happier, or a better citizen, or a better father and husband? Unlikely.

I wish Greenspun's young relative luck in his academic career. And I hope that I can prepare my daughters to choose paths that allow them to grow and learn and contribute.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Teaching and Learning

June 26, 2007 7:46 PM

Digging a Hole Just to Climb Out

I remember back in my early years teaching (*) I had a student who came in to ask a question about a particular error message she had received from our Pascal compiler. She had some idea of what caused it, and she wanted to know what it meant. It was a pretty advanced error, one we hadn't reached the point of making in class, so I asked her how she had managed to bump into it. Easy, she said; she was intentionally creating errors in her program so that she could figure out what sort of error messages would result.

If you teach programming for very long, you are bound to encounter such a student. She was doing fine in class but was afraid that she was having life too easy and so decided to use her time productively -- creating errors that she could learn to debug and maybe even anticipate.

I've written many times before about practice, including practice for practice's sake. That entry was about the idea of creating "valueless software", which one writes as a learning exercise, not for anyone's consumption. But my forward-thinking student was working more in the vein of fixin' what's broke, in which one practices in an area of known weakness, with the express purpose of making that part of one's game strong. My student didn't know many of the compiler error messages that she was going to face in the coming weeks, so she set out to learn them.

I think that she was actually practicing a simple form of an even more specific learning pattern: consciously seeking out, even creating, challenges to conquer. Make a Mess, Clean it Up! is a neat story about an example of this pattern in the history of the Macintosh team. There, Donn Denman talks about Burrell Smith's surprising way of getting better at Defender, a video game played by the Mac programmers as a way to relax or pump themselves up:

Instead of avoiding the tough situations, he'd immediately create them, and immediately start learning how to handle the worst situation imaginable. Pretty soon he would routinely handle anything the machine could throw at him.

He'd lose a few games along the way, but soon he was strong in areas of the game that his competitors may not have even encountered before -- because they had spent time avoiding difficult times! Denman saw in this game-playing behavior something he recognized in Smith's life as a programmer: he

... likes challenges so much that he actually seeks them out and consciously creates them. In the long run, this approach makes sense. He seems to aggressively set up challenging situations throughout his life. Then, when life throws him a curve ball, he'll swing hard, and knock it out of the park.

The article uses two metaphors for this pattern: make a mess so that you can clean it up, and choose to face tough situations so that you are ready for the curve balls life throws you. (I guess those tough situations must be akin to the nasty breaking stuff of a major-league pitcher.) My title for this entry offers a third metaphor: digging yourself into a hole so that you can learn how to get out of a hole. As much of a baseball fan as I was growing up, the metaphor of digging oneself into a hole was the more common one for me. Whatever its name, the idea is the same.

I find that I'm more likely to apply this pattern in some parts of my life than others. In programming, I can recover from bad situations by re-compiling, re-booting, or at worst reinstalling. In running, I can lose all the races I want, or come up short in a training session all I want -- so long as I don't put my body at undue risk. The rest of life, the parts that deal with other people, require some care. It's hard to create an environment in which I can screw up my interpersonal relationships just so that I can learn how to get out of the mess. There's a different metaphor for such behavior -- burning bridges -- that connotes its irreversibility. Besides, it's not right to treat people as props in my game. I suppose that this is a place in which role play can help, though artificial situations can go only so far.

Where games, machines, and tools are concerned, though, digging a deep hole just for the challenge of getting out of it can be a powerful way to learn. Pretty soon, you can find yourself as master of the machine.

----

(*) Yikes, how old does that make me sound?


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

June 22, 2007 4:32 PM

XUnit Test Patterns and the Duplex Book

Several folks have already recommended Gerard Meszaros's new book, xUnit Test Patterns. I was fortunate to have a chance to review early drafts of Gerard's pattern language on the web and then at PLoP 2004, where Gerard and I were in a writers' workshop together. By that time I felt I knew a little about writing tests and using JUnit, but reading Gerard's papers that fall taught me just how much more there was for me to learn. I learned a lot that month and can only hope that my participation in the workshop helped Gerard a small fraction as much as his book has helped me. I strongly echo Michael Feathers's recommendation: "XUnit Patterns is a great all around reference." (The same can be said for Michael's book, though my involvement reviewing early versions of it was not nearly as deep.)

As I grow older, I have a growing preference for short books. Maybe I am getting lazy, or maybe I've come to realize that most of the reasons for which I read don't require 400 or 600 pages. Gerard's book weighs in at a hefty 883 pages -- what gives? Well, as Martin Fowler writes in his post Duplex Book, XUnit Test Patterns is really more than one book. Martin says two, but I think of it as really three:

  • a 181-page narrative that teaches us about automated tests, how to write them, and how to refactor them,
  • a 91-page catalog of smells you will find in test code, and
  • an approximately 500-page catalog of the patterns of test automation. These patterns reference one another in a tight network, and so might be considered a pattern language.

So in a book like this, I have the best of two worlds: a relatively short, concise, well-written story that shows me the landscape of automated unit testing and gets me started writing tests, plus a complete reference book to which I can turn as I need to learn a particular technique in greater detail. I can read the story straight through and then jump into and out of the catalogs as needed. The only downside is the actual weight of the book... It's no pocket reference! But that's a price I am happy to pay.

One of my longstanding goals has been to write an introductory programming textbook, say for CS1, in the duplex style. I'm thinking something like the dual The Timeless Way of Building/A Pattern Language, only shorter and less mystical. I had always hoped to be the first to do this, to demonstrate what I think is a better future for instructional books. But at this increasingly late date, I'd be happy if anyone could succeed with the idea.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

June 20, 2007 1:20 PM

More Dissatisfaction with Math and Science Education

Another coincidence in time... The day after I post a note on Alan Kay's thoughts on teaching math and science to kids, I run across (via physics blogger and fellow basketball crazy Chad Orzel) Sean Carroll's lament about a particularly striking example of what Kay wants to avoid.

Carroll's article points one step further to his source, Eli Lansey's The sad state of science education, which describes a physics club's visit to a local elementary school to do cool demos. The fifth graders loved the demos and were curious and engaged; the sixth graders were uninterested and going through the motions of school. From this one data point, Carroll and Lansey hypothesize that there might be a connection between this bit flip and what passed for science instruction at the school. Be sure to visit Lansey's article if only to see the pictures of the posters these kids made showing their "scientific procedure" on a particular project. It's really sad, and it goes on in schools everywhere. I've seen similar examples in our local schools, and I've also noticed this odd change in stance toward science -- and loss in curiosity -- that seems to happen to students around fifth or sixth grade. Especially among the girls in my daughters' classes. (My older daughter seemed to go through a similar transition about that time but also seems to have rediscovered her interest in the last year as an eighth grader. My hope abounds...)

Let's hope that the students' loss of interest isn't the result of some unavoidable developmental process and does follow primarily from non-science or anti-science educational practices. If it's the latter, then the sort of things that Alan Kay's group are doing can help.

I haven't written about it here yet, but Iowa's public universities have been charged by the state Board of Regents with making a fundamental change in how we teach science and math in the K-12 school system. My university, which is the home of the state's primary education college, is leading the charge, in collaboration with our bigger R-1 sisters. I'll write more later as the project develops, but for now I can point you to a web page that outlines the initiative. Education reform is often sought, often started, and rarely consummated to anyone's satisfaction. We hope that this can be different. I'd feel a lot more confident if these folks would take work like Kay's as their starting point. I fear that too much business-as-usual will doom this exercise.

As I type this, I realize that I will have to get more involved if I want what computer scientists are doing to have any chance of being in the conversation. More to do, but a good use of time and energy.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

June 19, 2007 3:46 PM

Teaching Science and Math to Young Children

After commenting on Alan Kay's thesis, I decided to read a more recent paper by Alan that was already in my stack, Thoughts About Teaching Science and Mathematics To Young Children. This paper is pretty informal, written in a conversational voice and marked by occasional typos. In some ways, it felt like a long blog entry, in which Kay could speak to a larger audience about some of the ideas that motivate his current work at the Viewpoints Research Institute. It's short -- barely more than four pages -- so you should read it yourself, but I'll share a few thoughts that came to mind as I read this morning in between bouts of advising incoming CS freshmen.

Kay describes one of the key challenges to teaching children to become scientists: we must help students to distinguish between empiricism and modeling on one hand and belief-based acceptance of dogma on the other. This is difficult for at least three reasons:

  • Our schools don't usually do this well right now, even when they use "inquiry"-based methods.

  • Our culture is "an admixture of many different approaches to the world and how it works and can be manipulated", with scientific thinking in the background -- even for most of our teachers.

  • Children think differently from adults, being capable of understanding rather complex concepts when they are encountered in a way that matches their world model.

The last of these is a problem because most of us don't understand very well how children think, and most of us are prone to organize instruction in a way that conforms with how we think. As a parent who has watched one daughter pass through middle school and who has another just entering, I have seen children grok some ideas much better than older students when the children have an opportunity to engage the concepts in a fortuitous way. I wish that I had gleaned from my experience some ideas that would enable me to create just the right opportunities for children to learn, but I'm still in the hit-or-miss phase.

This brings out a second-order effect of understanding how children think, which Kay points out: "the younger the children, the more adept need to be their mentors (and the opposite is more often the case)". To help someone learn to think and act like a scientist, it is at least valuable and more likely essential for the teacher (to be able) to think and act like a scientist. Sadly, this is all too rare among elementary-school and even middle-school teachers.

I also see this issue operating at the level of university CS education. Being a good CS1 teacher requires both knowing a lot about how students' minds work and being an active computer scientist (or software developer). Whatever drawbacks you may find in a university system that emphasizes research even for teaching faculty, I think that this phenomenon speaks to the value of the teacher-scholar. And by "scholar", I mean someone who is actively engaged in doing the discipline, not the fluffy smokescreen that the term sometimes signifies for faculty who have decided to "focus on their teaching".

For Kay, it is essential that children encounter "real science". He uses the phrase "above the threshold" to emphasize that what students do must be authentic, and not circumscribed in a way that cripples asking questions and building testable models. At the end of this paper, he singles out for criticism Interactive Physics and SimCity:

Both of these packages have won many "educational awards" from the pop culture, but in many ways they are anti-real-education because they miss what modern knowledge and thinking and epistemology are all about. This is why being "above threshold" and really understanding what this means is the deep key to making modern curricula and computer environments that will really help children lift themselves.

I found particularly useful Kay's summary of Papert's seminal contribution to this enterprise and of his own contribution. Papert combined an understanding of science and math "with important insights of Piaget to realize that children could learn certain kinds of powerful math quite readily, whereas other forms of mathematics would be quite difficult." In particular, Papert showed that children could understand in a powerful way the differential geometry of vectors and that the computer could play an essential role in abetting this understanding by doing the integral calculus that is beyond them -- and which performance is not necessary for the first-order understanding of the science. Kay claims himself to have made only two small contributions:

  • that multiple, independent, programmable objects can serve as a suitable foundation for children to build scientific models, and
  • that the modeling environment and programming language children use are a user interface that must be designed carefully in order to implement Papert's insight.

What must the design of these tools be like? It must hide gratuitous complexity while exposing essential complexity, doing "the best job possible to make all difficulties be important ones whose overcoming is the whole point of the educational process". Learning involves overcoming difficulties, but we want learners to overcome difficulties that matter, not defects in the tools or pedagogy that we design for them. This is a common theme in the never-ending discussion of which language to use to teach CS majors to write programs -- if, say, C introduces too many unnecessary or inconsistent difficulties, should we use it to teach people to program? Certainly not children, would say Kay, and he says the same thing about most of the languages we use in our universities. Unfortunately, the set of languages that are usually part of the CS1 discussion don't really differ in ways that matter... we are discussing something that matters a lot but not in a way that matters at all.

Getting the environment and language right matters, because students who encounter unnecessary difficulties will usually blame themselves for their failure, and even when they don't, they are turned off to the discipline. Kay says it this way:

In programming language design in a UI, especially for beginners, this is especially crucial.... Many users will interpret [failure] as "I am stupid and can't do this" rather than the more correct "The UI and language designers are stupid and they can't do this".

This echoes a piece of advice by Paul Graham from an entirely different context, described here recently: "So when you get rejected by investors, don't think 'we suck,' but instead ask 'do we suck?' Rejection is a question, not an answer." Questions, not answers.

Kay spends some time talking about how language design can provide the right sort of scaffolding for learning. As students learn, we need to be able to open up the black boxes that are primitive processes and primitive language constructs in their learning, to expose a new level of learning that is continuous with the previous one. As Kay once wrote elsewhere, one of the beautiful things about how children learn natural language is that the language learned by two-year-olds and elementary school students is fundamentally the same language used by our great authors. The language we use to teach children science and math, and the language they use to build their models, should have the same feature.

But designing these languages is a challenge, because we have to strike a balance between matching how learners think and providing avenues to greater expressiveness:

Finding the balance between these is critical, because it governs how much brain is left to the learner to think about content rather than form. And for most learners, it is the initial experiences that make the difference for whether they want to dive in or try to avoid future encounters.

Kay is writing about children, but he could just as well be describing the problem we face at the university level.

Of course, we may well have been handicapped by an education system that has already turned most students away from the sciences by teaching math and science as rules and routine and dogma not to be questioned. That is ultimately what drives Kay and his team to discover something better.

If you enjoy this paper -- and there is more there than I've discussed here, including a neat paragraph on how children understand variables and parameters -- check out some more of his team's recent work on VPRI's Writings page.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

June 16, 2007 3:34 PM

Two Things, Computing and Otherwise

My recent article on Alan Kay's thesis incidentally intersected with one of those blog themes (er, memes) that make the rounds. Kay brought out two essential concepts of computing: syntax and abstraction. Abstraction and the distinction between syntax and semantics are certainly two of the most important concepts in computing.

Charles Miller takes a shot at the identifying The Two Things about Computer Programming:

  1. Every problem can be solved by breaking it up into a series of smaller problems.
  2. The computer will always do exactly what you tell it to.

That first one, decomposition, is closely related to abstraction.
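As a small illustration of Thing #1 (mine, not from Miller's post), here is a word-frequency counter in Python, decomposed into three smaller problems -- normalizing the text, splitting it into words, and tallying -- each solved by its own little function:

```python
def normalize(text):
    """Reduce text to lowercase letters and spaces."""
    return "".join(c.lower() if c.isalpha() else " " for c in text)

def tokenize(text):
    """Split normalized text into words."""
    return normalize(text).split()

def tally(words):
    """Count occurrences of each word."""
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts

def word_frequencies(text):
    # The original problem is now just a composition of the smaller solutions.
    return tally(tokenize(text))

print(word_frequencies("The cat and the hat"))
# → {'the': 2, 'cat': 1, 'and': 1, 'hat': 1}
```

Each piece is small enough to understand and test on its own, which is exactly the payoff the aphorism promises. (And Thing #2 lurks here too: `normalize` discards apostrophes along with all other punctuation, so "don't" becomes two words -- the computer does exactly what we told it, not what we meant.)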

When I followed the link to the source of the The Two Things phenomenon, I found that my favorites were not about computers or science but from the humanities, history to be precise. These are attributed to Jonathan Dresner. Here are Dresner's The Two Things about History:

  1. Everything has antecedents.
  2. Sources lie, but they're all we have.

Excellent! Of course, these apply to the empirical side of science, too, and even to the empirical side of understanding large software systems. Consider #1. That Big Ball of Mud we are stuck with has antecedents, and understanding the forces that lead to such systems is important both if we want to understand the architectures of real systems and if we seek a better way to design. All patterns we notice have their antecedents, and we need to understand them. As for #2, if we changed 'sources' to 'source', most programmers would nod knowingly. Source code often lies -- hides its true intentions, masks the program's larger structure, misleads us with unnecessary complexity of embellishment. Even when we do our best to make it speak truth, code can sometimes lie.

As a CS instructor, I also liked his The Two Things about Teaching History:

  1. A good story is all they'll remember, not the half hour of analysis on either side of it.
  2. They think it's about answers, but it's really about questions.

This pair really nails what it's like to teach in any academic discipline. I've already written about the first in All About Stories. As to the second, helping students make the transition from answers to questions -- not turning away from seeking answers, but turning one's focus to asking questions -- is one of the goals of education. By the time students reach the university these days, the challenge seems to have grown, because they have grown up in a system that focuses on answers, implicitly even when not explicitly.

I'm not sure any of the entries on computing at the The Two Things site nail our discipline as well as the two things about history above. It seems like a fun little exercise to keep thinking on what I'd say if asked the question...


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

June 12, 2007 1:38 PM

Trying to Learn from All Critics

Without having comments enabled on my blog, I miss out on most of the feedback that readers might like to give. It seems like a bigger deal to send an e-mail message with comments. Fortunately for me, a few readers go out of their way to send me comments. Unfortunately for the rest of my readers, those comments don't make it back into the blog the way on-line comments do, and so we all miss out on the sort of conversation that a blog can generate. It's time to upgrade my blogging software, I think...

Alistair Cockburn recently sent a comment on my entry But Raise Your Hand First that I must share (with Alistair's permission, of course):

Contrary to Weinberg, I use the exact opposite evaluation of a critic's comments: I assume that anybody, however naive and unschooled, has a valid opinion. No matter what they say, how outrageous, how seemingly ill-founded, someone thought it true, and therefore it is my job to examine it from every presupposition, to discover how to improve the <whatever it is>. I couldn't imagine reducing valid criticism to only those who have what I choose to call "credentials". Just among other things, the <whatever it is> improves a lot faster using my test for validity.

This raises an important point. I suspect that Weinberg developed his advice while thinking about one's inner critics, that four-year-old inside our heads. When he expressed it as applying to outer critics, he may well still have been in the mode of protecting the writer from prior censorship. But that's not what he said.

I agree with Alistair's idea that we should be open to learning from everyone, which was part of the reason I suggested that students not use this as an opportunity to dismiss critique from professors. When students are receiving more criticism than they are used to, it's too easy to fall into the trap of blaming the messenger rather than considering how to improve. I think that most of us, in most situations, are much better served by adopting the stance, "What can I learn from this?" Alistair said it better.

But in the border cases I think that Alistair's position places a heavy and probably unreasonable burden on the writer: "... my job to examine it from every presupposition, to discover how to improve the <whatever it is>." That is a big order. Some criticism is ill-founded, or given with ill will. When it is, the writer is better off to turn her attention to more constructive pursuits. The goal is to make the work better and to become a better writer. Critics who don't start in good faith or who lie too far from the target audience in level of understanding may not be able to help much.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

June 07, 2007 10:14 AM

The Testing Effect

In the current issue of the Chronicle of Higher Education, the article You Will be Tested on This (behind pay wall) tells us something researchers have known for decades but which too few teachers act on: People learn better when they are required to actively recall and use knowledge soon after they learn it.

This idea was first documented by Herbert Spitzer, who did a study in the late 1930s with Iowa sixth-graders. Students who were quizzed about a reading assignment within twenty-four hours of their first reading the article scored much better than students who had been quizzed later or not at all. The results did not follow from different study habits or from extra preparation, as "students did not know when they would be quizzed, and they did not keep the article, so they had no chance to study on their own."

This has come to be called the Testing Effect. Spitzer concluded:

"Immediate recall in the form of a test is an effective method of aiding the retention of learning and should, therefore, be employed more frequently in the elementary school."

As the Chronicle piece points out, the Testing Effect runs counter to conventional wisdom:

"The testing effect cuts against the lay understanding of memory," says Jeffrey D. Karpicke, who recently completed a doctorate at Washington University and will become an assistant professor of psychology at Purdue University this fall. "People usually imagine memory as a storage space, as a space where we put things, as if they were books in a library. But the act of retrieval is not neutral. It affects the system."

This is another case where we rely on a metaphor beyond its range of applicability. Knowing where it fails and why can help us do our job better. In the case of human memory, instructors can help students improve their learning simply by giving a quiz promptly after teaching a new idea. Giving feedback promptly is even better, because it allows students to correct misconceptions before they become too firmly implanted.

Note that the Testing Effect does not get its benefit from getting students to do more or different studying:

The purpose of this quizzing is not to motivate students to pay attention and to study more; if those things happen, the researchers say, they are nice side effects. The real point is that quizzing, if done correctly, is a uniquely powerful method for implanting facts in students' memory.

The value of prompt quizzing isn't from students studying for the quiz. It is from the act of taking the quiz itself, making an effort to retrieve items from memory. As a psychology professor from Washington University in St. Louis is quoted in the article, "every time you test someone, you change what they know."

There are a lot of open questions about how the Testing Effect works and the conditions under which it is maximized, such as the role of feedback, immediate or otherwise. One of the major objections raised by some university professors I know is that such frequent, short-answer testing favors the memorization of isolated facts at the expense of broader conceptual learning. Current research is trying to answer some of these questions.

Many professors also balk at the idea of writing and grading all these quizzes. There are technological solutions to part of this problem. Many folks use Blackboard to give and grade simple quizzes. For writing code, we might try something like Nick Parlante's JavaBat.com tool. Because the Testing Effect does not depend on motivating students to study more, I don't think that grading the quizzes is all that important. The key is simply to get the students to do active recall and retrieval.

My teaching may already benefit from the Testing Effect. I do not give quizzes, but I do begin nearly every class session with an Opening Exercise that asks students to use some ideas we learned the previous session to solve a problem. In courses that teach programming, these exercises almost always involve writing code. In an algorithms or compiler course, the exercise might be a more general problem to solve. But in all cases the exercises require students to produce something, not select true/false or a multiple-choice answer. After students have had time to work on the problem, we debrief answers and discuss possibilities. This is the sort of immediate feedback that seems valuable to learners -- and which I have a hard time providing when I have to grade items. Later in the period, I may ask students to solve another exercise as well. Doing things always seems like a better idea than listening to me yammer on for 75 uninterrupted minutes.

I have only one concern that my approach doesn't deliver the Testing Effect. Because I don't grade the quiz, I fear that some students choose not to exert much effort -- and effort is the key! I'm not yet concerned enough to collect and grade the exercises myself, but maybe I should be. One thing that doesn't concern me is memorization of isolated facts at the expense of broader conceptual learning. My exercises ask students to use the knowledge, not parrot it back, and good exercises cause students to integrate new knowledge into their larger understanding.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 30, 2007 4:27 PM

But Raise Your Hand First

Weinberg on assessing the value of a critic's comments:

Here's an excellent test to perform before listening to any critic, inside or outside:

What have they written that shows they have the credentials to justify the worth of their criticism?

This test excludes most high-school and college teachers of English, most of your friends, lots of editors and agents, and your mother.

It also excludes your [inner] four-year-old, who's never written anything.

Computer science students should show due respect to their professors (please!), but they might take this advice to heart when deciding how deeply to take criticism of their writing -- their programs. Your goal in a course is to learn something, and the professor's job is to help you. But ultimately you are responsible for what you learn, and it's important to realize that the prof's evaluation is just one -- often very good -- source. Listen, try to decide what is most valuable, learn, and move on. You'll start to develop your own tastes and skills that are independent of the instructor's criticism.

Weinberg's advice is more specific. If the critic has never written anything that justifies the worth of their criticism, then the criticism may not be all that valuable. I've written before about the relevance of a CS Ph.D. to teaching software development. Most CS professors have written a fair amount of code in their days, and some have substantial industry experience. A few continue to program whenever they can. But frankly some CS profs don't write much code in their current jobs, and a small group have never written any substantial program. As sad as it is for me to say, those are the folks whose criticism you sometimes simply have to take with a grain of salt when you are learning from them.

The problem for students is that they are not ideally situated to decide whose criticism is worth acting on. Looking for evidence is a good first step. Students are also not ideally situated to evaluate the quality of the evidence, so some caution is in order.

Weinberg's advice reminds me of something Brian Marick said, on a panel at the OOPSLA'05 Educators' Symposium. He suggested that no one be allowed to teach university computer science (or was it software development?) unless said person had been a significant contributor to an open-source software project. I think his motivation is similar to what Weinberg suggests, only broader. Not only should we consider someone's experience when assessing the value of that person's criticism, we should also consider the person's experience when assessing the value of what they are able to teach us.

Of course, you should temper this advice as well with a little caution. Even when you don't have handy evidence, that doesn't mean the evidence doesn't exist. Even if no evidence exists, that doesn't mean you have nothing to learn from the person. The most productive learners find ways to learn whatever their circumstances. Don't close the door on a chance to learn just because of some good advice.

So, I've managed to bring earlier threads together involving Brian Marick and Gerald Weinberg, with a passing reference to Elvis Costello to boot. That will have to do for closure. (It may also boost Brian's ego a bit.)


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 30, 2007 7:01 AM

Weinberg on Writing

Whenever asked to recommend "must read" books, especially on computing, I always end up listing at least one book by Gerald Weinberg -- usually his The Psychology of Computer Programming. He has written a number of other classic books, on topics ranging from problem solving and consulting to teamwork and leadership. Now in a new stage of his career, Weinberg has moved from technical consulting to more general writing, including science fiction novels. He's also blogging, both on writing and on consulting.

I feel a connection to his blogs these days because they match a theme in my own reading and writing lately: telling stories as a way to teach. Even when Weinberg was writing his great non-fiction books -- The Psychology of Computer Programming, of course, but also An Introduction to General Systems Thinking, The Secrets of Consulting, and Becoming a Technical Leader -- he was telling stories. He claims that didn't realize that right away (emphasis added):

I'd like to say that I immediately recognized that reading fiction is another kind of simulation, but I'm not that insightful. Only gradually did I come to realize that a great deal of the popularity of my non-fiction books (and the books of a few others, like Tom DeMarco) is in the stories. They make for lighter reading, and some people object to them, but overall, those of us who use stories manage to communicate lots of hard stuff. Why? Because a good story takes the reader into a trance where s/he can "experience" events just as they can in a teaching simulation.

One of my favorite undergraduate textbooks was DeMarco's Structured Analysis and System Specification, and one of the reasons I liked it so was that it was a great book to read: no wasted words, no flashy graphics, just a well told technical story with simple, incisive drawings. Like Weinberg, I'm not sure I appreciated why I liked the book so much then, but when I kept wanting to re-read it in later years I knew that there was something different going on.

But "just" telling stories is different from teaching in an important way. Fiction and creative writers are usually told not to "have a point". Having one generally leads to stories that seem trite or forced. A story with a point can feel like a bludgeon to the reader's sensibility. A point can come out of a good story -- indeed I think that this is unavoidable with the best stories and the best story-tellers -- but it should rarely be put in.

Teachers differ from other story tellers in this regard. They are telling stories precisely because they have a point. Usually, there is something specific that we want others to learn!

(This isn't always true. For example, when I teach upper-division project courses, I want students to learn how to design big systems. In those courses, much of what students learn isn't specific content but habits of thought. For this purpose, "stories without a point" are important, because they leave the learner more freedom to make sense of their own experiences.)

But most of the time, teachers do have a point to make. How should the teacher as story-teller deal with this difference? Weinberg faces it, too, because even with his fiction, he is writing to teach. Here is what he says:

"If you want to send a message, go to Western Union." ...

It was good advice for novelists, script writers, children's writers, and actors, but not for me. My whole purpose in writing is to send messages.... I would have to take this advice as a caution, rather than a prohibition. I would have to make my messages interesting, embedding them in compelling incidents that would be worth reading even if you didn't care about the messages they contained.

For teachers, I think that the key to the effective story is context: placing the point to be learned into a web of ideas that the student understands. A good story helps the student see why the idea matters and why the student should change how she thinks or behaves. In effect, the teacher plays the role of a motivational speaker, but not the cheerleading, rah-rah sort. Students know when they are being manipulated. They appreciate authenticity even in their stories.

Weinberg's blogs make for light but valuable reading. Having learned so much from his books over the years, I enjoy following his thinking in this conversational medium, and I find myself still learning.

But, in the end, why tell stories at all? I believe the Hopi deserve the last word:

"The one who tells the stories rules the world."

Well, at least they have a better chance of reaching their students, and maybe improving their student evaluations.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

May 23, 2007 8:03 AM

The Strange and the Familiar

Artists and other creative types often define their artistic endeavor obliquely as "to make the familiar strange, and the strange familiar". I've seen this phrase attributed to the German poet Novalis, who coined it as "the essence of romanticism". You may have seen me use half of the phrase in my entry on a recent talk by Roy Behrens.

Recently, I began to wonder... Is this what teaching is!?

In a sense, the second half of the definition is indeed one of the teacher's goals: to help students understand ideas and use techniques that are, at the beginning of a course, new or poorly understood. The strange becomes familiar when it becomes a part of how we understand and think about our worlds.

But I think the first part of the definition -- to make the familiar strange -- is important, too, sometimes more important. Often the greatest learning occurs when we confront an idea that we think we understand, which seems to hold nothing new for us, which seems almost old, and are led beneath the surface to a wrinkle we never knew existed. Or when we are led to where the idea intersects with another in a way we never considered before and find that the old idea opens new doors. We find that our old understanding was incomplete at best and wrong at worst.

Many of the courses I am fortunate enough to teach are replete with opportunities both to make the strange familiar and to make the familiar strange. Programming Languages and Algorithms are two. So are Object-Oriented Programming and Artificial Intelligence. Frankly, so, too, is any course that we approach with open hearts and minds.

Teachers do what artists do. They just work in a different medium.

(A little googling finds that Alistair Cockburn wrote on this phrase last year. There is so much to read and know!)


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 22, 2007 3:55 PM

Someone Competent to Write Code

Students sometimes say to me:

I don't have to be good at <fill in the blank>. I'll be working on a team.

The implication is that the student can be good enough at one something and thus not have to get good at some other part of the discipline. Usually the skill they want to depend on is a softer skill, such as communication or analysis. The skill they want to avoid mastering is something they find harder, usually a technical skill and -- all too often -- programming.

First, let me stipulate that not everyone masters every part of computer science or software development. But this attitude usually makes some big assumptions about whether a company should want to entrust them with systems analysis or even "just" interacting with clients. I always tell students that many people probably won't want them on their teams if they aren't at least competent at all phases of the job. You don't have to be great at <fill in the blank>, or everything, but you do have to be competent.

I was reminded of this idea, which I've talked about here at least once before, when I ran across Brian Marick quoting an unnamed programmer:

What should teams do with the time they're not spending going too fast? They should invest in one of the four values I want to talk about: skill. As someone who wants to remain anonymous said to me:

I've also been tired for years of software people who seem embarrassed to admit that, at some point in the proceedings, someone competent has to write some damn code.

He's a programmer.

This doesn't preclude specialization. Maybe each team needs only one person who is competent to write some damn code. But most programmers who have been in the trenches are skeptical of working with teammates who don't understand what it's like to deliver code. Those teammates can make promises to customers that can't be met, and "design system architectures" that are goofy at best and unimplementable at worst.

One of the things I like about the agile methods is that they try to keep all of the phases of software development in focus at once, on roughly an even keel. That's not how some people paint Agile when they talk it down, but it is one of the critical features I've always appreciated. And I like how everyone is supposed to be able to contribute in all phases -- not as masters, necessarily, but as competent members of the team.

This is one of the ideas that Brian addresses in the linked-to article, which talks about the challenge facing proponents of so-called agile software development in an age when execution is more important than adoption. As always, Brian writes a compelling story. Read it. And if you aren't already following his blog in its new location, you should be.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 21, 2007 4:45 PM

More on Metaphors for Refactoring

Not enough gets said about the importance of abandoning crap.

-- Ira Glass, at Presentation Zen

Keith Ray wrote a couple of entries last month on refactoring. The first used the metaphor of technical debt and bankruptcy. The second used the simile of refactoring as like steering a car.

In my experience and that of others I've read, the technical debt metaphor works well with businesspeople. It fits well into their world view and uses the language that they use to understand their business situation. But as I wrote earlier, I don't find that this works all that well with students. They don't live in the same linguistic and experiential framework as businesspeople, and the way people typically perceive risk biases them against being persuaded.

A few years ago Owen Astrachan, Robert Duvall, and I wrote a paper called Bringing Extreme Programming to the Classroom that originally appeared at XP Universe 2001 and was revised for inclusion in Extreme Programming Perspectives. In that paper, we described some of the micro-examples we used at that time to introduce refactoring to novice students. My experience up to then and since has been that students get the idea and love the "a-a-a-ahhh" that comes from a well-timed refactor, but that most students do not adopt the new practice as a matter of course. When they get into the heat of a large project, they either try to design everything up front (and usually guess wrong, of course) or figure they can always make do with whatever design they currently have, whether designed or accreted.
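As a flavor of what such a micro-example looks like (my own sketch here, not one from the paper), an Extract Method refactoring collapses two near-duplicate report functions into one shared helper, with behavior unchanged:

```python
# A micro-refactoring sketch: remove duplication via Extract Method.
# The function and data names are illustrative, not from the paper.

# Before: the same total-and-format logic appears twice.
def invoice_report_before(items):
    total = sum(price for _, price in items)
    return f"Invoice total: ${total:.2f}"

def receipt_report_before(items):
    total = sum(price for _, price in items)
    return f"Receipt total: ${total:.2f}"

# After: the duplication is named and extracted into helpers.
def _total(items):
    return sum(price for _, price in items)

def _report(label, items):
    return f"{label} total: ${_total(items):.2f}"

def invoice_report(items):
    return _report("Invoice", items)

def receipt_report(items):
    return _report("Receipt", items)

items = [("widget", 9.99), ("gizmo", 5.00)]
# The refactoring preserves behavior; only the design changed.
assert invoice_report(items) == invoice_report_before(items)
assert receipt_report(items) == receipt_report_before(items)
print(invoice_report(items))  # Invoice total: $14.99
```

The "a-a-a-ahhh" moment comes from the asserts: students see the design improve while the observable behavior provably stays the same, which is the whole point of refactoring.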

Students simply don't live with most code long enough, even on a semester-long project, to come face-to-face with technical bankruptcy. When they do, they declare it and make do. I think in my compilers course this fall I'm going to try to watch for the first opportunity to help one of the student groups regain control of their project through refactoring, perhaps even as a demonstration for the whole class. Maybe that will work better.

That said, I think that Ray's steering wheel analogy may well work better for students than the debt metaphor. Driving is an integral part of most students' lives, and maybe we can connect to their ongoing narratives in this way. But the metaphor will still have to be backed up with a lot of concrete practice that helps them make recognizable progress. So watching for an opportunity to do some macro-level refactoring is still a good idea.

Another spoke in this wheel is helping students adopt the other agile practices that integrate so nicely with refactoring. As Brian Marick said recently in pointing out values missing from the Agile Manifesto,

Maybe the key driver of discipline is the practice of creating working software -- running, tested features -- at frequent intervals. If you're doing that, you just don't have time for sloppiness. You have to execute well.

But discipline -- especially discipline that conflicts with one's natural, internal, subconscious biases -- is hard to sell. In a semester-long course, by the time students realize they really did need that discipline, time is too short to recover properly. They need time to ingrain new practice as habit. For me as instructor, the key is "small", simple practices that people can do without high discipline, perhaps with some external guidance until their new habit forms. Short iterations are something I can implement as an instructor, and with enough attention paid throughout the term and enough effort exerted at just the right moments, I think I can make some headway.

Of course, as Keith reminds us and as we should always remember when trafficking in metaphor: "Like all analogies, there's a nugget of truth here, but don't take the analogy too far."


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 20, 2007 3:14 PM

Good and Bad Use

Recently I wrote about persuasion and teaching, in light of what we know about how humans perceive and react to risk and new information. But isn't marketing inherently evil, in being motivated by the seller's self-interest and not the buyer's, and thus incompatible with a teacher/student relationship? No.

First of all, we can use an idea associated with a "bad" use to achieve something good. Brian Marick points out that the motivating forces of XP are built in large part on peer pressure:

Some of XP's practices help with discipline. Pair programming turns what could be a solitary vice into a social act: you and your pair have to look at each other and acknowledge that you're about to cheat. Peer pressure comes into play, as it does because of collective code ownership. Someone will notice the missing tests someday, and they might know it was your fault.

This isn't unusual. A lot of social organizations provide a form of positive peer pressure to help individuals become better, and to create a group that adds value to the world. Alcoholics Anonymous is an example for people tempted to do something they know will hurt them; groups of runners training for a marathon rely on one another for the push they need to train on days when they don't feel like it and to exert the extra effort they need to improve. Peer pressure isn't a bad thing; it just depends on whom you choose for your peers.

Returning to the marketing world, reader Kevin Greer sent me a short story on something he learned from an IBM-trained salesman:

The best sales guy that I ever worked with once told me that when he received sales training from IBM, he was told to make sure that he always repeated the key points six times. I always thought that six times was overkill but I guess IBM must know what they're talking about. A salesman is someone whose income is directly tied to their ability to effectively "educate" their audience.

What we learn here is not anything to do with the salesman's motive, but with the technique. It is grounded in experience. Teachers have heard this advice in a different adage about how to structure a talk: "Tell them what you are about to tell them. Then tell them. Then tell them what you have just told them." Like Kevin, I felt this was overkill when I first heard it, and I still rarely follow the advice. But I do know from experience how valuable it can be for me, and in the meantime I've learned that how the brain works makes it almost necessary.

While I'm still not a salesman at heart, I've come to see how "selling" an idea in class isn't a bad idea. Steve Pavlina describes what he calls marketing from your conscience. His point ought not seem radical: "marketing can be done much more effectively when it's fully aligned (i.e., congruent) with one's conscience."

Good teaching is not about delusion but about conscience. It is sad that we are all supposed to believe the cynical interpretation of selling, advertising, and marketing. Even in the tech world we certainly have plenty of salient reasons to be cynical. We've all observed near-religious zealotry in promoting a particular programming language, or a programming style, or a development methodology. When we see folks shamelessly shilling the latest silver bullet as a way to boost their consulting income, they stand out in our minds and give us a bad taste for promotion. (Do you recognize this as a form of the availability heuristic?)

But.

I have to overcome my confirmation bias, other heuristic biases that limit my thinking, and my own self-interest in order to get students and customers to gain the knowledge that will help them; to try new languages, programming styles, and development practices that can improve their lives. What they do with these is up to them, but I have a responsibility to expose them to these ideas, to help them appreciate them, to empower them to make informed choices in their professional (and personal!) lives. I can't control how people will use the new ideas they learn with me, or if they will use them at all, but if I help them also learn how to make informed choices later, then I've done about the best I can do. And not teaching them anything isn't a better alternative.

I became a researcher and scholar because I love knowledge and what it means for people and the world. How could I not want to use my understanding of how people learn and think to help them learn and think better, more satisfyingly?


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

May 16, 2007 3:53 PM

All About Stories

Telling Stories, by Garmash

I find it interesting that part of what I learned again from Schneier's psych of risk paper leads to stories. But biases in how we think, such as availability and framing, make the stories we tell important -- if we want them to reach our audience as intended. Then again, perhaps my direction in this series follows from a bias in my own mind: I had been intending to blog about a confluence of stories about stories for a few weeks.

First, I was sitting in on lectures by untenured and adjunct faculty this semester, doing year-end evaluations. In the middle of one lecture, it occurred to me: The story matters. A good lecture is a cross product of story and facts (or data, or knowledge).

What if a lecture is only good as a story? It is like empty calories from sugar. We feel good for a while, but pretty soon we feel an emptiness. Nothing of value remains.

What if a lecture is only good for its facts? I see this often, and probably do this all too often. Good slides, but no story to make the audience care. The result is no interest. We may gain something, but we don't enjoy it much. And Schneier tells us that we might not even gain that much -- without a story that makes the information available to us, we may well forget it.

Soon after that, I ran across Ira Glass: Tips on storytelling at Presentation Zen. Glass says that the basic building blocks of a good story are the anecdote itself, which raises an implicit question, and moments of reflection, which let the listener soak in the meaning.

Soon after that, I was at Iowa State's HCI forum and saw a research poster on the role of narrative in games and other virtual realities. It referred to the Narrative Paradigm of Walter Fisher (unexpected Iowa connection!), which holds that "All meaningful communication is a form of storytelling." And: "People experience and comprehend their lives as a series of ongoing narratives." (emphasis added)

Then, a couple of weeks later, I read the Schneier paper. So maybe I was predisposed to make connections to stories.

Our audiences -- software developers, students, business people -- are all engaged in their own ongoing narratives. How do we connect what we are teaching with one of their narratives? When we communicate Big Ideas, we might even strive to create a new thread for them, a new ongoing narrative that will define parts of their lives. I know that OOP, functional programming, and agile techniques do that for developers and students. The stories we tell help them move in that direction.

Some faculty seem to make connections, or create new threads. Some "just" lecture. Others do something more interactive. These are the faculty whom students want to take for class.

Others don't seem to make the connections. The good news for them -- for me -- is that one can learn how to tell stories better. The first step is simply to be aware that I am telling a story, and so to seek the hook that will make an idea memorable, worthwhile, and valuable. A lot of the motivation lies with the audience, but I have to hold up my end of the bargain.

Not just any story will do. People want a story that helps them do something. I usually know when my story isn't hitting the mark; if not before telling it, then after. The remedy usually lies in one of two directions: finding a story that is about my audience and not (just) me, or making my story real by living it first. Real problems and real solutions mean more than concocted stories about abstract ideas of what might be.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 14, 2007 7:25 PM

Persuasion, Teaching, and New Practice

I have written three posts recently [ 1 | 2 | 3 ] on various applications of Bruce Schneier's The Psychology of Security to software development and student learning. Here's another quote:

The moral here is that people will be persuaded more by a vivid, personal story than they will by bland statistics and facts, possibly solely due to the fact that they remember vivid arguments better.

I think that this is something that many of us know intuitively from experience both as learners and as teachers. But the psychological evidence that Schneier cites gives us all the more reason to think carefully about many of the things we do. Consider how it applies to...

... "selling" agile ideas, to encouraging adoption among the developers who make software. The business people who make decisions about the making of software. The students who learn how to make software from us.

... "marketing" CS as a rewarding, worthwhile, challenging major and career path.

... "framing" ideas and practices for students whom we hope to help grow in some discipline.

Each of these audiences responds to vivid examples, but the examples that persuade best will be stories that speak to the particular desires and fears of each. Telling personal stories -- stories from our own personal experiences -- seems especially persuasive, because they seem more real to the listener. The listener probably hasn't had the experience we relate, but real stories have a depth to them that often can't be faked.

I think my best blogging fits this bill.

As noted in one of the earlier entries, prospect theory tells us that "People make very different trade-offs if something is presented as a gain than if something is presented as a loss." I think this suggests that we must frame arguments carefully and explicitly if we hope to maximize our effect. How can we frame stories most beneficially for student learning? Or to maximize the chance that developers adopt, or clients accept, new agile practices?

I put the words "selling", "marketing", and "framing" in scare quotes above for a reason. These are words that often cause academics great pause, or even lead academics to dismiss an idea as intellectually dishonest. But that attitude seems counter to what we know about how the human brain works. We can use this knowledge positively -- use our newfound powers for good -- or negatively -- for evil. It is our choice.

Schneier began his research with the hope of using what he learned for good, to help humans understand their own behavior better and so to overcome their default behavior. But he soon learned that this isn't all that likely; our risk-intuitive behaviors are automatic, too deeply-ingrained. Instead he hopes to pursue a middle road -- bringing our feelings of security into congruence with the reality of security. (For example, he now admits that palliatives -- measures that make users feel better without actually improving their security -- may be acceptable, if they result in closer congruence between feeling and reality.)

This all reminded me of Kathy Sierra's entry What Marketers Could Do For Teachers. There, she spoke the academically incorrect notion that teachers could learn from marketers because they:

  • "know what turns the brain on"
  • "know how to motivate someone almost instantly"
  • "know how to get--and keep--attention"
  • "spend piles of money on improving retention and recall"
  • "know how to manipulate someone's thoughts and feelings about a topic"

"Manipulate someone's thoughts and feelings about a topic." Sounds evil, or at least laden with evil potential. Sierra acknowledges the concern right up front...

[Yes, I'm aware how horrifying this notion sounds -- that we take teachers and make them as evil as marketers? Take a breath. You know that's not what I'm advocating, so keep reading.]

Kathy meant just what Schneier is trying to do: we can learn not the motivations of marketers but their understanding of the human mind, the human behavior that makes the practices of marketing possible. Our motivations are already good enough.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 11, 2007 9:29 AM

Fish is Fish

Yesterday's post on the time value of study reminded me a bit of Aesop's fable The Ant and the Grasshopper. So perhaps you'll not be surprised to read a post here about a children's book.

cover image of Fish is Fish

While at a workshop a couple of weeks ago, I had the pleasure of visiting my Duke friends and colleagues Robert Duvall and Owen Astrachan. In Owen's office was a copy of the book Fish Is Fish, by well-known children's book author Leo Lionni. Owen and Robert recommended the simple message of this picture book, and you know me, so... When I got back to town, I checked it out.

The book's web site summarizes the book as:

A tadpole and a minnow are underwater friends, but the tadpole grows legs and explores the world beyond the pond and then returns to tell his fish friend about the new creatures he sees. The fish imagines these creatures as bird-fish and people-fish and cow-fish and is eager to join them.

The story probably reaches young children in many ways, but the first impression it left on me was, "You can't imagine what you can't experience." Then I realized that this was both an overstatement of the story and probably wrong, so I now phrase my impression as, "How we imagine the rest of the world is strongly limited by who we are and the world in which we live." And this theme matters to grown-ups as much as children.

Consider the programmer who knows C or Java really well, but only those languages. He is then asked to learn functional programming in, say, Scheme. His instructor describes higher-order procedures and currying, accumulator passing and tail-recursion elimination, continuations and call/cc. The programmer sees all these ideas in his mind's eye as C or Java constructs, strange beasts with legs and fins.
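For a reader who, like that programmer, hasn't met these ideas outside C or Java, two of them can be sketched briefly. This is a rough illustration in Python rather than Scheme, and the function names are my own inventions, not anything from a particular course:

```python
# Higher-order procedure and currying: a function that builds and
# returns another function, specialized on its first argument.
def curried_add(x):
    def add_x(y):
        return x + y
    return add_x

add5 = curried_add(5)   # a new one-argument function

# Accumulator passing: thread the partial result through the recursion
# so that the recursive call is the last thing the function does
# (the shape that tail-recursion elimination rewards in Scheme).
def sum_list(xs, acc=0):
    if not xs:
        return acc
    return sum_list(xs[1:], acc + xs[0])
```

Seen this way, the ideas are small and concrete; they only look like strange beasts when imagined as C constructs.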

Or consider the developer steeped in the virtues and practices of traditional software engineering. She is then asked to "go agile", to use test-first development and refactoring browsers, pair programming and continuous integration, the planning game and YAGNI. The developer is aghast, seeing these practices in her mind's eye from the perspective of traditional software development, horrific beasts with index cards and lack of discipline.

When we encounter ideas that are really new to us, they seem so... foreign. We imagine them in our own jargon, our own programming language, our own development style. They look funny, and feel unnecessary or clunky or uncomfortable or wrong.

But they're just different.

Unlike the little fish in Lionni's story, we can climb out of the water that is our world and on to the beach of a new world. We can step outside of our experiences with C or procedural programming or traditional software engineering and learn to live in a different world. Smalltalk. Scheme. Ruby. Or Erlang, which seems to have a lot of buzz these days. If we are willing to do the work necessary to learn something new, we don't flounder in a foreign land; we make our universe bigger.

Computing fish don't have to be (just) fish.

----

(Ten years ago, I would have used OOP and Java as the strange new world. OO is mainstream now, but -- so sadly -- I'm not sure that real OO isn't still a strange new world to most developers.)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

May 10, 2007 4:04 PM

Internalization as Investment

Maybe I am making too much of this and this. Reader Chris Turner wrote to comment that not internalizing is natural:

The things that I internalize are things I personally use. ... "Use" != "Have to know for a quiz". The reason I internalize them is because it's faster to remember than to look it up. If I hardly ever use it, though, the time spent learning it is wasted time. YAGNI as applied to knowledge, I suppose. ... [E]specially in the software development field, that this is not only acceptable, but encouraged. I simply don't have enough time to learn all I can about every design pattern out there. I have, however, internalized several that have been useful to me in particular.

I agree that one will -- and needs to -- internalize only what one uses on a regular basis. So as an instructor I need to be sure to give students opportunities to use the ideas that I hope for them to internalize. However, I am often disappointed that, even in the face of these opportunities, students seem to choose other activities (or no programming activities at all) and thus miss the chance to internalize an important idea. I guess I'm relying on the notion that students can trust that the ideas I choose for them are worth learning. People who bothered to master a theoretical construct like call/cc were able to create the idea of a continuation-based web server, rather than having to wait to be a third-generation adopter of a technology created, pre-processed, and commoditized by others.

But there's more. As one of my colleagues said yesterday, part of becoming educated is learning how to internalize, through conscious work. Perhaps we need to do a better job helping students to understand that this is one of the goals we have for them.

This leads me to think about another human bias documented in Schneier's psychology article. I think that student behavior is also predicted by time discounting, the practice of valuing a unit of resource today more than the same unit of resource at a future date. In the financial world, this makes great sense, because a dollar today can be invested and earn interest, thus becoming more than $1 at the future date. The choice we face in valuing future resources is in predicting the interest rate.
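The arithmetic behind time discounting is simple enough to sketch; the rate r here is an invented stand-in, not a claim about real interest rates:

```python
# A unit of resource today is worth more than the same unit later,
# because it can earn a return in between.

def future_value(amount, r, years):
    # What an amount invested today grows to at rate r.
    return amount * (1 + r) ** years

def present_value(amount, r, years):
    # What a future amount is worth today: the inverse operation.
    return amount / (1 + r) ** years
```

At r = 5%, a dollar today becomes about $1.05 in a year, so a dollar promised a year from now is worth a bit less than a dollar today.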

I think that many of us, especially when we are first learning to learn, underestimate the "interest rate", the rate of return on time spent studying and learning-by-programming. Investment in a course early in the semester, especially when learning a new language or programming style, is worth much more than an equivalent amount of time spent later preparing for assignments and exams. And just as in the financial world, investing more time later often cannot make up for the ground lost to the early, patient investor. A particularly self-aware student once told me that he had used what seemed to be the easy first few weeks of my Programming Languages course to dive deep into Scheme and functional programming on his own. Later, when the course material got tougher, he was ready. Other students weren't so lucky, as they were still getting comfortable with syntax and idiom when they had to confront new content such as lexical addressing.

I like the prospect of thinking about the choice between internalizing and relying on external cues in terms of innate biases on risk and uncertainty. This seems to give me a concrete way to state and respond to the issue -- to make changes in how I present my classes that can help students by recognizing their subconscious tendencies.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 10, 2007 11:28 AM

Student Learning as Confronting Risk

Last time, I wrote about some ideas on human psychology from Bruce Schneier's The Psychology of Security paper. Part way through, Schneier jokes:

(If you read enough of these studies, you'll quickly notice two things. One, college students are the most common test subject. And two, any necessary props are most commonly purchased from a college bookstore.)

University psychology researchers are as lazy as university computer scientists, I guess.

Schneier doesn't mention a question that seems obvious to me: Does this common test audience create a bias in the results produced? If college students are not representative, then the results from these studies may not tell us much about other kinds of people's behaviors! In many ways, college students are not representative of the rest of the world. They are at a nexus in development, different from teenagers at home but typically not yet living under the same set of constraints as people in the working world.

But I'm not too worried. Enough other studies on risk and probabilistic reasoning have been done with adult subjects, and they give similar results. That isn't too surprising, because what we are testing here doesn't involve reflective choices that are conditioned by development or culture, but rather reactions to conditions. These reactions are largely reflexive, under the control of the amygdala, "a very primitive part of the brain that ... sits right above the brainstem.... [It] is responsible for processing base emotions that come from sensory inputs, like anger, avoidance, defensiveness, and fear."

But overthinking Schneier's joke got me thinking about something else: how do these ideas apply to students in their own world, where they score points for grades in a course, make choices about what they need to know, and incur the costs of studying something now or later?

Prospect theory tells us that people prefer sure gains to potential gains, and potential losses to sure losses. I often observe students exhibit two behaviors consistent with these biases:

  • When the choice is between using a concept, technique, or language construct that is already understood and using a new idea that will require some work but which offers potential long-term benefits, most students opt for the sure gain.

  • Like the refactoring example from last time, when the choice is between taking the hit now to clean up a design or program or taking the chance on the current version, with a potentially bigger loss, most students opt for the potential loss.

There may be simpler emotional explanations for these behaviors, but I am thinking about them in a new way in light of Schneier's article.

Like software developers in general, students certainly fall victim to optimism bias. I've always figured that, when students gamble on getting more work done than they reasonably can in a short period, they were reacting to a world of scarce resources, especially time. But now I see that whatever conscious choice they make in this regard is reinforced or even precipitated by a primitive bias toward optimism. Their world of scarce resources is in many ways much like the conditions under which this bias evolved. Further, this bias is probably reinforced by the fact that college CS students are just the sort who have been successful at playing the game this way for many years, and who have avoided the train wrecks that plagued lesser students in high school or in less challenging majors. It must be a shock to have a well-trained optimism bias and then run into something like call/cc. Suddenly, the glass isn't half full after all.

A few posts back, I wrote about reliance on external references. For students, I think that this turns out to be dangerous for another reason, related to another tendency that Schneier documents: the availability heuristic. This refers to the tendency that humans have to "assess the frequency of a class or the probability of an event by the ease with which instances ... can be brought to mind". People are overly influenced by vivid, memorable instances. When they have encountered only a few instances in the first place, I think they are also overly influenced by the instances in that small set. An instructor can go to great lengths to expose students to representative exemplars, but that small set will also have the potential to mislead when relied on too heavily.

Relying on external references to recall syntax is one thing; it will usually work out just fine, even if it is unacceptably slow, especially in the context of an exam. But relying on triggers for more general problem solving can create problems all its own... The most vivid, most memorable, or only instances you've seen will bias your results. I am a strong proponent of reasoning from examples, à la case-based reasoning, but this requires a disciplined use of a reliable "similarity metric". Students often don't have a reliable enough similarity metric in hand, and they often haven't learned yet to use it in a disciplined way. They tend to select the past example that they remember -- or understand!! -- the best, regardless of how well it applies in the current context. The result is often a not-so-good solution and a disillusioned student.
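The "similarity metric" idea can be made concrete with a toy sketch. Everything here -- the feature lists, the dictionary shape, the Jaccard-style overlap metric -- is invented for illustration, not part of any real case-based reasoning system:

```python
# Toy case-based retrieval: pick the stored example most similar to
# the current problem, rather than the one best remembered.

def similarity(a, b):
    # Fraction of features the two cases share (Jaccard-style overlap).
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def best_case(problem_features, cases):
    # cases: list of dicts with "name" and "features" keys (assumed shape).
    return max(cases, key=lambda c: similarity(problem_features, c["features"]))
```

The point of the sketch is the discipline: retrieval is driven by a fixed, explicit metric, not by whichever example happens to come to mind.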

Thinking these thoughts will help me teach better.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 05, 2007 9:46 PM

Students Scoring Points

Jimmy McGinty: You know what separates the winners from the losers?
Shane Falco: The score.

-- from The Replacements

Shane Falco and Jimmy McGinty in 'The Replacements'

Seeing one of my favorite sports-cliché films again, this time in French (welcome to Montreal!) prompts me to continue my recent series of posts on student attitudes with an entry I've had in my "to write" queue for a few weeks.

The April 13 issue of The Chronicle of Higher Education contained an interesting article by Walter Tschinkel, a biology prof at Florida State University, called Just Scoring Points. (Hurry to read the full article now... The Chronicle has let down its pay wall through May 8.)

Tschinkel argues that the common metaphors we use for teaching and learning -- filling empty vessels and building an edifice -- lead us in the wrong direction from how students actually think. He offers instead a sports metaphor:

When you play a sport, your preparation reaches a crescendo just before a match (exam). If you win the match (exam), you get points (grades) in proportion to your placement. You keep track of those points, strategizing about how to get more next time. The match leaves no residue other than the points. At the end of college, you enter the working world with your overall standing (grade-point average) and little more.

The metaphor isn't perfect, but it doesn't need to be. It only has to offer us some insight we might otherwise not have had. The points analogy explains why so many students want to know, "Is this going to be on the test?", and why a study guide prepared by the teacher is more appealing than a study guide they prepare for themselves. It accounts for how this semester so many students can not know something they demonstrated knowing just last semester.

And worse -- or should I say "better"!? -- this metaphor may give us insight into how we instructors contribute to the problem in how we ask questions, assign work, and evaluate our students' performance. We treat grades like times in a track meet or points on the basketball court, so why shouldn't our students? Maybe they are just adapting to the strange world we immerse them in. If we focused on competencies instead, they might take the knowledge they accrue more seriously, too.

This article offered another near-coincidence with my recent blogging, related to my observation about lack of internalization. Tschinkel describes giving a pop quiz on material he covered meticulously and unambiguously in a previous lecture. Only a quarter of his students knew the answer. He explained the material again and then gave them another pop quiz the next session. 35 percent. He explained the material yet again and gave them another pop quiz a week later. The result? 60 percent -- the same percentage who answered the same question on the final exam. That was in his freshman course; in his upper-division course, everyone finally gave the right answer -- on the fourth iteration. Sigh.

My experiment turned out better. Every student in my programming languages course ran short of time on their third scheduled quiz. I decided that it was better to find out what they know than to find out only that they needed more time, so when they arrived for our next class I handed them their quiz papers unannounced and gave them 15 more minutes to finish their answers. I told them that they were also free to change any answer they wished, even if they had gone home to look up the answer in the meantime. That seemed like a net win: to score those points, students would have had to care enough to look up the answers after the fact! All but one student put their unexpected time to good use, and quiz scores turned out pretty well. Not as well as I might have hoped, but better.

Tschinkel's prescription is one most of us already know is good for us, even if we don't always practice what we know: stop lecturing all the time, ask students questions that require understanding to answer, and integrate material throughout our curriculum rather than teaching a bunch of artificially stand-alone courses. Reading, writing, and discussing material takes more time, so we will probably cover less content, but remember: that's okay. Our students might actually learn more.

Sometimes a fun little movie can be more than sugar. It can remind us of an unexamined metaphor we live by. Even if it does so in French:

Vous savez ce qui sépare les gagnants des perdants? Les points.

(No, I don't speak French, but Google does.)


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 05, 2007 1:32 PM

Q School, Taking Exams, and Learning Languages

My previous entry has been a slowly-dawning realization. On Thursday, I felt another trigger in a similar vein. I heard an interview with John Feinstein, who was promoting his new book on Q school. "Q School" is how professional golfers qualify for the PGA tour. This isn't just a little tournament. The humor and drama in some of these stories surprised me. By nearly everyone's standard, all of these players are incredibly good, much, much better than you and especially me. What separates those who make it to the Tour from those who don't?

Feinstein said that professional golfers acknowledge there are five levels of quality in the world of golf:

  1. hitting balls on the range, well
  2. playing a round of golf, well
  3. playing well in a tournament
  4. playing well in a tournament under pressure, with a chance to win
  5. playing well in a major under pressure, with a chance to win

What separates those who make it from those who don't is the gap between levels 3 and 4. Then, once folks qualify for the tour, the gap between levels 4 and 5 becomes a hurdle for players who want to be "players", not just the journeymen who make a good living but live in the shadows of the best.

I see the same thing in the world of professional tennis. On a smaller stage, I've experienced these levels as a competitor -- playing chess, competing in various academic venues, and doing academic work.

What does it take to make a step up to the next level? Hard work. Physical work or, in chess and academia, intellectual work. But mostly, it is mental. For most of the guys at Q school, and for many professional golfers and tennis players, the steps from 3 to 4 and from 4 to 5 are more about the mind than the body, more about concentration than skill.

Feinstein related a Tiger Woods statistic that points to this difference in concentration. On the PGA Tour in 2005, Tiger faced 485 putts of 5 feet or less. The typical PGA Tour pro misses an average of 1 such putt each week. Tiger missed 0. The entire season.

Zero is not about physical skill. Zero is about concentration.

This sort of concentration, the icy calm of the great ones, probably demoralizes many other players on the tour, especially the guys trying to make the move from level 4 to level 5. It might well infuriate that poor guy simply trying to qualify for the tour. He may be doing every thing he possibly can to improve his skills, to improve his mental approach. Sometimes, it just isn't enough.

Why did this strike me as relevant to my day job? I listened to the Feinstein interview on the morning I gave my final exam for the semester.

Students understand course material at different levels. That is part of what grades are all about. Many students perform at different levels, on assignments and on exams. At exam time, and especially at final exam time, many students place great hope in the idea that they really do get it, but that they just aren't able to demonstrate it on the exam.

There may be guys in Q School harboring similar hope, but reality for them is simple: if you don't demonstrate, you don't advance.

It's true that exam performance level is often not the best predictor of other kinds of performance in the world, and some students far exceed their academic performance level when they reach industry. But most students' hopes in this regard are misplaced. They would be much better off putting their energy into getting better. Whether they really are better than their exam performance or are in need of better exam performance skills, getting better will serve them well.

But it's more than just exams. There are different levels of understanding in everything we learn, and sometimes we settle for less than we can achieve. That's what my last entry was trying to say -- there is a need to graduate from the level at which one requires external reference to a level at which one has internalized essential knowledge and can bring it to bear when needed.

I am sure someone will point me to Bloom's taxonomy, and it is certainly relevant. But that taxonomy always seems so high-falutin' to me. I'm thinking of something closer to earth, something more concrete in terms of how we learn and use programming languages. For example, there might be five levels of performance with a programming language feature:

  1. recognize an idea in code
  2. program with the idea, using external references
  3. program with the idea, without external reference, but requiring time to "reinvent" the idea
  4. program with the idea, fluently
  5. program with the idea, fluently and under pressure

I don't know if there are five levels here, or if these are the right levels, but they seem a reasonable first cut for Concourse C at Detroit Metro. (This weekend brings the OOPSLA 2007 spring planning meeting in one of North America's great international cities, Montreal.) But this idea of levels has been rolling around my mind for a while now, and this interview has brought it to the top of my stack, so maybe I'll have something more interesting to say soon.

The next step is to think about how all this matters to my students, and to me as an instructor. Knowing about the levels, what should students do? How might I feed this back into how I teach my course and how I evaluate students?

For now, my only advice to students is to do what I hope to do in a week or so: relax for a few minutes at the advent of summer!


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

May 04, 2007 11:12 PM

Internalized Knowledge and External Triggers

With the end of finals week, I've begun thinking about my experience teaching this semester. I graded the last homework assignment for my course this week and ran across this comment in one of the submissions:

I spent 4 hours or so this past week cobbling bits together, and tried to assemble them into a coherent mess, but failed. All I have are some run-time errors that don't mean much to me. At this point I would rather write the entire interpreter in another language than use Scheme.

The last sentence made me sad, even as I know that not every student will leave the course enamored with what I've taught. I was again struck to receive such a submission with no questions asked, despite e-mail everywhere and plentiful, often empty office hours. I run into this sort of submission more frequently these days.

This term, I've also encountered student time trouble on quizzes more frequently. Even students who seem to understand the material well have run short of time. The standard comment is, "If I just had more time...". Now, if you've had me for class, even this class, you can appreciate the sentiment. But I do have a pretty good sense of what most students have been able to accomplish in the allotted quiz times over past few years in this course. Is time trouble more frequent these days? I think so.

I'm not one who likes to talk about students in the "good ol' days". Students were no smarter, no better, and no harder working when I was in school. To think so is usually just selective (and aging) memory. But I do believe that occasionally there are systematic changes in how students behave, and recognizing these changes is important if we intend to teach them -- help them learn -- effectively.

Consider students running short of time on a quiz. I think I understand at least part of the problem now. Students these days need a "crutch": access to reference material. Open notes are an example. Students love the idea of open-book exams. And how do they program? With uninterrupted access to all the reference material they want. The Help Desk in DrScheme contains everything they need to know about Scheme, and then some. So, there is no need to memorize syntax. If they need help writing a letrec? Type that string into the Help Desk search box, hit return, see the canonical form, and then maybe even copy and paste an example into their code. The web and Google are likewise at their ubiquitous disposal, ready to serve their every need at any moment.

This is how they expect the world to be.

But that isn't how exams work, or job interviews, or most jobs. You have to internalize some knowledge to be effective.

What was it like in the old days? I think just as many students had a similar resistance to internalizing knowledge, but we all had fewer alternatives. We didn't have the technology to pull it off, so we adapted. We crammed and memorized. Our only other choice was to find a new major. (And what major didn't require knowing some stuff -- at least any major that also offered the prospects of a paying job?)

Maybe our technology is making us dumber because we become less adaptive to our surroundings. I think that's probably an overstatement. Perhaps even the opposite is true: we are becoming more enabled, by offloading unnecessary data and focusing on more important stuff. I think that's probably an overstatement, too. In any case, the phenomenon is probably something to be aware of.

As an instructor, what is my solution? I can imagine many possibilities. Shorten my exams and quizzes. Allow external support, such as open books and notes. Give my exams on-line, in an environment that more closely resembles the students' working environment. Educate them about the way the world works, and try to help them more directly to move toward a world in which they internalize basic knowledge and skills. I think the last of these is the right answer, and worthy of the effort, even if it turns out to be a futile attempt. But whatever I try, being aware of my students' more fundamental dependence on external triggers will help me out in future courses. If I take it into account...


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

April 28, 2007 12:55 PM

Open Mind, Closed Mind

I observed an interesting phenomenon working in a group this morning.

Another professor, an undergrad student, and I are at Duke University this weekend for a workshop on peer-led team-learning in CS courses. This is an idea borrowed from chemistry educators that aims to improve recruitment and retention, especially among underrepresented populations. One of our first activities was to break off into groups of 5-8 faculty and do a sample PLTL class session, led by an experienced undergrad peer leader from one of the participating institutions. My group's leader was an impressive young woman from Duke who is headed to Stanford this fall for graduate work in biomedical informatics.

One of the exercises our group did involved Sudoku. First, we worked on a puzzle individually, and then we came back together to work as a group. I finished within a few minutes, before the leader called time, while no one else had filled in much of the grid yet.

Our leader asked us to describe bits about how we had solved the puzzle, with an eye toward group-writing an algorithm. Most folks described elements of the relatively naive combinatorial approach of immediately examining constraints on individual squares. When my turn came, I described my usual approach, which starts with a preprocessing of sorts that "cherry picks" obvious slots according to groups of rows and columns. Only later do I move on to constraints on individual squares and groups of squares.

I was surprised, because no one seemed to care. They seemed happy enough with the naive approach, despite the fact that it hadn't served them all that well while solving the puzzle earlier. Maybe they dismissed my finishing quickly as an outlier, perhaps the product of a Sudoku savant. But I'm no Sudoku savant; I simply have had a lot of practice and have developed one reasonably efficient approach.

The group didn't seem interested in a more efficient approach, because they already knew how to solve the problem. My approach didn't match their own experiences, or their theoretical understanding of the problem. They were comfortable with their own understanding.

(To be honest, I think that most of them figured they just needed to "go faster" in order to get done faster. If you know your algorithms, you know that a constant-factor speedup is no match for a better algorithm! We still wouldn't have finished.)
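The naive strategy the group described -- examine the constraints on each empty square and fill in any square with exactly one remaining candidate -- can be sketched in a few lines. This is a hypothetical illustration, not the algorithm the group actually wrote; it uses a 4x4 mini-grid (2x2 boxes, with 0 marking an empty square) to keep the example small:

```python
def candidates(grid, r, c, n=4, box=2):
    """Values not already used in cell (r, c)'s row, column, or box."""
    used = set(grid[r]) | {grid[i][c] for i in range(n)}
    br, bc = box * (r // box), box * (c // box)
    used |= {grid[i][j] for i in range(br, br + box) for j in range(bc, bc + box)}
    return {v for v in range(1, n + 1) if v not in used}

def fill_naked_singles(grid):
    """Repeatedly fill any empty cell that has exactly one candidate."""
    n = len(grid)
    progress = True
    while progress:
        progress = False
        for r in range(n):
            for c in range(n):
                if grid[r][c] == 0:
                    cands = candidates(grid, r, c, n)
                    if len(cands) == 1:
                        grid[r][c] = cands.pop()
                        progress = True
    return grid
```

On a full 9x9 puzzle you would pass box=3, and on harder puzzles this loop stalls as soon as no square has a unique candidate -- which is exactly where the kind of "cherry-picking" preprocessing over rows and columns earns its keep.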

Dr. Phil -- How's that workin' for ya?

After making this observation, I also had a realization. In other situations, I behave just like this. Sometimes, I have an idea in mind, one I like and am comfortable with, and when confronted with something that might be better, I am likely to dismiss it. Hey, I just need to tweak what I already know. Right. I imagine Dr. Phil asking in his Texas drawl, "How's that workin' for ya?" Not so well, but with a little more time...

When I want to learn, entering a situation with a closed mind is counterproductive. This is, of course, true when I walk into the room saying, "I don't want to learn anything new." But it is just as important, and far more dangerous, when I think I want to learn but am holding tightly to my preconceptions and idiosyncratic experiences. In that case, I expect that I will learn, but really all I can do is rearrange what I already know. And I may end up disappointed when I don't make a big leap in knowledge or performance.

One of the PLTL undergrad leaders working with us gets it. He says that one of the greatest benefits of being a peer leader is interacting with the students in his groups. He has learned different ways to approach many specific problems and different high-level approaches to solving problems more generally. And he is the group leader.

Later we had fun with a problem on text compression, using Huffman coding as our ultimate solution. I came up with an encoding targeted to a particular string, which used 53 bits instead of the 128 bits of a standard ASCII encoding. No way a Huffman code can beat that. Part way through my work on my Huffman tree, I was even surer. The end result? 52 bits. It seems my problem-solving ego can be bigger than warranted, too. Open mind, open mind.
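The comparison at the heart of that exercise is easy to reproduce. This sketch computes the encoded length of a string under an optimal Huffman code without building the codes themselves; since the string from the workshop isn't recorded here, the example string is my own, not the one from the session:

```python
import heapq
from collections import Counter

def huffman_bits(text):
    """Total encoded length of `text` under an optimal Huffman code.

    Trick: each merge in Huffman's algorithm adds one bit to the code of
    every symbol beneath the merged node, so the total encoded length
    equals the sum of the internal-node weights created along the way.
    """
    freqs = Counter(text)
    if len(freqs) == 1:
        return len(text)  # degenerate case: a lone symbol still needs 1 bit each
    heap = list(freqs.values())
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        w1 = heapq.heappop(heap)
        w2 = heapq.heappop(heap)
        total += w1 + w2
        heapq.heappush(heap, w1 + w2)
    return total
```

For the 11-character string "abracadabra", this gives 23 bits versus 88 bits at 8 bits per ASCII character -- the same kind of gap that let a Huffman code sneak past a hand-rolled encoding.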


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 27, 2007 6:04 PM

Welcome to a New Century

While at Iowa State to hear Donald Norman speak at the HCI forum a couple of days ago, I spent a few minutes wandering past faculty offices. I love to read what faculty post on and around their doors -- cartoons, quotes, articles, flyers, posters, you name it. Arts and humanities offices are often more interesting than science faculty offices, at least in a lateral-thinking way, but I enjoy them all.

At ISU, one relatively new assistant prof had posted the student evaluations from his developmental robotics course. Most were quite positive and so make for good PR in attracting students, but he posted even the suggestions for improvement.

My favorite student quote?

It's nice to take a CS course that wasn't designed in the '70s.

Spot on. I wonder just how archaic most computer science courses must seem to students who were born in the late 1980s. Gotta teach those fundamentals!


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

April 23, 2007 3:44 PM

Discipline and Experience

I don't have time to keep up with the XP mailing list these days, especially when it spins off into a deep conversation on a single topic. But while browsing quickly last week before doing an rm on my mailbox, I ran across a couple of posts that deserved some thought.

Discipline in Practice

In a thread that began life as "!PP == Hacking?", discussion turned to how much discipline various XP practices demand, especially pair programming and writing tests (first). Along the way, Ron Jeffries volunteered that he is not always able to maintain discipline without exception: "I am the least disciplined person you know ..."

Robert Biddle followed:

I was pleased to read this. I'm always puzzled when people talk about discipline as a good thing in itself. I would consider it a positive attribute of a process that it required less, rather than more, discipline.

One of the things I've learned over the years is that, while habit is a powerful mechanism, processes that require people to pay close attention to details and to do everything just so are almost always destined to fail for the majority of us. It doesn't matter if the process is a diet, a fitness regimen, or a software methodology. People eventually stop doing them. They may say that they are still following the process, but they aren't, and that may be worse than just stopping. Folks often start the new discipline with great energy, psyched to turn over a new leaf. But unless they can adapt their thinking and lifestyle, they eventually tire of having to follow the rules and backslide. Alistair Cockburn has written a good agile software book that starts with the premise that any process must build on the way that human minds really work.

Later in the thread, Robert notes that -- contrary to what many folks who haven't tried XP for very long think -- XP tolerates a lower level of discipline than other methodologies:

For example, as a developer, I like talking to the customer lots; and as a customer I like talking to developers. That takes way less discipline for me than working with complex specs.

My general point is that it makes sense for processes and tools to work with human behaviour, supporting and protecting us where we are weak, and empowering us where we are strong.

He also points out that the boundary between high discipline and low discipline probably varies from person to person. A methodology (or diet, or training regimen) capable of succeeding within a varied population must hit close to the population's typical threshold and be flexible enough that folks who lie away from that threshold have a chance to find a personal fit.

As a methodology, XP requires us to change how we think, but it places remarkably few picayune demands on our behavior. A supportive culture can go a long way toward helping well-intended newbies give it a fair shake. And I don't think it is as fragile in its demands as the Atkins Diet or many detailed running plans for beginners.

Accelerating Experience

In a different thread, folks were discussing how much experience one needs in order to evaluate a new methodology fairly, which turned to the bigger question of how much project experience one can realistically obtain. Projects that last 6 months or 2 years place something of an upper bound on our experience. I like Laurent Bossavit's response:

> It takes a lot of time to get experienced.
> How many software development projects can you
> experience in a life-time? How many can you
> experience with three years of work experience?

Quite a lot, provided they're small enough.

Short cycles let you make mistakes faster, more often. They let us succeed faster. They let us learn faster.

A post later in this thread by someone else pointed out that XP and other agile approaches change the definition of a software project. Rather than thinking in terms of a big many-month or multi-year project, we think in terms of 2- or 3-week releases. These releases embody a full cycle from requirements through delivery, which means that we might think of each release as a "whole" project. We can do short retrospectives, we can adapt our practices, we can shift direction in response to feedback.

Connections... This reminds me of an old post from Laurent's blog that I cited in an old post of my own. If you want to increase your success rate, double your failure rate; if you want to double your failure rate, all you have to do is halve your project length. Ideas keep swirling around.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

April 21, 2007 4:54 PM

Making Something Tangible

It occurred to me recently that one thing makes administrative work different from my previous kinds of work, something that accounts for an occasional dissatisfaction I never used to feel as a matter of course.

In my administrative role, I can often work long and hard without producing anything.

It's not that I don't do anything as department head. It's just that the work doesn't always result in a product, something tangible, something complete that one can look to and say, "I made that." Much of a head's work is about relationships and one-on-one interaction. These are all valuable outcomes, and they may result in something tangible down the road. Meeting with students, parents of prospective students, industry partners, or prospective donors all may result in something tangible -- eventually. And the payoff -- say, from a donor -- can be quite tangible, quite sizable! But in the meantime, I sometimes feel like, "What did I accomplish today?"

This realization struck me a week or so back when I finished producing the inaugural issue of my department's new newsletter. I wrote nearly all of the content, arranged the rest, and did all of the image preparation and document layout. When I got done, I felt that sense one gets from making something.

I get that feeling when I write software. I think that one of the big wins from small, frequent releases is the shot of adrenaline that it gives the developers. We rarely talk about this advantage, instead speaking of the value of the releases in terms of customer feedback and quality. But the buzz that we developers feel in producing a whole something, even if it's a small whole, probably contributes more than we realize to motivation and enjoyment. That's good for the developers, and for the customer, too.

I get that feeling when I write code and lessons for teaching a class, too. The face-to-face delivery creates its own buzz.

This makes me wonder how students feel about frequent release dates, or small, frequent homework assignments. I often use this approach in my courses, but again more for "customer-side" and quality reasons. Maybe students feel good producing something, making tangible progress, every week or so? Or does the competing stress of several other courses, work, and life create an overload? Even if students prefer this style, it does create a new force to be addressed: small frequent failures must be horribly disheartening. I need to be sure that students feel challenge and success.

Sheepishly, I must admit that I've never asked my students how they feel about this. I will next week. If you want to share your thoughts, please do.


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Software Development, Teaching and Learning

April 19, 2007 4:05 PM

Walking Out The Door

Today I am reminded to put a variant of this pattern into practice:

The old-fashioned idea (my door is always open; when you want to talk, c'mon in) was supposed to give people down the line access to you and your ears. The idea was that folks from layers below you would come and clue you in on what was really happening.

I don't think that ever worked for most of us. Most folks didn't have the courage to come in, so we only learned what was on the minds of the plucky few. We were in our environment, not theirs. We couldn't verify what we were hearing by looking, touching, and listening in the first person. And we got fat from all that sitting.

I ran into this quote in Jason Yip's post Instead of opening the door, walk through it. Jason is seconding an important idea: that an open door policy isn't enough, because it leaves the burden for engaging in communication on others -- and there are reasons that these other folks may not engage, or want to.

This idea applies in the working-world relationship between supervisors and their employees, but it also applies to the relationship between a service provider and its customers. This includes software developers and their customers. If we as software developers sit in a lab, even with our door open, our customer may never come in to tell us what they need. They may be afraid to bother us; they may not know what they need. Agile approaches seek to reduce the communication gap between developers and customers, sometimes to the point of putting them together in a big room. And these approaches encourage developers to engage the customer in frequent communication in a variety of ways, from working together on requirements and acceptance to releasing working software for customer use as often as possible.

As someone who is sitting in a classroom with another professor and a group of grad students just now, I can tell you that this idea applies to teachers and students. Two years ago tomorrow, I wrote about my open office hours -- they usually leave me lonely like the Maytag Repairman. Learning works best when the instructor engages the student -- in the classroom and in the hallway, in the student union, on the sidewalk, and at the ballgame. Often, students yearn to be engaged, and learning is waiting to happen. It may not happen today, in small talk about the game, but at some later time. But that later time may well depend on the relationship built up by lots of small talk before. And sometimes the learning happens right there on the sidewalk, when the students feel able to ask their data structures question out among the trees!

But above, I said that today reminded me of a variant of this pattern... Beginning Monday and culminating today, I was fortunate to have a member of my department engage me in conversation, to question a decision I had made. Hurray! The open door (and open e-mail box) worked. We have had a very good discussion by e-mail today, reaching a resolution. But I cannot leave our resolution sitting in my mail archive. I have to get up off my seat, walk through the door, and ensure that the discussion has left my colleague satisfied, in good standing with me. I'm almost certain it has, as I have a long history with this person as well as a lot of history doing e-mail.

But I have two reasons to walk through the door and engage now. First, my experience with e-mail tells me that sometimes I am wrong, and it is almost always worth confirming conclusions face-to-face. If I were "just" faculty, I might be willing to wait until my next encounter with this colleague to do the face-to-face. My second reason is that I am department head these days. This places a burden on communication, due to the real and perceived differences in power that permeate my relationships with colleagues. The power differential means that I have to take extra care to ensure that interactions, whether face to face or by e-mail, are building our relationship and not eroding it. Still being relatively new to this management business, it still feels odd that I have to be careful in this way. These folks are my colleagues, my friends! But as I came to realize pretty quickly, moving into the Big Office Downstairs changes things, whatever we may hope. The best way to inoculate ourselves from the bad effects of the new distance? Opening the door and walking through it.

Oh, and this applies to the relationship between teachers and students, too. I understand that as an advisor to my grad students, having been a grad student whose advisor encouraged healthy and direct communication. But I see it in my relationship with undergraduates, too, even in the classroom. A little care tending one-on-one and one-on-many relationships goes a long way.

(And looking back at that old post about the Friends connection, I sometimes wonder if any of my colleagues has a good Boss Man Wallingford impression yet. More likely, one of my students does!)


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Software Development, Teaching and Learning

April 14, 2007 3:38 PM

If Only We Had More Time...

Unlike some phrases I find myself saying in class, I don't mind saying this one.

Used for the wrong reasons, it would signal a problem. "If we had more time, I would teach you this important concept, but..." ... I've left it out because I didn't plan the course properly. ... I've left it out because preparing to teach it well would take too much time. ... I'm running behind; I wasted too much time speaking off-topic. There are lots of ways that not covering something important is wrong.

But there is a very good reason why it's not possible to cover every topic that comes up. There is so much more! There are more interesting ideas in this world -- in programming languages, in object-oriented programming, in algorithms -- than we can cover in a 3-credit, 15-week course. The ideas of computing are bigger than any one course, and some of the cool things we do in class are only the beginning. This is a good thing. Our discipline is deep, and it rewards the curious with unexpected treasures.

More practically, "If only we had more time..." is a cue to students who do have more time -- graduate students looking for research projects, undergrad honors students looking for thesis topics, and undergrads who might be thinking of grad school down the line.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 12, 2007 6:54 PM

Agile Moments: Accountability and Continuous Feedback in Higher Ed

It's all talk until the tests run.
-- Ward Cunningham

A couple of years ago, I wrote about what I call my Agile Moments, and soon after wrote about another. If I were teaching an agile software development course, or some other course with an agile development bent, I'd probably have more such posts. (I teach a compiler development course this fall...) But I had an Agile Moment yesterday afternoon in an un-software-like place: a talk on program assessment at universities.

Student outcomes assessment is one of those trendy educational movements that come and go like the seasons or the weather. Most faculty in the trenches view it with unrelenting cynicism, because they've been there before. Some legislative body or accrediting agency or university administrator decides that assessment is essential, and they deem it Our Highest Priority. The result is an unfunded mandate on departments and faculty to create an assessment plan and implement the plan. The content and structure of the plans are defined from above, and these are almost always onerous -- they look good from above but they look like unhelpful busy work to professors and students who just want to do computer science, or history, or accounting.

But as a software developer, and especially as someone with an agile bent, I see the idea of outcomes assessment as a no-brainer. It's all about continuous feedback and accountability.

Let's start with accountability. We don't set out to write software without a specification or a set of stories that tell us what our goal is. Why do we think we should start to teach a course -- or a four-year computer science degree program! -- without having a spec in hand? Without some public document that details what we are trying to achieve, we probably won't know if we are delivering value. And even if we know, we will probably have a hard time convincing anyone else.

The trouble is, most university educators think that they know what an education looks like, and they expect the rest of the world to trust them. For most of the history of universities, that's how things worked. Within the university, faculty shared a common vision of what to do when, and outside the students and the funders trusted them. The relationship worked out fine on both ends, and everyone was mostly happy.

Someone at the talk commented that the call for student and program outcomes assessment "breaks the social contract" between a university and its "users". I disagree and think that the call merely recognizes that the social contract is already broken. For whatever reason, students and parents and state governments now want the university to demonstrate that it is accountable.

While this may be unsettling, it really shouldn't surprise us. In the software world, most anyone would find it strange if the developers were not held accountable to deliver a particular product. (That is even more true in the rest of the economy, and this difference is the source of much consternation among folks outside the software world -- or the university.) One of the things I love about what Kent Beck has been teaching for the last few years is the notion of accountability, and the sort of honest communication that aims at working fairly with the people who hire us to build software. I don't expect less of my university.

In the agile software world, we often think about to whom we are accountable, and even focus on the best word to use, to send the right message: client, customer, user, stakeholder, .... Who is my client when I teach a CS course? My customer? My stakeholders? These are complex questions, with many answers depending on the type of school and the level at which we ask them. Certainly students, parents, the companies who hire our graduates, the local community, the state government, and the citizens of the state are all partial stakeholders and thus potential answers as client or customer.

Outcomes assessment forces an academic department to specify what it intends to deliver, in a way that communicates the end product more effectively to others. This offers better accountability. It also opens the door to feedback and improvement.

When most people talk about outcomes assessment, they are thinking of the feedback component. As an agile software developer, I know that continuous feedback is essential to keeping me on track and to helping me improve as a developer. Yet we teach courses at universities and offer degrees to graduates while collecting little or no data as we go along. This is the data that we might use to improve our course or our degree programs.

The speaker yesterday quoted someone as saying that universities "systematically deprive themselves" of input from their customers. We sometimes collect data, but usually at the end of the semester, when we ask students to evaluate the course and the instructor using a form that often doesn't tell us what we need to know. Besides, the end of the semester is too late to improve the course while teaching the students giving the feedback!

From whom should I as instructor collect data? How do I use that data to improve a course? How do I use that data to improve my teaching more generally? To whom must I provide an accounting of my performance?

We should do assessment because we want to know something -- because we want to learn how to do our jobs better. External mandates to do outcomes assessment demotivate, not motivate. Does this sound anything like the world of software development?

Ultimately, outcomes assessment comes down to assessing student learning. We need to know whether students are learning what we want them to learn. This is one of those issues that goes back to the old social contract and common understanding of the university's goal. Many faculty define what they want students to know simply as "what our program expects of them" and whether they have learned it as "have they passed our courses?" But such circular definitions offer no room for accountability and no systematic way for departments to get better at what they do.

The part of assessment everyone seems to understand is grading, the assessment of students. Grades are offered by many professors as the primary indicator that we are meeting our curricular goals: students who pass my course have learned the requisite content. Yet even in this area most of us do an insufficient job. What does an A in a course mean? Or an 87%? When a student moves on to the next course in the program with a 72% (a C in most of my courses) in the prerequisite course, does that mean the student knows 72% of the material 100% of the way, 100% of the material 72% of the way, some mixture of the two, or something altogether different? And do we want such a student writing the software on which we will depend tomorrow?

Grades are of little use to students except perhaps as carrots and sticks. What students really need is feedback that helps them improve. They need feedback that places the content and process they are learning into the context of doing something. More and more I am convinced that we need to think about how to use the idea of course competencies that West and Rostal implemented in their apprenticeship-based CS curriculum as a way to define for students and instructors alike what success in a course or curriculum means.

My mind made what it thinks is one last connection to agile software development. One author suggests that we think of "assessment as narrative", as a way of telling our story. Collecting the right data at the right times can help us to improve. But it can also help us tell our story better. I think a big part of agile development is telling our story: to other developers on the team, to new people we hire, to our clients and customers and stakeholders, and to our potential future clients and customers. The continuous feedback and integration that we do -- both on our software and on our teams -- is an essential cog in defining and telling that story. But maybe my mind was simply in overdrive when it made this connection.

It was at the end of this talk that I read the quote which led me to think of Kurt Vonnegut, coincidental to his passing yesterday, and which led me to write this entry. So it goes.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

April 05, 2007 8:57 PM

Feats of Association

An idea is a feat of association.
-- Robert Frost

Yesterday I went to a talk by Roy Behrens, whose earlier talk I enjoyed very much and blogged about. That time he talked about teaching as a "subversive inactivity", and this time he spoke more on the topic of his scholarly interest, in creativity and design, ideas and metaphors, similarities and differences, even camouflage! Given that these are his scholarly interests, I wasn't surprised that this talk touched on some of the same concepts as his teaching talk. There are natural connections between how ideas are formed at the nexus of similarity and difference and how one can best help people to learn. I found this talk energizing and challenging in a different sort of way.

In the preceding paragraph, I first wrote that Roy "spoke directly on the topic of his scholarly interest", but there was little direct about this talk. Instead, Roy gave us parallel streams of written passages and images from a variety of sources. This talk felt much like an issue of his commonplace book/journal Ballast Quarterly Review, which I have blogged about before. The effect was mesmerizing, and it had its intended effect in illustrating his point: that the human mind is a connection-making machine, an almost unwilling creator of ideas that grow out of the stimuli it encounters. We all left the talk with new ideas forming.

I don't have a coherent, focused essay on this talk yet, but I do have a collection of thoughts that are in various stages of forming. I'll share what I have now, as much for my own benefit as for what value that may have to you.

Similarity and difference, the keys to metaphor, matter in the creation of software. James Coplien has written an important book that explicates the roles of commonality analysis and variability analysis in the design of software that can separate domain concerns into appropriate modules and evolve gracefully as domain requirements change. Commonality and variability; similarity and difference. As one of Roy's texts pointed out, the ability to recognize similarity and difference is common to all practical arts -- and to scientists, creators, and inventors.

The idea of metaphor in software isn't entirely metaphorical. See this paper by Noble, Biddle, and Tempero that considers how metaphor and metonymy relate to object-oriented design patterns. These creative fellows have explored the application of several ideas from the creative arts to computing, including deconstruction and postmodernism.

To close, Roy showed us the 1952 short film Blacktop: A Story of the Washing of a School Play Yard. And that's the story it told, "with beautiful slow camera strides, the washing of a blacktop with water and soap as it moves across the asphalt's painted lines". This film is an example of how to make something fabulous out of... nothing. I think the more usual term he used was "making the familiar strange". Earlier in his talk he had read the last sentence of this passage from Maslow (emphasis added):

For instance, one woman, uneducated, poor, a full-time housewife and mother, did none of these conventionally creative things and yet was a marvelous cook, mother, wife, and home-maker. With little money, her home was somehow always beautiful. She was a perfect hostess. Her meals were banquets, her taste in linens, silver, glass crockery and furniture was impeccable. She was in all these areas original, novel, ingenious, unexpected, inventive. I learned from her and others like her that a first-rate soup is more creative than a second-rate painting, and that, generally, cooking or parenthood or making a home could be creative while poetry need not be; it could be uncreative.

Humble acts and humble materials can give birth to unimagined creativity. This is something of a theme for me in the design patterns world, where I tell people that even novices engage in creative design when they write the simplest of programs and where so-called elementary patterns are just as likely to give rise to creative programs as Factory or Decorator.

Behrens's talk touched on two other themes that run through my daily thoughts about software, design, and teaching. One dealt with tool-making, and the other with craft and limitations.

At one point during the Q-n-A after the talk, he reminisced about Let's Pretend, a radio show from his youth that told stories. The value to him as a young listener lay in forcing -- no, allowing -- him to create the world of the story in his own mind. Most of us these days are conditioned to approach an entertainment venue looking for something that has already been assembled for us, for the express purpose of entertaining us. Creativity is lost when our minds never have the opportunity to create, and when our minds' ability to create atrophies from disuse. One of Roy's goals in teaching graphic design students is to help them see that they have the tools they need to create, to entertain.

This is true for artists, but in a very important sense it is true for computer science students, too. We can create. We can build our own tools--our own compilers, our own IDEs, our own applications, our own languages... anything we need! That is one of the great powers of learning computer science. We are in a new and powerful way masters of our own universe. That's one of the reasons I so enjoy teaching Programming Languages and compilers: because they confront CS students directly with the notion that their tools are programs just like any other. You never have to settle for less.

Finally, my favorite passage from Roy's talk plays right into my weakness for the relationship between writing and programming, and for the indispensable role of limitation in creativity and in learning how to create. From Anthony Burgess:

Art begins with craft, and there is no art until craft has been mastered. You can't create until you're willing to subordinate the creative impulses to the constriction of a form. But the learning of craft takes a long time, and we all think we're entitled to shortcuts.... Art is rare and sacred and hard work, and there ought to be a wall of fire around it.

One of my favorite blog posts is from March 2005, when I wrote a piece called Patterns as a Source of Freedom. Only in looking back now do I realize that I quoted Burgess there, too -- but only the sentence about willing subordination! I'm glad that Roy gave the context around that sentence yesterday, because it takes the quote beyond constriction of form to the notion of art growing out of craft. It then closes with that soaring allusion. Anyone who has felt even the slightest sense of creating something knows what Burgess means. We computer scientists may not like to admit that what we do is sometimes art, and that said art is rare and sacred, but that doesn't change reality.

Good talk -- worth much more in associations and ideas than the lunch hour it cost. My university is lucky to have Roy Behrens, and other thinkers like him, on our faculty.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

April 04, 2007 5:57 PM

Science Superstars for an Unscientific Audience

Somewhere, something incredible is waiting to be known.
-- Carl Sagan

Some time ago I read John Scalzi's On Carl Sagan, a nostalgic piece of hero worship for perhaps the most famous popularizer of science in the second half of the 20th century. Having written a piece or two of hero worship of my own, I empathize with Scalzi's nostalgia. He reminisces about what it was like to be an 11-year-old astronomer wanna-be, watching Sagan on TV, "talk[ing] with celebrity fluidity about what was going on in the universe. He was the people's scientist."

Scalzi isn't a scientist, but he has insight into the importance of someone like Sagan to science:

... Getting science in front of people in a way they can understand -- without speaking down to them -- is the way to get people to support science, and to understand that science is neither beyond their comprehension nor hostile to their beliefs. There need to be scientists and popularizers of good science who are of good will, who have patience and humor, and who are willing to sit with those who are skeptical or unknowing of science and show how science is already speaking their language. Sagan knew how to do this; he was uncommonly good at it.

We should be excited to talk about our work, and to seek ways to help others understand the beauty in what we do, and the value of what we do to humanity. But patience and humor are too often in short supply.

I thought of Scalzi's piece when I ran across a link to an entry on Lance Fortnow's recently retired blog about Yisroel Brumer's Newsweek My Turn column Let's Not Crowd Me, I'm Only a Scientist. Seconding Brumer's comments, Fortnow laments that the theoretical computer scientist seems to be at a disadvantage in trying to be Sagan-like:

Much as I get excited about the P versus NP problem and its great importance to all science and society, trying to express these ideas to uninterested laypeople always seems to end up with "Should I buy an Apple or a Windows machine?"

(Ooh, ooh! Mr. Kotter, Mr. Kotter! I know the answer to that one!)

I wonder if Carl Sagan ever felt like that. Somehow I doubt it. Maybe it's an unfair envy, but astronomy and physics seem more visceral, more romantic to the general public. We in computing certainly have our romantic sub-disciplines. When I did AI research, I could always find an interested audience! People were fascinated by the prospect of AI, or disturbed by it, and both groups wanted to talk about it. But as I began to do work in more inward-looking areas, such as object-oriented programming or agile software development, I felt more like Brumer felt as a scientist:

Just a few years ago, I was a graduate student in chemical physics, working on obscure problems involving terms like quantum mechanics, supercooled liquids and statistical thermodynamics. The work I was doing was fascinating, and I could have explained the basic concepts with ease. Sure, people would sometimes ask about my work in the same way they say "How are you?" when you pass them in the hall, but no one, other than the occasional fellow scientist, would actually want to know. No one wanted to hear about a boring old scientist doing boring old science.

So I know the feeling reported by Brumer and echoed by Fortnow. My casual conversation occurs not at cocktail parties (they aren't my style) but at 8th-grade girls' basketball games, and in the hall outside dance and choir practices. Many university colleagues don't ask about what I do at all, at least once they know I'm in CS. Most assume that computers are abstract and hard and beyond them. When conversation does turn to computers, it usually turns to e-mail clients or ISPs. If I can't diagnose some Windows machine's seemingly random travails, I am regarded quizzically. I can't tell if they think I am a fraud or an idiot. Isn't that what computer scientists know, what they do?

I really can't blame them. We in computing don't tell our story all that well. (I have a distinct sense of deja vu right now, as I have blogged about this topic several times before.) The non-CS public doesn't know what we in CS do because the public story of computing is mostly non-existent. Their impressions are formed by bad experiences using computers and learning how to program.

I take on some personal responsibility as well. When my students don't get something, I have to examine what I am doing to see whether the problem is with how I am teaching. In this case, maybe I just need to be more interesting! At least I should be better prepared to talk about computing with a non-technical audience.

(By the way, I do know how to fix that Windows computer.)

But I think that Brumer and Fortnow are talking about something bigger. Most people aren't all that interested in science these days. They are interested in the end-user technology -- just ask them to show you the cool features on their new cell phones -- but not so much in the science that underlies the technology. Were folks in prior times more curious about science? Has our "audience" changed?

Again, we should think about where else responsibility for such change may lie. Certainly our science has changed over time. It is often more abstract than it was in the past, farther removed from the user's experience. When too many layers of abstraction stand between the science and the human experience, the natural response of the non-scientist is to view the science as magic, impenetrable by the ordinary person. Or maybe it's just that the tools folks use are so commonplace that they pay the tools no mind. Do we old geezers think much about the technology that underlies pencils and the making of paper?

The other side of this issue is that Brumer found, after leaving his scientific post for a public policy position, that he is now something of a star among his friends and acquaintances. They want to know what he thinks about policy questions, about the future. Ironic, huh? Scientists and technologists create the future, but people want to talk to wonks about it. They must figure that a non-scientist has a better chance of communicating clearly with them. Either they don't fear that something will be lost in translation via the wonk, or they decide that the risk is worth taking, whatever the cost.

This is the bigger issue: understanding and appreciation of science by the non-scientist, the curiosity that the ordinary person brings to the conversation. When I taught my university's capstone course, aimed at all students as their culminating liberal-arts core "experience", I was dismayed by the lack of interest among students in the technological issues that face them and their nation. But it seems sometimes that even CS students don't want to go deeper than the surface of their tools. This is consistent with a general lack of interest in how the world works, and the role that science and engineering play in defining today's world. Many, many people are talking and writing about this, because a scientifically "illiterate" person cannot make informed decisions in the public arena. And we all live with the results.

I guess we need our Carl Sagan. I don't think it's in me, at least not by default. People like Bernard Chazelle and Jeannette Wing are making an effort to step out and engage the broader community on its own terms. I wish them luck in reaching Sagan's level and will continue to do my part on a local scale.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

March 29, 2007 4:20 PM

It Seemed Like a Good Idea at the Time

At SIGCSE a couple of weeks ago, I attended an interesting pair of complementary panel sessions. I wrote about one, Ten Things I Wish They Would Have Told Me..., in near-real time. Its complement was a panel called "It Seemed Like a Good Idea at the Time". Here, courageous instructors got up in front of a large room full of their peers to do what for many is unthinkable: tell everyone about an idea they had that failed in practice. When the currency of your profession is creating good ideas, telling everyone about one of your bad ideas is unusual. Telling everyone that you implemented your bad idea and watched it explode, well, that's where the courage comes in.

My favorite story from the panel was from the poor guy who turned his students loose writing Java mail agents -- on his small college's production network. He even showed a short video of one of the college's IT guys describing the effects of the experiment, in a perfect deadpan. Hilarious.

We all like good examples that we can imitate. That's why we are all drawn to panels such as "Ten Things..." -- for material to steal. But other than the macabre humor we see in watching someone else's train wreck, what's the value in a panel full of bad examples?

The most immediate answer is that we may have had the same idea, and we can learn from someone else's bad example. We may decide to pitch the idea entirely, or to tweak our idea based on the bad results of our panelist. This is useful, but the space of ideas -- good and bad -- is large. There are lots of ways to tweak a bad idea, and not all of them result in a good idea. And knowing that an idea is bad leaves us with the real question unanswered: Just what should we do?

(The risk that the cocky among us face is the attitude that says, "Well, I can make that work. Just watch." This is the source of great material for the next "It Seemed Like a Good Idea at the Time" panel!)

All this said, I think that failed attempts are invaluable -- if we examine them in the right way. Someone at SIGCSE pointed out that negative examples help us to create a framework in which to validate hypotheses. This is how science learns from failed experiments. This idea isn't news to those of us who like to traffic in patterns. Bad attempts put us on the road to a pattern. We discover the pattern by using the negative example to identify the context in which our problem lies and the forces that drive a solution. Sometimes a bad idea really was a good idea -- had it only been applied in the proper context, where the forces at play would have resolved themselves differently. We usually only see these patterns after looking at many, many examples, both good and bad, and figuring out what makes them tick.

A lot of CS instruction aims to expose students to lots of examples, in class and in programming assignments. Too often, though, we leave students to discover context and forces on their own, or to learn them implicitly. This is one of the motivations of my work on elementary patterns: to help guide students in the process of finding patterns in their own and other people's experiences.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

March 22, 2007 6:53 PM

Patterns in Space and Sound -- Merce Cunningham

A couple of nights ago I was able to see a performance by the Merce Cunningham Dance Company here on campus. This was my first exposure to Cunningham, who is known for his exploration of patterns in space and sound. My knowledge of the dance world is limited, but I would call this "abstract dance". My wife, who has some background in dance, might call it something else, but not "classical"!

The company performed two pieces for us. The first was called eyeSpace, and it seemed the more avant garde of the two. The second, called Split Sides, exemplifies Cunningham's experimental mindset quite well. From the company's web site:

Split Sides is a work for the full company of fourteen dancers. Each design element was made in two parts, by one or two artists, or, in the case of the music, by two bands. The order in which each element is presented is determined by chance procedure at the time of the performance. Mathematically, there are thirty-two different possible versions of Split Sides.

And a mathematical chance it was. At intermission, the performing arts center's director came out on stage with five people, most of them local dancers, and a stand on which to roll a die. Each of the five assistants in turn rolled the die, to select the order of the five design elements in question: the pieces, the music, the costumes, the backgrounds, and a fifth element that I've forgotten. This ritual heightened the suspense for the audience, even though most of us probably had never seen Split Sides before, and it must have added a little spice for the dancers, who do this piece on tour over and over.
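The arithmetic behind "thirty-two versions" is simple: five elements, each with two possible orderings, gives 2^5 = 32. Just for fun, here is a small Python sketch of the chance procedure. The odd/even rule is my guess at how a die roll might choose between two orderings, and the part labels are placeholders -- this is an illustration, not the company's actual protocol.

```python
import random

# The five design elements of Split Sides, each made in two parts.
# (The fifth is the one I've forgotten, so it stays generic here.)
ELEMENTS = ["pieces", "music", "costumes", "backgrounds", "fifth element"]

def chance_procedure():
    """Roll a die per element: an odd roll puts part 1 first, even puts part 2 first."""
    order = {}
    for element in ELEMENTS:
        roll = random.randint(1, 6)
        order[element] = ("part 1", "part 2") if roll % 2 else ("part 2", "part 1")
    return order

tonight = chance_procedure()   # one of the possible versions of the evening

# Each element contributes a factor of 2, so the number of distinct versions is:
print(2 ** len(ELEMENTS))  # 32
```

Thirty-two performances, and no guarantee the company ever dances the same version twice on a tour.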

In the end, I preferred the second dance and the second piece of music (by Radiohead), but I don't know to what extent this enjoyment derived from one of the elements or the two together. Overall, I enjoyed the whole show quite a bit.

Not being educated in dance, my take on this sort of performance is often different from the take of someone who is. In practice, I find that I enjoy abstract dance even more than classical. Perhaps this comes down to me being a computer scientist, an abstract thinker who enjoys getting lost in the patterns I see and hear on stage. A lot of fun comes in watching the symmetries being broken as the dance progresses and new patterns emerge.

Folks trained in music may sometimes feel differently, if only because the patterns we see in abstract dance are not the patterns they might expect to see!

Seeing the Merce company perform reminded me of a quote about musician Philip Glass, which I ran across in the newspaper while in Carefree for ChiliPLoP:

... repetition makes the music difficult to play.

"As a musician, you look at a Philip Glass score and it looks like absolutely nothing," says Mark Dix, violist with the Phoenix Symphony, who has played Glass music, including his Third String Quartet.

"It looks like it requires no technique, nothing demanding. However, in rehearsal, we immediately discovered the difficulty of playing something so repetitive over so long a time. There is a lot of room for error, just in counting. It's very easy to get lost, so your concentration level has to be very high to perform his music."

When we work in the common patterns of our discipline -- whether in dance, music, or software -- we free our attention to focus on the rest of the details of our task. When we work outside those patterns, we are forced to attend to details that we have likely forgotten even existed. That may make us uncomfortable, enough so that we return to the structure of the pattern language we know. That's not necessarily a bad thing, for it allows us to be productive in our work.

But there can be good in the discomfort of the broken pattern. One certainly learns to appreciate the patterns when they are gone. The experience can remind us why they are useful, and worth whatever effort they may require. The experience can also help us to see the boundaries of their usefulness, and maybe consider a combination, or see a new pattern.

Another possible benefit of working without the usual patterns is hidden in Dix's comments above. Without the patterns, we have to concentrate. This provides a mechanism whereby we hone our concentration and our attention to detail. I think it also allows us to focus on a new technique. Programming at the extremes, without an if-statement, say, forces you to exercise the other techniques you know. The result may be that you are a better user of polymorphism even after you return to the familiar patterns that include imperative selection.
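To make the if-less exercise concrete, here is a minimal sketch in Python -- my own toy example, not from any particular course. Each object knows how to compute its own area, so the caller never branches on type:

```python
import math

class Circle:
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return math.pi * self.radius ** 2

class Square:
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

def total_area(shapes):
    # No isinstance checks, no if/elif chain on a type tag --
    # dynamic dispatch does the selecting for us.
    return sum(shape.area() for shape in shapes)

print(total_area([Circle(1), Square(2)]))  # ~7.14
```

Adding a Triangle later means writing one new class; total_area never changes. That is the payoff the exercise is meant to make students feel.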

And I can still enjoy abstract dance and music as an outsider.

There is another, more direct connection between Cunningham's appearance and software. He has worked with developers to create a new kind of choreography software called DanceForms 1.0. While his troupe was in town, they asked the university to try to arrange visits with computer science classes to discuss their work. We had originally planned for them to visit our User Interface Design course and our CS I course (which has a media computation theme), but schedule changes on our end prevented that. I had looked forward to hearing Cunningham discuss what makes his product special, and to see how they had created "palettes of dance movement" that could be composed into dances. That sounds like a language, even if it doesn't have any curly braces.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

March 15, 2007 4:55 PM

Writing about Doing

Elvis Costello

Last week I ran across this quote by noted rocker Elvis Costello:

Writing about music is like dancing about architecture -- it's really a stupid thing to want to do.

My immediate reaction was an intense no. I'm not a dancer, so my reaction was almost exclusively to the idea of writing about music or, by extension, other creative activities. Writing is the residue of thinking, an outward manifestation of the mind exploring the world. It is also, we hope, occasionally a sign of the mind growing, and those who read can share in the growth.

I don't imagine that dancing is at all like writing in this respect.

Perhaps Costello meant specifically writing about music and other pure arts. But I did study architecture for a while, and so I know that architecture is not a pure art. It blends the artistic and purely creative with an unrelenting practical element: human livability. People have to be able to use the spaces that architects create. This duality means that there are two levels at which one can comment on architecture, the artistic and the functional. Costello might not think much of people writing about the former, but he may allow for the value in people writing about the latter.

I may be overthinking this short quote, but I think it might have made more sense for Costello to have made this analogy: "Writing about music is like designing a house about dancing ...". But that doesn't have any of the zip of his original!

I can think of one way in which Costello's original makes some sense. Perhaps it is taken out of context, and implicit in the context is the notion of only writing about music. When someone is only a critic of an art form, and not a doer of the art form, there is a real danger of becoming disconnected from what practitioners think, feel, and do. When the critic is disconnected from the reality of the domain, the writing loses some or all of its value. I still think it is possible for an especially able mind to write about an art without doing it, but that is a rare mind indeed.

What does all this have to do with a blog about software and teaching? I find great value in many people's writing about software and about teaching. I've learned a lot about how to build more expressive, more concise, and more powerful software from people who have shared their experiences writing such software. The whole software patterns movement is founded upon the idea that we should share our experiences of what works, when and why. The pedagogical patterns community and the SIGCSE community do the same for teachers. Patterns really do have to be founded in experience, so "only" writing patterns without practicing the craft turns out to be a hollow exercise for both the reader and the writer, but writing about the craft is an essential way for us to share knowledge. I think we can share knowledge both of the practical, functional parts of software and teaching and of the artistic element -- what it is to make software that people want to read and reuse, to make courses that people want to take. In these arts, beauty affects functionality in a way that we often forget.

I don't yet have an appetite for dancing about software, but my mind is open on the subject.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

March 12, 2007 12:34 PM

SIGCSE Day 3: Jonathan Schaeffer and the Chinook Story

The last session at SIGCSE was the luncheon keynote address by Jonathan Schaeffer, who is a computer scientist at the University of Alberta. Schaeffer is best known as the creator of Chinook, a computer program that in the mid 1990s became the second best checker player in the history of the universe and that, within three to five months, will solve the game completely. If you visit Chinook's page, you can even play a game!

I'm not going to write the sort of entry that I wrote about Grady Booch's talk or Jeannette Wing's talk, because frankly I couldn't do it justice. Schaeffer told us the story of creating Chinook, from the time he decided to make an assault on checkers, through Chinook's rise and monumental battles with world champion Marion Tinsley, up to today. What you need to do is to go read One Jump Ahead, Schaeffer's book that tells the story of Chinook up to 1997 in much more detail. Don't worry if you aren't a computer scientist; the book is aimed at a non-technical audience, and in this I think Schaeffer succeeds. In his talk, he said that writing the book was the hardest thing he has ever done -- harder than creating Chinook itself! -- because of the burden of trying to make his story interesting and understandable to the general reader.

If you are a CS professor or student, you'll still learn a lot from the book. Even though it is non-technical, Schaeffer does a pretty good job introducing the technical challenges that faced his team, from writing software to play the game in parallel across as many processors as he could muster, to building databases of the endgame positions so that the program could play endings perfectly. (A couple of these databases are large even by today's standards. Just try to imagine how large a billion-billion-entry table must have seemed in 1994!) He also helps us feel what he must have felt when non-intellectual problems arose, such as a power failure in the lab that had been computing a database for weeks, or a mix-up at the hotel where Chinook was playing its world championship match that resulted in oven-like temperatures in the playing room. This snafu may account for one of Chinook's losses in that match.
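Chinook's databases were built by retrograde analysis over billions of checkers positions, which is far beyond a blog post. But the core idea -- tabulate which positions are won or lost, then consult the table to play perfectly -- fits in a few lines of Python on a toy game. This is my illustration on a simple take-1-2-or-3-stones subtraction game (whoever takes the last stone wins), not Schaeffer's code:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_winning(stones):
    """True if the player to move can force a win from this position.

    The cache plays the role of the endgame database: every position is
    solved once and then merely looked up.
    """
    return any(not is_winning(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Consult the solved table to play perfectly."""
    for take in (1, 2, 3):
        if take <= stones and not is_winning(stones - take):
            return take
    return 1  # losing position: any legal move is as good as another

# The losing positions turn out to be the multiples of 4.
print([n for n in range(1, 13) if not is_winning(n)])  # [4, 8, 12]
```

The same table-of-solved-positions idea, scaled up with retrograde analysis (solving backward from terminal positions instead of recursing forward) and enormous engineering, is what let Chinook play its endings flawlessly.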

As a computer scientist, what I found most compelling about the talk was hearing about the dreams, goals, and daily routine of a regular computer scientist. Schaeffer is clearly a bright and talented guy, but he tells his story as that of an Everyman -- a guy with a big ego who obsessively pursued a research goal, a goal that came to have as much of a human element as a technical one. He has added to our body of knowledge, as well as our lore. I think that non-technical readers can appreciate the human intrigue in the Chinook-versus-Tinsley story as well. It's a thriller of a sort, with no violence in its path.

I knew a bit about checkers before I read the book. Back in college, I was trying to get my roommate to join me in a campus-wide chess tournament that would play out over several weeks. I was a chessplayer, but he was only a casual one, so he decided one way to add a bit of spice was for both of us to enter the checkers part of the same tournament. Neither of us knew much about checkers other than how to move the pieces. The dutiful students that we were, we went to Bracken Library and checked out several books on checkers strategy and studied them before the tournament. That's where I learned that checkers has a much narrower search space than chess, and that many of its critical variations are incredibly narrow and also incredibly deep. This helped me to appreciate how Tinsley, the human champion, once computed a variation over 40 moves long at the table while playing Chinook. (Schaeffer did a wonderful job explaining the fear this struck in him and his team: How can we beat this guy? He's more of a machine than our program!)

That said, knowing how to play checkers will help as you read the book, but it's not essential. If you do know, dig out a checkers board and play along with some of the game scores as you read. To me, that added to the fun.

Reading the book is worth the effort if only to learn about Chinook's nemesis, Marion Tinsley ( Chinook page | wikipedia page), the 20th-century checkers player (and math Ph.D. from Ohio State) who until the time of his death was the best checkers player in the world, almost certainly the best checkers player in history, and in many ways unparalleled by any competitor in any other game or sport I know of. Until his first match against Chinook, Tinsley lost only 3 games in 42 years. He retired during the 1960s because he was so much better than his competition that competing was no fun. The appearance of Chinook on the scene, rather than bothering or worrying him (as it did most in the checkers establishment, and as the appearance of master-level chess programs did at first in the chess world), reinvigorated Tinsley, as it now gave him an opponent that played at his level and, even better, had no fear of him. By Tinsley's standard, guys like Michael Jordan, Tiger Woods, and even Lance Armstrong are just part of the pack in their respective sports. Armstrong's prolonged dominance of the Tour de France is close, but Tinsley won every match he played and nearly every game, not just in the single premiere event each year.

The book is good, but the keynote talk was compelling in its own way. Schaeffer isn't the sort of electric speaker who holds his audience by force of personality. He really seemed like a regular guy, but one telling the story of his own passions, in a way that gripped even someone who knew the ending all the way to the end. (His t-shirt with pivotal game positions on both front and back was a nice bit of showmanship!) And one story that I don't remember from the book was even better in person: He talked about how he used to lie in bed next to his wife and fantasize... about Marion Tinsley, and beating him, and how hard that would be. One night his wife looked over and asked, "Are you thinking about him again?"

Seeing this talk reminded me of why I love AI and loved doing AI, and why I love being a computer scientist. There is great passion in being a scientist and programmer, tackling a seemingly insurmountable problem and doggedly fighting it to the end, through little triumphs and little setbacks along the way. Two thumbs up to the SIGCSE committee for its choice. This was a great way to end SIGCSE 2007, which I think was one of the better SIGCSEs in recent years.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 10, 2007 12:39 PM

SIGCSE This and That

Here are some miscellaneous ideas from throughout the conference...

Breadth in Year 1

On Friday, Jeff Forbes and Dan Garcia presented the results of their survey of introductory computer science curricula across the country. The survey was methodologically flawed in many ways, which makes it not so useful for drawing any formal conclusions. But I did notice a couple of interesting ways that schools arrange their first-year courses. For instance, Caltech teaches seven different languages in their intro sequence. Students must take three -- two required by the department, and a third chosen from a menu of five options. Talk about having students with different experiences in the upper-division courses! I wonder how well their students learn all of these languages (some are small, like Scheme and Haskell), and I wonder how well this would work at other schools.

Many Open Doors

In the same session, I learned that Georgia Tech offers three different versions of CS1: the standard course for CS majors, a robotics-themed course for engineers, and the media computation course that I adopted last fall for my intro course. Even better, they let CS majors take any of the CS1s to satisfy their degree requirement.

This is the sort of first-year curriculum that we are moving to at UNI. For a variety of reasons, we have had a hard time arriving at a common CS1/CS2 sequence that satisfies all of our faculty. We've had parallel tracks in Java and C/C++ for the last few years, and we've decided to make this idea of alternate routes into the major a feature of our program, rather than a curiosity (or a bug!). Next year, we will offer three different CS1/CS2 sequences. Our idea is that with "many open doors", more different kinds of students may find what appeals to them about CS and pursue a major or minor. Recruitment, retention, faculty engagement -- I have high hopes that this broadening of our CS1 options will help our department get better.

No Buzz

Last year, the buzz at SIGCSE was "Python rising". That seemed a natural follow-up to SIGCSE 2005, where the buzz seemed to be "Java falling". But this year, neither of these trends seems to have gained steam. Python is out there seeing some adoptions, but Java remains strong, and it doesn't seem to be going anywhere fast.

I don't feel a buzz at SIGCSE this year. The conference has been useful to me in many ways, and I've enjoyed many sessions. But there doesn't seem to be energy building behind any particular something that points to a different sort of future.

That said, I do notice the return of games to the world of curriculum. Paper sessions on games. Special sessions on games. Game development books at every publisher's booth. (Where did they come from? Books take a long time to write!) Even still, I don't feel a buzz.

The idea that causes the most buzz for me personally is the talk of computational thinking, and what that means for computer science as a major and as a discipline for all.

RetroChic CS

I am sitting in a session on nifty assignments just now. The assignments have ranged from nifty to less so, but the last takes us all back to the '70s and '80s. It is Dave Reed's explorations with ASCII art, modernized as ASCIImations. He started his talk with a quote that seems a fitting close to this entry:

Boring is the new nifty.
-- Stuart Reges, 2006

Up next: the closing luncheon and keynote talk by Jonathan Schaeffer, of Chinook fame.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 10, 2007 10:18 AM

SIGCSE Day 1, Continued: Teaching Honors

In the Teaching Tips session on Thursday, Stuart Reges suggested offering an honors section of CS1. I agree. Many people think that teaching honors students requires more work, or more material, but it doesn't really. It does require deeper engagement, because the students will want to go deeper. From the instructor's perspective, the result is better interaction and the opportunity to talk and be more like a real computer scientist. I've worked with honors students before, and it is a blast when you get even an ordinary group. Unfortunately, we only recently started an honors program at my university, and in that time CS1 enrollments have been too small to afford an honors section.

David Gries commented on Stuart's suggestion, saying something to the effect, "It seems to me that it's the weaker students who need this experience, not the better ones." I also agree with this sentiment. But unless Gries means that we should never create targeted opportunities for our better students, I don't think that my agreement with both is a conflict.

Teaching honors students offers an instructor a great opportunity to experiment with new ideas. I may not want to risk trying a different way of teaching -- say, all exercise-driven, no lecture -- with a group of fifty students in a regular section. If things go wrong, I may have a hard time recovering the semester, and the average and weaker students are the ones with the most to lose. But in a smaller section of more capable students, I can try it out, confident that the students will help me smooth off the rough edges of my new approach and identify the places I need to rethink.

When I have a new idea worked out, I can then transfer it into my regular sections with more personal comfort -- and reasonable assurance that I won't be harming any students in the process of my own learning!

In the worst case of an approach "failing", honors students are better able to roll with the punches and recover with me. Every experienced teacher knows that there are students who will learn what they need in a course no matter what the instructor does, by working on their own, thinking about the important issues, and asking questions. This is the sort of student one usually sees in an honors section.

As Stuart points out, one of the joys of teaching an honors section is that you can discuss whatever you find interesting -- say, a book on computing or a computer scientist, or a current topic in computing that you can relate back to your course. To be honest, I do this in most of my courses anyway, from CS1 to senior project courses. I have to be aware of time constraints imposed by the curriculum, so I can't wax poetic any time I like. (But as Owen Astrachan told us on Day 1, we should not paralyze ourselves with the need to cover more, more, more!) Some of my favorite course sessions in Programming Languages, Algorithms, Object-Oriented Programming, Intelligent Systems, and, yes, CS1 have resulted directly from reading I've done outside of class -- and from attending sessions at OOPSLA, PLoP, and SIGCSE.

All of our students need a good experience. Teaching honors -- or as if you were teaching honors -- is one way to move in that direction.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

March 09, 2007 9:11 PM

SIGCSE Day 2: Read'n', Writ'n', 'Rithmetic ... and Cod'n'

Grady Booch, free radical

Back in 2005, Grady Booch gave a masterful invited talk to close OOPSLA, on his project to preserve software architectures. Since then, he has undergone open-heart surgery to repair a congenital defect, and it was good to see him back in hale condition. He's still working on his software architecture project, but he came to SIGCSE to speak to us as CS educators. If you read Grady's blog, you'll know that he blogged about speaking to us back on March 5. (Does his blog have permalinks to sessions that I am missing?) There, he said:

My next event is SIGCSE in Kentucky, where I'll be giving a keynote titled Read'n, Writ'n, 'Rithmetic...and Code'n. The gap between the technological haves and have-nots is growing and the gap between academia and the industries that create these software-intensive systems continues to be much lamented. The ACM has pioneered recommendations for curricula, and while there is much to praise about these recommendations, that academia/industry gap remains. I'll be offering some observations from industry why that is so (and what might be done about it).

And that's what he did.

A whole generation of kids has grown up not knowing a time without the Internet. Between the net, the web, iPods, cell phones, video games with AI characters... they think they know computing. But there is so much more!

Grady recently spent some time working with grade-school students on computing. He did a lot of the usual things, such as robots, but he also took a step that "walked the walk" from his OOPSLA talk -- he showed his students the code for Quake. Their eyes got big. There is real depth to this video game thing!

Grady is a voracious reader, "especially in spaces outside our discipline". He is a deep believer in broadening the mind by reading. This advice doesn't end with classic literature and seminal scientific works; it extends to the importance of reading code. Software is everywhere. Why shouldn't we read it to learn to understand our discipline? Our CS students will spend far more of their lives reading code than writing it, so why don't we ask them to read it? It is a great way to learn from the masters.

According to a back-of-the-envelope calculation by Grady and Richard Gabriel, 38 billion lines of new and modified code are created each year. Code is everywhere.

While there may be part of computer science that is science, that is not what Booch sees on a daily basis. In industry, computing is an engineering problem, the resolution of forces in constructing an artifact. Some of the forces are static, but most are dynamic.

What sort of curriculum might we develop to assist with this industrial engineering problem? Booch referred to the IEEE Computer Society's Software Engineering Body of Knowledge (SWEBOK) project in light of his own software architecture effort. He termed SWEBOK "noble but failed", because the software engineering community was unable to reach a consensus on the essential topics -- even on the glossary of terms! If we cannot identify the essential knowledge, we cannot create a curriculum to help our students learn it.

He then moved on to curriculum design. As a classical guy, he turned to the historical record, the ACM 1968 curriculum recommendations. Where were we then?

Very different from now. A primary emphasis in the '68 curriculum was on mathematical and physical scientific computing -- applications. We hadn't laid much of the foundation of computer science at that time, and the limitations of both the theoretical foundations and physical hardware shaped the needs of the discipline and thus the curriculum. Today, Grady asserts that the real problems of our discipline are more about people than physical limits. Hardware is cheap. Every programmer can buy all the computing power she needs. The programmer's time, on the other hand, is still quite expensive.

What about ACM 2005? As an outsider, Grady says, good work! He likes the way the problem has been decomposed into categories, and the systematic way it covers the space. But he also asks about the reality of university curricula; are we really teaching this material in this way?

But he sees room for improvement and so offered some constructive suggestions for different ways to look at the problem. For example, the topical categories seem limited. The real world of computing is much more diverse than our curriculum. Consider...

Grady has worked with SkyTV. Most of their software, built in a web-centric world, is less than 90 days old. Their software is disposable! Most of their people are young, perhaps averaging 28 years old or so.

He has also worked with people at the London Underground. Their software is old, and their programmers are old (er, older). They face a legacy problem like no other, both in their software and in their physical systems. I am reminded of my interactions with colleagues from Lucent, who work with massive, old physical switching systems driven by massive, old programs that no one person can understand.

What common theme do SkyTV and London Underground folks share? Building software is a team sport.

Finally, Grady looked at the ACM K-12 curriculum guidelines. He was so glad to see it, so glad to see that we are teaching the ubiquitous presence of computing in contemporary life to our young! But we are showing them only the fringes of the discipline -- the applications and the details of the OS du jour. Where do we teach them our deep ideas, the beauty and nobility of our discipline?

As he shifted into the home stretch of the talk, Grady pointed us all to a blog posting he'd come across called The Missing Curriculum for Programmers and High Tech Workers, written by a relatively recent Canadian CS grad working in software. He liked this developer's list and highlighted for us many of the points that caught his fancy as potential modifications to our curricula, such as:

  • Sometimes, working harder or longer won't get the job done.
  • Learn a scripting language!
  • Documentation is essential, but it must be tied to code.
  • Learn the patterns of organization behavior.
  • Learn about many other distinctly human elements of the profession, like meetings (how to stay awake, how to avoid them), hygiene (friend or foe?), and planning for the future.

One last suggestion for our consideration involved his Handbook of Software Architecture. There, he has created categories of architectures that he considers the genres of our discipline. Are these genres that our students should know about? Here is a challenging thought experiment: what if these genres were the categories of our curriculum guidelines? I think this is a fascinating idea, even if it ultimately failed. How would a CS curriculum change if it were organized exclusively around the types of systems we build, rather than mostly on the abstractions of our discipline? Perhaps that would misrepresent CS as science, but what would it mean for those programs that are really about software development, the sort of engineering that dominates industry?

Grady said that he learned a few lessons from his excursion into the land of computing curriculum about what (else) we need to teach. Nearly all of his lessons are the sort of things non-educators seem always to say to us: Teach "essential skills" like abstraction and teamwork, and teach "metaskills" like the ability to learn. I don't dismiss these remarks as unhelpful, but I don't think these folks realize that we do actually try to teach these skills; they are hard to learn, especially in the typical school setting, and so hard to teach. We can address the need to teach a scripting language by, well, adding a scripting language to the curriculum in place of something less relevant these days. But newbies in industry don't abstract well because they haven't gotten it yet, not because we aren't trying.

The one metaskill on his list that we really shouldn't forget, but sometimes do, is "passion, beauty, joy, and awe". This is what I love about Grady -- he considers these metaskills, not squishy non-technical effluvium. I do, too.

During his talk, Grady frequently used the phrase "right and noble" to describe the efforts he sees folks in our industry making, including in CS education. You might think that this would grow tiresome, but it didn't. It was uplifting.

It is both a privilege and a responsibility, says Grady, to be a software developer. It is a privilege because we are able to change the world in so many, so pervasive, so fundamental ways. It is a responsibility for exactly the same reason. We should keep this in mind, and be sure that our students know this, too.

At the end of his talk, he made one final plug that I must relay. He says that patterns are the coolest, most important thing that has happened in the software world over the last decade. You should be teaching them. (I do!)

And I can't help passing on one last comment of my own. Just as he did at OOPSLA 2005, before giving his talk he passed by my table and said hello. When he saw my iBook, he said almost exactly the same thing he said back then: "Hey, another Apple guy! Cool."


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

March 08, 2007 8:03 PM

SIGCSE Day 1: Media Computation BoF

A BoF is a "birds of a feather" session. At many conferences, BoFs are a way for people with similar interests to get together during down time in the conference to share ideas, promote an idea, or gather information as part of a bigger project.

Tonight I attended a BoF on the media computation approach I used in CS I last semester. The developers of this approach, Mark Guzdial and Barbara Ericson, organized the session, called "A Media Computation Art Gallery and Discussion", to showcase work done by students in the many college and high school courses that use their approach. You can access the movies, sounds, and video shown at this on-line gallery.

Keller McBride's color spray artwork

The picture to the right was an entry I submitted, an image generated by a program written by my CS I student Keller McBride. This picture demonstrates the sort of creativity that our students have, just waiting to get out when given a platform. I don't know how novel the assignment itself was, but here's the idea. Throughout the course, students do almost all of their work using a set of classes designed for them, which hide many of the details of Java image and sound. In one lab exercise, students played with standard Java graphics programming using java.awt.Graphics objects. That API gives programmers some power, but it has always seemed more complicated than is truly necessary. My 8-year-old daughter ought to be able to draw pictures, too! So, while we were studying files and text processing, I decided to try an assignment that blended files, text, and images. I asked my students to write an interpreter for a simple straight-line language with commands like this:

     line 10 20 300 400
     circle 100 200 50

The former draws a line from (10, 20) to (300, 400), and the latter a circle whose center point is (100, 200) and whose radius is 50.

This is the sort of assignment that lies right in my sweet spot for encouraging students to think about programming languages and the programs that process them. Even a CS I student can begin to appreciate this central theme of computing!
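For readers curious what such an interpreter might look like, here is a minimal sketch in Java. The class name, the fixed white drawing color, and the `num` helper are my own assumptions for illustration; the course's actual media-computation classes differ:

```java
import java.awt.Color;
import java.awt.Graphics;
import java.awt.image.BufferedImage;

// A sketch of an interpreter for the straight-line drawing language.
// Names here are hypothetical, not the course's actual classes.
public class PictureInterpreter {
    private final BufferedImage canvas;

    public PictureInterpreter(int width, int height) {
        canvas = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    }

    // Interpret one command, e.g., "line 10 20 300 400" or "circle 100 200 50".
    public void interpret(String command) {
        String[] t = command.trim().split("\\s+");
        Graphics g = canvas.getGraphics();
        g.setColor(Color.WHITE);
        if (t[0].equals("line")) {          // line x1 y1 x2 y2
            g.drawLine(num(t[1]), num(t[2]), num(t[3]), num(t[4]));
        } else if (t[0].equals("circle")) { // circle centerX centerY radius
            int cx = num(t[1]), cy = num(t[2]), r = num(t[3]);
            g.drawOval(cx - r, cy - r, 2 * r, 2 * r); // bounding box of the circle
        } else {
            throw new IllegalArgumentException("unknown command: " + t[0]);
        }
        g.dispose();
    }

    public BufferedImage getCanvas() { return canvas; }

    private static int num(String token) { return Integer.parseInt(token); }
}
```

A driver that reads the program file simply calls interpret on each line, and a student-invented command -- like the colorSpray described below -- becomes one more branch in the dispatch.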

Students were obligated to implement the line and circle commands, and then to create and implement a command of their own choosing. I expected squares and ovals, which I received, and text, which I did not. Keller implemented something I never expected: a colorSpray command that takes a density argument and then produces four sprays, one from each corner. I describe the effect as shaking a paint brush full of color and watching the color disperse in an arc from the brush, becoming less dense as the paint moves away from the shaker.

This was a CS1 course, so I wasn't expecting anything too fancy. Keller even discounted the complexity of his code in a program comment:

* My Color Spray method can only be modified by how many arcs it creates, not really fancy, but I did write it from scratch, and I think it's cool.

I do, too. The code uses nested loops, one determinate and one indeterminate, and does some neat arithmetic to generate the spray effect. This is real programming, in the sense that it requires discovering equations to build a model of something the programmer understands at a particular level. It required experimentation and thought. If all my students could get so excited by an assignment or two each semester, our CS program would be a much better place.

At the BoF, one attendee asked how he should respond to colleagues at his university who ask "Why teach this? Why not just teach Photoshop?" Mark's answer was right on the mark for one audience. Great artists understand their tools and their media at a deep level. This approach helps future Photoshop users really understand the tool they will be using. And, as another attendee pointed out, many artists bump up against the edges of Photoshop and want to learn a little programming precisely so they can create filters and effects that Adobe didn't ship in their software. The answer to this question for a CS or technical audience ought to be obvious -- students can encounter so many of the great ideas of computing in this approach; it motivates many students so much more than "convert Celsius to Fahrenheit"; and it is relevant to students' everyday experiences with media, computation, and data.

The CS education community owes Mark and Barb a hand for their work developing this wonderful idea through to complete, flexible, and usable instructional materials -- in two different languages, no less! I'm looking forward to their next step, a data structures course that builds on top of the CS 1 approach. We may have a chance to try it out in Fall 2008, if our faculty approve a third offering of media computation CS I next spring.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 08, 2007 4:36 PM

SIGCSE Day 1: Teaching Tips We Wish We'd Known...

Once again, things that could've been
brought to my attention yesterday!

-- Adam Sandler, in "The Wedding Singer"

After a long day driving and a short morning run, I joined the conference in time for the first set of parallel sessions. I was pleased to see a great way for me to start, a panel session called "Teaching Tips We Wish They'd Told Us Before We Started". That's a great title, but the list of panelists is like a Who's Who of big names from SIGCSE: Owen Astrachan, Dan Garcia, Nick Parlante, and Stuart Reges. These guys are reflective, curious, and creative, and one of the best ways to learn is to listen to reflective masters talk about what they do. The only shortcoming of the group is that they all teach at big schools, with large sections, recitations, and teaching assistants. But still!

The session was a staccato of one-minute presentations of teaching tips across a set of generic categories. Some applied to all of us: lecturing, writing exams, designing projects and homework assignments, and grading. Some applied only to those at big schools: managing staff and running recitation sections. Most were presented in an entertaining way.

(This is a bit long... Feel free at any time to skip down to the section on "meta" tips and be done.)

Lecturing

Some of these tips are ideas that I've found to work well in my own courses. I love to write code on the fly, and I've pretty much lost my reluctance to make a mistake in front of the students. Students are pretty good at understanding that everyone makes mistakes, and they like to see the prof make a mistake and then recover from it. They can learn as much from the reaction and recovery as from the particular code we write. Owen has long championed showing student code in class, but I have done so only under limited circumstances, such as code reviews of a big CS II program. I should try collecting some student code and using the document camera soon.

Nick's tips were more about behavior than practice. First, don't roll your eyes about any part of the course. If the instructor displays a bad attitude about some part of the curriculum, the student may well learn the same. What's worse in my mind is that students often learn to dismiss the instructor as limited in understanding or as a jingoistic advocate for particular ideas. That loss of respect diminishes the positive effect that an instructor can have on students.

My favorite tip from Nick was "a mistake I've made, recast as a tip": don't be lulled by your own outline. I've done this, too. I pull out the old lecture notes for the day, give them a quick scan, and think to myself, "Oh, yeah, I remember this." Then I go in the classroom and fall flat. Scanning your outline != real preparation. It is too easy to remember the surface feelings of a lecture without recapturing the details of the session, the depth of the topic, or the passion I need to make a case.

Dan's tips may reflect his relative youth. First, show video on the screen during the set-up time between sessions. He suggests videos from SIGGRAPH to motivate students to think beyond what they see in their intro courses to what computing can really be. Second, try podcasting your lectures. He pointed us to itunes.berkeley.edu, which allows you to go to Apple's iStore and download podcasts of lectures from a variety of Berkeley's courses, among them his own CS 61C Machine Structures. Podcasting may sound more difficult to do than it is... One of my colleagues has experimented with a simple digital voice recorder and the free, open-source editing software Audacity. This approach doesn't produce sound studio-quality casts, but it does make your work available to students in a new channel. Dan gave one extra tip: think about what audiocasting means for how you present. "Like this" and "here" lose their value when students can't see what you are doing!

Perhaps the best quip from this part of the panel came from the omnipresent Joe Bergin. Paraphrase: "The more finely crafted and perfected your lecture, and the message that you want to communicate, the worse you will do." Amen. That perfect lecture is perfect for you, not your students. Go into class with an idea or two to communicate, have a plan, and then be prepared for whatever may happen.

Writing Exams

Lots of good advice here. We all eventually learn important lessons in exam writing by doing it wrong, like to do every problem before putting it on an exam (every problem -- no problem is so straightforward that you can't be wrong about it, and you will), and not to overestimate what students can do in 50 minutes (guilty as charged). But there were some nice nuggets that expose the idiosyncrasies of these good teachers:

  • Bring your laptop to the exam. (Dan) You can use it to display the time left in the period, or to display real-time corrections and clarifications.

  • Write the rubric for your question first. (Owen) You'll learn a lot about what the question should be doing, which helps you write it better. (If this isn't test-driven programming, I don't know what it is!)

  • When writing a CS1 exam, allocate 60% of the points to mechanical questions, 30% to problem solving, and 10% to a "super hard" question. (Stuart) The first part of the exam gives every student a chance to succeed through hard work and practice; the second gives all students a chance to demonstrate the higher-order skills that go beyond mechanical tasks; and the third challenges and engages the best students without punishing those students who haven't reached that level of understanding yet.

  • Use "program mystery" questions. (Stuart) These questions are like mental calisthenics that ask students to understand algorithmic ideas beyond the surface. I liked his example:

    What is the effect of the expression b = (b == false) for the boolean b?

  • Distribute sample questions. (Stuart) When you do, you'll get an extra week of learning out of your students. They take sample exams seriously enough to do them!

  • Omit the backstory. (Nick) The complex set-up may work on a homework assignment, where students have time to think and chance to ask for clarifications. But on an exam, the story is just cognitive overload, and as likely to interfere with student performance as to help.

Someone in the audience offered a practice I sometimes use. Let students take home their graded exam and correct their answers, and then give them 25% credit toward their grade. I usually only do this when exam scores are significantly lower than I or my students expected, as a way to build up morale a bit and to be sure that students have an incentive to plug the gaps in their understanding with a little extra work.

Designing Projects and Homework Assignments

Don't belittle student work. Even if you mean your comment as separate from the person, students see the belittling of their work as belittling them. And definitely take care how you comment on student work in public! Work hard to provide constructive feedback.

Use real examples wherever possible. Watch for the real world "intruding" on old standards. Owen gave the example of his old, boring "count the occurrences of words in a text file" problem, which has now become hip in the form of sites such as TagCrowd.

Make every assignment due at 11:59 PM. This eliminates the sort of begging for extensions during the daytime that often leads to a loss of dignity, while giving students extra time in the evening when they have one last chance to work. But even if the students work right up to the deadline, they will still be "on cycle" for sleep. I've been in the habit of using 8:00 AM for my deadlines in recent semesters, but I think I might try midnight to see if I notice any effect on student behavior.

Do not assign too many math problems. ("Math appeals to a certain tribe. You are probably in the tribe, so you don't 'hear the accent'.")

Grading

Use an absolute grading scale, with no curve. (I do that.) At the end of the semester, bump up grades at the margins if you think that gives a fairer grade. (I do that.) Never bump grades down. (I do that.) Allow performance on later exams to "clobber" poor performance on earlier exams. I do this, but with a squooshy "I'll take that into account" policy.

Stuart reminded us not to kill our CS1 students. If you are going to bump students' grades upward, do it right away. That takes away some of the uncertainty and fear associated with that grade, and they are already feeling enough uncertainty and fear. When you do mess up, say, with an exam that is too hard or too long (or both), be honest, admit your mistake, and make amends. Students will forgive almost anything, if you are honest and fair.

From the audience, Rich Pattis offered a bit of symmetry on grading grace periods. Many instructors allow students to submit assignments late, with a penalty. Rich suggests that we offer bonuses for early submission. In his experience, even a small bonus encourages many students to start doing their work earlier. Implicit in this suggestion is that students who start early aiming for the bonus but don't finish in time still have the rest of the period to get the work done! I'm adding this to my toolbox effective today.

I won't say much about the tips for managing staff and designing recitation sections. But the big lessons here were to value people. Undergraduates are an underutilized yet powerful resource. (We've found this in our CS I, II, and III labs, too.) Empower your TAs. Be human with them.

Going Meta

The last section was general advice, going meta. Most teachers in most disciplines can learn from these tips.

Team teach. An instructor can learn so much from working with another teacher and watching another teacher work. Even better is to debrief with one another. (If this isn't pair programming, I don't know what it is!)

Make notes as you learn how to teach your course. Keep a per-lecture diary of ideas that you have in class. I have been doing this for a decade or so. When I leave a session with a new idea or a feeling that I could have done better, I like to make an entry in a log of ideas, tagged with the session number, or the assignment or exam number. The next time I teach the course, I can review this log when prepping the course, to help me plan ahead for last time's speed bumps, and then use it when prepping the session or assignment in question during the semester.

The rest of these meta tips are from Owen Astrachan. I admire his concrete ideas about teaching, and his abstract lessons as well.

Stop being driven by the need to cover material. Your students won't remember most of the details anyway. How many details do you remember from any particular course you took as an undergrad? Breaking this habit frees you to do more.

Owen gave three tips in the same vein. Know your students' names. Eat where your students eat occasionally. Cultivate relationships with non-CS faculty. The vein: People matter, and being a person makes you a better teacher.

Learn something new: a whole new area, such as computational genomics; a new hot topic in computing, such as Ruby on Rails or Ajax; or even a relatively little technique you don't already know, such as suffix arrays. Learning new things reminds you what it's like to learn something new, which is where your students are every day. Learning new things lets you infuse your teaching with fresh ideas, maybe ideas that are more relevant to today's students. Learning new things lets you infuse your teaching with fresh attitude, which is an essence of good teaching.

----

From this panel I can conclude at least two things. One is that I'm getting old... I've been teaching CS long enough to have discovered many of these practices in my own classroom, often from my students. The other is that I still have plenty to learn. Some of these ideas may show up in one of my courses soon!

If you'd like to learn more of these teaching tips, or add your own to the collective wisdom, check out the community wiki via this link.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 21, 2007 5:50 PM

ChiliPLoP 2007 Redux

Our working group had its most productive ChiliPLoP in recent memory this year. The work we did isn't ready for public consumption, so I can't post a link just yet, but I am hopeful that we will be able to share our results with interested educators soon enough. For now, a summary.

This year, we made substantial progress toward producing a well-documented resource for instructors who want to teach Java and OOP. As our working paper begins:

The following set of exercises builds up a simple application over a number of iterations. The purpose is to demonstrate, in a small program, most of the key features of object-oriented programming in Java within a period of two to three weeks. The course can then delve more deeply into each of the topics introduced, in whatever order the instructor deems appropriate.

An instructor can use this example to lay a thin but complete foundation in object-oriented Java for an intro course within the first few weeks of the semester. By introducing many different ideas in a simple way early, the later elements of the course can be ordered at the instructor's discretion. So many other approaches to teaching CS 1 create strict dependencies between topics and language constructs, which limits the instructor's approach over the course of the whole semester. The result is that most instructors won't adopt a new approach, because they either cannot or do not want to be tied down for the whole semester. We hope that our example enables instructors to do OO early while freeing them to build the rest of their course in a way that fits their style, their strengths and interests, and their institution's curriculum.

Our longer-term goal is that this resource serve as a good example for ourselves and for others who would like to document teaching modules and share them with others. By reducing external dependencies to a minimum, such modules should assist instructors in assembling courses that use good exercises, code, and OO programming practice.

... but where are the patterns? Isn't a PLoP conference about patterns? Yes, indeed, and that is one reason that I'm more excited about the work we did this week than I have been in a while. By starting with a very simple little exercise, growing progressively into an interesting simulation via short, simple steps, we have assembled both a paradigmatic OO CS 1 program and the sequence of changes necessary to grow it. To me, this is an essential step in identifying the pattern language that generates the program. I may be a bit premature, but I feel as if we are very close to having documented a pattern sequence in the Alexandrian sense. Such a pattern sequence is an essential part of a pattern-oriented approach to design, and one that only a few people -- Neil Harrison and Jim Coplien -- have written much about. And, like Alexander's idea of pattern diagnosis, pattern sequences will, I think, play a valuable role in how we teach pattern-directed design. My self-assigned task is to explore this extension of our ChiliPLoP work while the group works on filling in some details and completing our public presentation.

One interesting socio-technical experiment we ran this week was to write collaboratively using a Google doc. I'm still not much of a fan of browser-based apps, especially word processors, but this worked out reasonably well for us. It was fast, performed autosaves in small increments, and did a great job handling the few edit conflicts we caused in two-plus days. We'll probably continue to work in this doc for a few more weeks, before we consider migrating the document to a web page that we can edit and style directly.

Two weeks from today, the whole crew of us will be off to SIGCSE 2007, which is an unusual opportunity for us to follow up our ChiliPLoP and hold ourselves accountable for not losing momentum. Of course, two weeks back home like this would certainly wipe my mind clear of any personal momentum I have built up, so I will need to be on guard!


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

February 20, 2007 11:58 PM

A State Conformable to Nature

In the beginning, I blogged on the danger of false pride and quoted the Stoic philosopher Epictetus. Now I have encountered another passage from Epictetus, at the mind-hacking site Mind Hacks:

When you are going about any action, remind yourself what nature the action is. If you are going to bathe, picture to yourself the things which usually happen in the bath: some people splash the water, some push, some use abusive language, and others steal. Thus you will more safely go about this action if you say to yourself, "I will now go bathe, and keep my own mind in a state conformable to nature." And in the same manner with regard to every other action. For thus, if any hindrance arises in bathing, you will have it ready to say, "It was not only to bathe that I desired, but to keep my mind in a state conformable to nature; and I will not keep it if I am bothered at things that happen."

The notion of a "state conformable to nature" was central to his injunction against false pride, and here it remains the thoughtful person's goal, this time in the face of all that can go wrong in the course of living. This quote also resonates with me, because, just as I am inclined toward a false pride, I have a predisposition toward letting small yet expectable hindrances interfere with my frame of mind. Perhaps predisposition is the wrong word; perhaps it's just a bad habit.

As is often the case for me, a second or third positive reference is all I need to commit to reading more. I now plan to read the Discourses of Epictetus in the coming weeks. We even have a copy on our bookshelf at home. (My wife's more classical education proves useful to me again!)


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

February 19, 2007 12:03 AM

Being Remembered

Charisma in a teacher is not a mystery or a nimbus of personality, but radiant exemplification to which the student contributes a corresponding radiant hunger to becoming.

-- William Arrowsmith

On a day when I met with potential future students and their parents, several former students of mine sent me links to this xkcd comic:

xkcd -- my god, it's full of cars

I laughed out loud twice while reading it, first at the line "My god, it's full of cars." and then at the final panel's nod to a postmodern god. The 'cars' line was a common subject line in the messages I received.

As these messages rolled in, I also felt a great sense of honor. Students whom I last saw two or seven or ten years ago thought enough of their time studying here to remember me upon reading this comic and then send me an e-mail message. All studied Programming Languages and Paradigms with me, and all were affected in some lasting way by their contact with Scheme. One often hears about how a teacher's effect on the world is hard to measure, like a ripple on the water sent out into the future. I am humbled that some really neat people out in this world recall a course they took with me.

Of course, I know that a big part of this effect comes from the beauty of the ideas that I am fortunate enough to teach. Scheme and the ideas it embodies have the power to change students who approach it with a "radiant hunger to becoming". I am merely its channel. That is the great privilege of scholars and teachers, to formulate and share great ideas, and to encourage others to engage the ideas and make them grow.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

February 16, 2007 8:53 PM

A Partial Verdict

I think the verdict is in on the nature of my students this semester: The class is a mixed bag. Many students are quietly engaged. The rest are openly skeptical. It's not a high-positive energy crowd that can drive a semester forward, but I think there is a lot of hope for where many of these students can end up. The rest will likely prefer merely to survive to the end of the course and then move on to what really interests them. I hope that all can enjoy at least part of the ride.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 02, 2007 6:13 PM

Recursing into the Weekend

Between meetings today, I was able to sneak in some reading. A number of science bloggers have begun to write a series of "basic concepts" entries, and one of the math bloggers wrote a piece on the basics of recursion and induction. This is, of course, a topic well on my mind this semester as I teach functional programming in my Programming Languages course. Indeed, I just gave my first lecture on data-driven recursion in class last Thursday, after having given an introduction to induction on Tuesday. I don't spend a lot of time on the mathematical sort of recursion in this course because it's not all that relevant to the processing of computer programs. (Besides, it's not nearly as fun!)
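The idea in a nutshell: in data-driven recursion, the shape of the code follows the shape of the data. A tiny sketch (my own, not taken from the lecture notes):

```scheme
;; A list is either empty or a pair, so a procedure that processes
;; a list has one branch for each case:
(define list-sum
  (lambda (lst)
    (if (null? lst)
        0                              ; base case: the empty list
        (+ (car lst)
           (list-sum (cdr lst))))))    ; recursive case: a pair

(list-sum '(1 2 3 4))                  ; → 10
```

The structure of the data type dictates the structure of the procedure, which is exactly what makes this kind of recursion teachable as a pattern rather than a trick.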

This would probably make a great "basic concepts in CS" post sometime, but I don't have time to write it today. But if you are interested, you can browse my lecture notes from the first day of recursive programming techniques in class.

(And, yes, Schemers among you, I know that my placement of parentheses in some of my procedures is non-standard. I do that in the first session or two so that students can see the if-expression that mimics our data type stand out. I promise not to warp their Scheme style with this convention much longer.)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

January 25, 2007 4:52 PM

It's Not About Me

Our dean is relatively new in his job, and one of the tasks he has been learning about is fundraising. It seems that this is one of the primary roles that deans and other top-level administrators have to fill in these days of falling state funding and rising costs.

This morning he gave his department heads a short excerpt from the book Asking: A 59-Minute Guide to Everything Board Members, Volunteers, and Staff Must Know to Secure the Gift. Despite the massive title, apparently each chapter of the book is but a few pages, focused on a key lesson. The excerpt we received is on this kernel: Donors give to the magic of an idea. Donors don't give to you because you need money. Everybody needs money. Donors give because there is something you can do.

For some reason, this struck me as a lesson I have learned over the last few years in a number of different forms, in a number of different contexts. I might summarize the pattern as "It's not about me." Donors don't give because I need; they give because I can do something, something that matters out there. In the realm of interpersonal communication, the hearer is the final determinant of what is communicated. Listeners don't hear what I say; they hear what they understand me to have said. The blog Creating Passionate Users often talks about how selling my book or software is about empowering my users -- not about me, or any of the technical things that matter to me. The same applies to teachers. While in an important sense my Programming Languages course is about the content we want students to learn, in a very practical sense the course is about my students: what they need and why, how they learn, and what motivates them. Attending only to my interests or to the antiseptic interests of the "curriculum" is a recipe for a course almost guaranteed not to succeed.

Let's try this on. "Students don't learn because you think they need something. They learn because there is something they can do with their new knowledge." They learn because of the magic of an idea.

That sounds about right to me. Like any pattern taken too far in a new context, this one fails if I overreach, but it does a pretty good job of capturing a basic truth about teaching and learning. Given the many forms this idea seems to take in so many contexts, I think it is just the sort of thing we mean by a pattern.


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Patterns, Teaching and Learning

January 23, 2007 8:09 AM

Class Personality and New Ideas

People you've got the power over what we do
You can sit there and wait
Or you can pull us through
Come along, sing the song
You know you can't go wrong

-- Jackson Browne, "The Load-Out"

Every group of students is unique. This is evident every time I teach a course. I think I notice these differences most when I teach Programming Languages, the course I'm teaching this semester.

In this course, our students learn to program in a functional style, using Scheme, and then use their new skills to build small language interpreters that help them to understand the principles of programming languages. Because we ask students to learn a new programming style and a language very different from any they know when they enter the course, this course depends more than most on the class's willingness to change their minds. This willingness is really an attribute of each of the individuals in the class, but groups tend to develop a personality that grows out of the individual personalities which make it up.

The last time I taught this course, I had a group that was eager to try new ideas. These folks were game from Day 1 to try something like functional programming. Many of them had studied a language called Mumps in another course, which had shown them the power that can be had in a small language that does one or two things well. Many of them were Linux hackers who appreciated the power of Unix and its many little languages. Scheme immediately appealed to these students, and they dove right in. Not all of them ended up mastering the language or style, but all made a good faith effort. The result was an uplifting experience both for them and for me. Each class session seemed to have a positive energy that drove us all forward.

But I recall a semester that went much differently. That class of students was very pragmatic, focused on immediate skills and professional goals. While that's not always my orientation (I love many ideas for their own sake, and look for ways to improve myself by studying them), I no longer fault students who feel this way. They usually have good reasons for having developed such a mindset. But that mindset usually doesn't make for a very interesting semester studying techniques for recursive programming, higher-order procedures, syntactic abstractions, and the like. Sure, these ideas show up -- increasingly often -- in the languages that they will use professionally, and we can make all sorts of connections between the ideas they learn and the skills they will need writing code in the future. It's just that without a playful orientation toward new ideas, a course that reaches beyond the here-and-now feels irrelevant enough to many students to seem an unpleasant burden.

That semester, almost every day was a chore for me. I could feel the energy drain from my body as I entered the room each Tuesday and Thursday and encountered students who were ready to leave before we started. Eventually we got through the course, and the students may even have learned some things that they have since found useful. But at the time the course was like a twice-weekly visit to the dentist to have a tooth pulled.

In neither of these classes was there only the one kind of student. The personality of the class was an amalgam, driven by the more talkative members or by the natural leaders among the students. In one class, I would walk into the room and find a few of them discussing some cool new thing they had tried since the last time we met; in the other, they would be discussing the pain of the current assignment, or a topic from some other course they were enjoying. These attitudes pervaded the rest of the students and, at least to some extent, me. As the instructor, I do have some influence over the class's emotional state of mind. If I walk into the room with excitement and energy, my students will feel that. But the students can have the same effect. The result is a symbiotic process that requires a boost of energy from both sides every class period.

We are now two weeks into the new semester, and I am trying to get a feel for my current class. The vocal element of the class has been skeptical, asking lots of "why?" questions about functional programming and Scheme alike. So far, it hasn't been the negative sort of skepticism that leads to a negative class, and some of the discussion so far has had the potential to provoke their curiosity. As we get deeper into the meat of the course, and students have a chance to write code and see its magic, we could harness their skepticism into a healthy desire to learn more.

Over the years, I've learned how better to respond to the sort of questions students ask at the outset of the semester in this course. My goal is to lead the discussion in a way that is at the same time intellectually challenging and pragmatic. I learned long ago that appealing only to the students' innate desire to learn abstract ideas such as continuations doesn't work for the students in my courses. In most practical ways, the course is about what they need to learn, not about what I want to blather on about. And as much as we academics like papers such as Why Functional Programming Matters -- and I do like this paper a lot! -- it is only persuasive to programmers who are open to being persuaded.

But I've also found that pandering to students by telling them that the skills they are learning can have an immediate effect on their professional goals does not work in this sort of course. Students are smart enough to see that even if Paul Graham got rich writing ViaWeb in Lisp, most of them aren't going to be starting their own companies, and they are not likely to get a job where Scheme or functional programming will matter in any direct way. I could walk into class each day with a different example of a company that has done something "in the real world" with Scheme or Haskell, and at the end of the term most students would have perceived only thirty isolated and largely irrelevant examples.

This sort of course requires balancing these two sets of forces. Students want practical ideas, ideas that can change how they do their work. But we sell students short when we act as if they want to learn only practical job skills. By and large they do want ideas, ideas that can change how they think. I'm better at balancing these forces with examples, stories, and subtle direction of classroom discussion than I was ten or fifteen years ago, but I don't pretend to be able to predict where we'll all end up.

Today we begin a week studying Scheme procedures and some of the features that make functional programming different from what they are used to, such as first-class procedures, variable arity, higher-order procedures, and currying. These are ideas with the potential to capture the students' imagination -- or to make them think to themselves, "Huh?" I'm hopeful that we'll start to build a positive energy that pulls us forward into a semester of discovery. I don't imagine that they'll be just like my last class; I do hope that we can be a class which wants to come together a couple of times every week until May, exploring something new and wonderful.
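A tiny taste of the week ahead, in a sketch of my own (not the examples we'll use in class), showing how a few of these ideas fit together:

```scheme
;; First-class procedures: a procedure can be returned as a value.
;; curried-add "remembers" x and waits for y.
(define (curried-add x)
  (lambda (y) (+ x y)))

(define add5 (curried-add 5))
(add5 3)                        ; → 8

;; Higher-order procedures: map takes a procedure as an argument.
(map add5 '(1 2 3))             ; → (6 7 8)

;; Variable arity: + accepts any number of arguments.
(+ 1 2 3 4)                     ; → 10
```

Seeing a procedure treated as an ordinary value for the first time is exactly the kind of moment that can produce either imagination or "Huh?" -- which is why the framing matters so much.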


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

January 19, 2007 3:45 PM

Three Notes for a Friday

Three thoughts from recent reading. From a professional standpoint, one is serious, one is light, and one is visionary. You'll have to decide.

Understanding the New

In their New Yorker article Manifold Destiny, on the social drama that surrounded Grigory Perelman's proof of the Poincaré conjecture, Sylvia Nasar and David Gruber write:

On September 10, 2004, more than a year after Perelman returned to St. Petersburg, he received a long e-mail from Tian, who said that he had just attended a two-week workshop at Princeton devoted to Perelman's proof. "I think that we have understood the whole paper," Tian wrote. "It is all right."

Mathematics, the theoretical parts of computer science, and theoretical physics are different from most other fields of human knowledge in a way that many folks don't realize. The seminal advances in these areas create a whole new world. Even the smartest, most accomplished people in the field may well not understand this new world until they have invested a lot of time and effort studying the work.

This should encourage those of us who don't understand a new idea immediately the first time through.

Dressing the Part

In Why I Don't Wear a Suit and Can't Figure Out Why Anyone Does!, America's most X-generation sports owner, Mark Cuban, writes:

Someone had once told me that you wear to work what your customers wear to work. That seemed to make sense to me, so I followed it, and expected those who worked for me to follow it as well.

This is good news for college professors. If you believe the conventional wisdom these days, our customers are our students, and their dress is the ultimate in casual and comfortable. I can even wear shorts to class with little concern that my students will care.

But what about all of our other customers -- parents, the companies that hire our students, the state government, the taxpayers? They generally expect something more, but even still I think that academics are unique among professionals these days in that almost everyone cuts us slack on how we dress. Or maybe no one thinks of us as professionals...

Now that I am a department head, I have made adjustments in how I dress, because my audience really is more than just my students. I meet with other faculty, higher-level administrators, and members of the community on a regular basis, and where they have expectations I try to meet or exceed them. Cuban is basically right, but you have to think of "customer" in a broader sense. It is "whoever is buying my immediate product right now", and your product may change from audience to audience. The dean and other department heads are consuming a different product than the students!

Controlling the Present and Future

Courtesy of James Duncan Davidson, another quote from Alan Kay that is worth thinking about:

People who are really serious about software should make their own hardware.

Alan Kay has always encouraged computer scientists to take ownership of the tools they use. Why should we settle for the word processor that we are given by the market? Or the other basic tools of daily computer use, or the operating system, or the programming languages that we have been handed? We have the power to create the tools we use. In Kay's mind, we have the obligation to make and use something better -- and then to empower the rest of the users, by giving them better tools and by making it possible for them to create their own.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 15, 2007 11:45 AM

Getting Worse in Order to Get Better

Tiger Woods drives a ball

Tiger Woods recently won the PGA's Player of the Year for the eighth time in his ten-year professional career. Since 1990, no other player has won the award more than twice. He is widely considered the most dominant athlete in any sport in the world -- which is saying a lot when you consider the amazing runs that tennis's Roger Federer and cycling's Lance Armstrong have had during Tiger's own run.

Woods is great, but he also stands out for something else: his remarkable efforts to get better. Now, most pros are continually working on their games, trying to improve their skills. Woods has taken this effort to a new level, by completely rebuilding his swing twice during his professional career. Sportswriter Leonard Shapiro describes Woods's most recent reconstruction, back in 2004 and early 2005. In 2004, a lot of commentators thought that Tiger had perhaps peaked, as other players on the Tour had been winning the majors and left him as just another competitor. Tiger wasn't striking the ball as well, and his tee shots were errant. He seemed to have gotten worse. But suddenly in 2005, he returned to the top of the leaderboard with a vengeance and had one of the all-time great years in PGA history.

You see, while Tiger was seemingly getting worse, he was actually getting better.

A golf swing is a complex mechanical act. There is very little margin of error between a great shot and a merely good shot. At the top level of golf, the standard deviation is even smaller. When Tiger decided that he had reached a plateau in his game with his current swing, he knew that he had to develop an entirely new swing. And while he was building that new swing, using it on every shot for nearly eighteen months, he performed worse than he had with the old swing. Only after all that repetition, feedback, and adjustment did he have the swing he needed to regain his peak. And his new peak was even higher than the old one.

This progression from plateau to valley to new peak is not unique to Tiger or to golf swings. Any complex skill that depends on muscle memory requires the sort of repetition and feedback that usually results in degraded performance during the learning phase. The key to improvement in this phase is patience. Learning a new skill takes time, while we train our brains and bodies to execute their tasks in an accurate, repeatable way.

Mental skills, even ones that we carry out with more conscious attention, have this feature. Sometimes, we can make only small incremental improvements from our current skill base, but an effort to learn something radically different can alter our skill base in a qualitative way -- and result in radical improvements in our performance.

Many programmers know this. The last couple of years, a common new year meme among bloggers has been to learn a new language. Often the language is something very different from their daily tools of Java and XML and C. Haskell, Ruby, Scheme, and Smalltalk seem to show up on peoples' lists frequently precisely because they are so different. They offer the promise of a radical improvement in skill because to master them requires looking at problems and solutions in a radically different way. You can't speak fluent Haskell or Scheme without coming to grips with a functional mindset. Even if list comprehensions, continuations, and tail recursion are not part of the programming language you use in your day job, understanding them can help you use that language in a new way. And who knows, those features may ultimately make their way into your day job's language -- either this one, or the next one.
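Take tail recursion, one of the features just mentioned, in a quick sketch of my own:

```scheme
;; Tail recursion: the recursive call is the entire result of the
;; procedure, so a Scheme system can run it in constant stack space,
;; just like a loop. The accumulator acc carries the work done so far.
(define (sum-to n acc)
  (if (zero? n)
      acc
      (sum-to (- n 1) (+ acc n))))

(sum-to 1000000 0)   ; → 500000500000
```

A programmer who internalizes this pattern sees loops differently ever after, whatever language the day job demands.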

Martin Fowler writes about his own experience crossing the improvement ravine on the way to new mastery. He even quotes Gerald Weinberg, whom I've mentioned occasionally since I first began blogging. Martin points out a couple of key insights. One is that sometimes the new thing doesn't work, at least for you. Worse, there is a Halting Problem complicating matters: you can't be sure if the technique has failed for you or if you just need to stick with it a little longer. The best hope for circumventing this problem is to have a good teacher working with you, whether in a classroom or one on one. Tiger had his coach, Hank Haney, to help him assess the state of his swing and decide to keep going, even during the darkest days of 2004. Working with colleagues or a trusted friend can also serve this purpose.

I think another key to this process is a sort of courage. It's hard to be patient while you're failing, or while you're struggling with a new idea. In this context, your teacher, coach, or friend plays the important role of support system, encouraging you to stick with the process. As with almost anything, doing it over and over helps us to have the courage we need. Tiger's 2004 rebuild was the second such publicized episode of his career, and I'm guessing that having succeeded in the first helped him to remain steadfast during the second. One requires less courage as one feels less fear. But I think that I will always feel a real fear anytime I step way outside my expertise in an effort to get better. Maybe even a great one such as Tiger does, too. Courage will always play a role.

This notion of getting worse for a while in order to get better is on my mind right now because I have just begun a new semester in which I will try to teach Scheme and functional programming to a bunch of students who probably feel pretty comfortable in their imperative programming skills with Java, C++, and Ada. I have to help them see that mastering such a different new language, and especially a new style of programming, will require that they feel awkward for a while. The tried-and-true syntax, operators, idioms, and patterns no longer seem to work. That is scary. But it's worth going through this scary phase, practicing "the real thing" as much as they can. With practice and time, they will soon learn the new syntax, master the new operators, appreciate the new idioms and develop some of their own, and finally discover the new patterns that will make them better programmers than they were before.

My memory is always drawn back to two former students in particular who approached this in a Tiger-like fashion. They went home every night for three weeks or so and just tried to write good functional programs in Scheme. They read my examples and those from the textbook; they experimented with small modifications and then large ones, eventually trying to write whole programs. For a while this was painful, because their programs broke and my evaluation of their homework didn't result in the easy As to which they had become accustomed. But suddenly -- or so it seemed -- they got it. They were excellent functional programmers. The rest of their Programming Languages course went smoothly, as they could focus on the language concepts without having to worry about whether they could make Scheme work for them.

If the greatest golfer in the world, the most dominant athlete of a generation, can take the time to get worse for a while in order to get better, I think we all can. But do we have the patience and courage to take the chance and stay the course? And do we have the teachers and coaches who can help us along the way?


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

January 12, 2007 6:14 PM

The Verdict Is In

You may recall a risk I took a few weeks back -- I returned an exam to my CS 1 students on which the grades were disappointing to students, and me, and then later in the period asked them to assess the course and my teaching. No one can say that I tried to stack the deck, either by action or inaction!

I received my feedback today, and all I can say is that the trust I placed in my students' judgment was well-deserved. The assessments were mostly positive -- actually, as good as I've ever had. Given the situation, I doubt that they were artificially high, and I can only hope that they reflect what my students thought. If they do, then the course was a success, both the media computation approach and my implementation of it. That is good news for the students, and for our department, which could use a solid cadre of new majors and minors moving through our program.

One question remains: Are these students prepared well enough for their subsequent courses? That is the ultimate criterion for success, and we won't know that until they have taken a few more courses. I'll be keeping my eyes open to their performance in coming terms.

Another question remains: Will the media computation approach succeed in other instructors' hands, or even in my hands after the initial rush of excitement I had teaching it the first time? The approach is in use at several other schools, so there is some evidence independent of our institution. We'll see how things go in coming semesters, with other instructors here trying the approach. (It's in place for at least one more semester, this one.)

I'm also curious to see how using Python in the course, or some other lighter-weight interactive language, would work. I'm not sure when, if ever, that might happen here. Curriculum, especially the first-year curriculum, is a hot potato in my department. But I think Ruby or Python might be a great way to appeal to an even broader audience, without losing the hard-core CS-leaning students.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 22, 2006 5:10 PM

I'm Not Being Funny Here

If you surveyed my students, I doubt that "funny guy" is one of the first phrases they would use to describe me. It's not that I am humorless in the classroom; it's just that comedian isn't a natural part of my public persona. (At home, though, I am probably sillier than the average professor.) But I do try to take advantage of opportunities to have fun in class, whether with the material itself or with the many circumstances that arise whenever people -- and technology -- come together. I think that this sort of humor can be used to good effect in the classroom by any instructor. Of course, as with any skill, it takes some practice before most become very good at it.

Whether we are natural comedians or not, I think we can all learn something from comedians. I was recently reading an article by a guy who had taken a class on how to be a stand-up comic. The way these guys learned the craft was largely by trial-and-error: They wrote a short act and then presented it in front of their teacher and classmates. The criticism could be brutal, but the feedback helped them learn lessons about writing and delivery. We learn to program in much the same way, but we can take one of our best critics wherever we go: the compiler. I think we could help our students learn a lot more about programming and software design by adopting the stand-up approach more often: having students present their code to instructors and fellow students, to receive feedback from real people. This is a natural part of a studio approach to instruction, which can be adapted to classes of different sizes in a variety of ways. Jim Waldo of Sun talked about the role of one-on-one mentorship of this sort in his Essays presentation at OOPSLA 2006.

But that article also suggested another way that we could learn from comics. The author asked Eddie Brill, who warms up David Letterman's Late Show audiences, what all the great comedians had in common. He answered, "They're honest, they're vulnerable, and they're not looking for approval." I think that two of these characteristics are no-brainers as essential to good teaching.

An instructor must be honest. Students sense insincerity better than many people realize, and they do not like being misled in the course of their learning. This might seem difficult, considering that most CS instruction requires simplification in order to teach most concepts. If I feel I have to explain all of the subtleties of public, protected, and private when teaching Java to CS 1 students, I will lose them before they can really appreciate how much fun programming is. But I don't think being honest means dumping details on students unfiltered or unstructured by our expertise; it means not teaching them facts as dogma that must necessarily be undercut later. I try not to overstate the reach of the rules I teach my students, and wherever possible I let them know that they are learning a simplification that we will enrich later. College students -- and I think younger kids, too -- are savvy enough to understand these distinctions, and they appreciate being involved in the unfolding of knowledge.

An instructor should never enter a classroom seeking approval. My students are not in my class to validate me, or to make me feel better about myself, or to boost my confidence. They're there to learn, to become part of a scholarly community. If I enter the room seeking approval, it will distort everything I do, from the material I choose to teach, to how I teach it, to how I evaluate their progress. The class isn't about me, it's about them -- or perhaps more accurately, about us and a set of ideas and skills, as we help them grow in what they know and can do. A comic who reaches out for the laugh will almost always be left wanting. Instructors who reach out for applause will either be left wanting by a disappointed class or will leave the room with a hollow and fleeting victory, having cheated the students.

That brings me to the third characteristic: vulnerability. Doesn't this contradict the previous advice not to seek approval? Even if it doesn't, should an instructor really be vulnerable? I don't think that being vulnerable is a contradiction at all, which is why the best comics can have both characteristics. Vulnerability is about being invested in the process, about caring what happens, about being open rather than closed. Even when considered this way, I know that some of my colleagues would disagree with the assertion that a CS instructor ought to be vulnerable. "That's one way to teach, I suppose," they would say, with almost but not quite a sneer in their voices. "But my style is built on authority, not vulnerability." I think I'm willing to concede that one can teach effectively without being vulnerable in a touchy-feely sense, but I maintain that the best instructors are ultimately vulnerable in the sense that they are invested in their students' learning, that they care deeply about what their students come to know. Among the gruffest doubters of my assertion, the ones who are good teachers have this trait. And I know that I certainly do, or I would never feel like this -- or like this.

Now I know that I can watch Seinfeld and All in the Family with a clean conscience -- they can help me get better in the classroom.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 21, 2006 2:49 PM

User Documentation and Instructional Design

Yesterday while clearing out some of the paper that has built up in my office over the course of the semester, I was reading through the conference copy of several sets of OOPSLA tutorial notes. One that grabbed my interest was from a tutorial that we had to cancel for lack of registration, on how to write user guides and tutorial documents. OOPSLA caters to software developers, and we knew at the time we accepted this one that the audience might be slim. But as I thought about the notes I was sad that we didn't attract more of an audience for this tutorial. It would have been useful to a lot of folks.

I was struck by the fact that writers of documentation for users face many of the same issues that teachers face. The first part of the tutorial dealt with users and learning: what they need to learn, and how they can learn it. The writer needs to remember that the user doesn't see the system in the same way as the developer, and may in fact be a different sort of person entirely. This echoes the recent "our students aren't like us" theme in several of my posts. The tutorial then proceeds to give concrete advice on such topics as:

  • the differences between learning by reading and learning by doing
  • the cognitive burden shouldered by learners
  • the distinction between dedicated learners and midtask learners

I think that I sometimes assume that my students will be dedicated learners, focused on the ideas that I am trying to convey, rather than midtask learners, focused on getting something done. But I suspect that many students do a lot of their learning just-in-time, while attempting a programming assignment or homework problem. Midtask learners approach the learning task differently than the dedicated learner. In particular, they tend to look for what they need right now and stop reading as soon as they find it -- or realize that they won't! This makes brevity and specificity important elements of user documentation. They are just as important when writing instructions and tutorials for students.

The tutorial goes on to give concrete advice and examples on how to write instructions, how to induce rehearsal in the learner, and how to organize presentation to avoid overloading the learner. Almost every page of the notes has something to use as I think about refining my spring Programming Languages and Paradigms course. I've written extensive lecture notes for this course, of which I'm proud. But I think I'll use some of my prep time in the coming term to apply the ideas from this tutorial to my lectures. I can think of a couple ways to improve them:

  • varying the strategies I use to invoke rehearsal (zooming in and out, changing modes of presentation, and supporting a new assertion with what we just learned)

  • making sure that my instructions clearly communicate their intention, endpoint, time frame, and possible signs of success and failure.

I guess I am not surprised by the similarities between writing user documentation and writing for students (and teaching from that writing), but it never occurred to me to mine the former to help me improve the latter.

These tutorial notes were fun to read even without having the presenter in the room. They were written well, spare but engaging. That said, as with most printouts from slide presentations, I would have learned a lot more by having the writer tell the stories that were abstracted into her slides. And I definitely would like to see the examples that she had planned to distribute to illustrate the ideas in the tutorial.

User documentation is certainly not the only other writing form from which instructors can draw ideas and inspiration. Nat Pryce recently wrote about the idea of using the comic book as a form for end-user documentation, and there may be something we instructors could learn from comic book writers. Before you scoff, recall that the U.S. 9/11 Commission Report was adapted into a highly-acclaimed comic book, The 9/11 Report: A Graphic Adaptation. (If I recall correctly, either artist Ernie Colon or writer Sid Jacobson is from my adopted state of Iowa.)

I think the OOPSLA crowd missed a good opportunity to learn from this tutorial. But there is one consolation... I believe that the tutorial presenter is currently writing a book on this topic. I'll definitely pick up a copy, and not because I plan to write a lot of user documentation.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

December 18, 2006 4:00 PM

Another Way Our Students Are Different From Us

On my CS 1 final exam, I asked the students to suggest a way that we could hide a string in a sound, where a Sound object is encoded as an array of sound samples in the range [-32768..32767]. Here is the last part of the question:

Under your scheme, how many characters could you hide in a song that is 3:05 long?

My caricature of the typical answer is...

It depends.

Actually, most of the students were a bit more specific:

It depends on the sampling rate.

Of course it does. I was looking for an answer of this sort:

(185 * sampling rate) / 100

... for a scheme that stored one character every 100 sound samples. It never occurred to me that most students would get as far as "It depends on the sampling rate." and just stop. When they realized that they couldn't write down an answer such as "42", they must have figured they were done thinking about the problem. I've been doing computer science so long, and enjoying math for much longer, that I naturally wanted to write a formula to express the result. I guess I assumed the students would want to do the same! This is yet another example of how our students are different from us.
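The scheme behind that formula can be sketched in a few lines. This is a hypothetical illustration, not the media computation library the course actually used; `capacity`, `hide`, and `reveal` are names I'm inventing here, and the scheme simply overwrites every 100th sample with a character code:

```python
# Hypothetical hiding scheme: overwrite every 100th sample with a
# character's code.  Character codes (0..255 for Latin-1) fit easily
# in the sample range [-32768..32767].

STRIDE = 100

def capacity(seconds, sampling_rate, stride=STRIDE):
    """How many characters fit: one per `stride` samples."""
    return (seconds * sampling_rate) // stride

def hide(samples, text, stride=STRIDE):
    """Return a copy of `samples` with `text` hidden in it."""
    out = list(samples)
    for i, ch in enumerate(text):
        out[i * stride] = ord(ch)  # audibly degrades the sound a bit
    return out

def reveal(samples, length, stride=STRIDE):
    """Recover `length` hidden characters."""
    return ''.join(chr(samples[i * stride]) for i in range(length))

# A 3:05 song (185 seconds) sampled at 22,050 Hz holds
# (185 * 22050) // 100 = 40,792 characters.
```

The point of the exam question was exactly the capacity formula in that last comment, expressed as a function of the sampling rate.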

Fortunately, nearly all of them came up with a scheme for hiding a text that would work. Some of their schemes would degrade the sound we hear considerably, but that wasn't the point of the question. My goal was to see whether they could think about our representations at that level. In this, they succeeded well enough.

Well, I guess that depends.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 16, 2006 2:25 PM

The Prospect of Writing a Textbook

Writing a book is an adventure. To begin with, it is
a toy and an amusement; then it becomes a mistress, and
then it becomes a master, and then a tyrant. The last
phase is that just as you are about to be reconciled to
your servitude, you kill the monster, and fling him out
to the public.

-- Winston Churchill

So maybe I think of myself as a writer, at least part of the time. Why haven't I written a book -- especially a textbook -- yet? My wife often asks when my book will be ready. She would like to see all the work I've done, both solo and with my many friends and colleagues, to be more visible in the world. My parents ask occasionally, too, as do some other colleagues. For many folks, a book is "the" way to make one's ideas usable by others. Papers, conference presentations, and a blog aren't enough.

To be fair to my wife and many others who ask, I have been working for several years with colleagues on new ways to think about teaching first-year CS courses, motivated by real problems and pattern languages. We have progressed in spurts and sputters, as we learn more about what we are trying to do and as we try to make time to do the hard work of developing something new. I have learned a lot from this collaboration, but we haven't produced a book yet.

I suppose that I just haven't found the project that makes me write a book yet. A book project requires a huge amount of work, and I suppose that I need to really want to write a book to take on the yoke.

Other than my doctoral dissertation ("Conceptual Retrieval from Case Memory Based on Problem-Solving Roles") and several long-ish papers such as a master's thesis ("Temporal Logic and its Use in the Symbolic Verification of Hardware" -- written in nroff, no less!), I have never written a large work. But I occasionally sense what it must be like to write a textbook, a monograph, or even a novel. When I am at my most productive producing ideas and words, I see common triggers and common threads that tie ideas together across time and topic. When I am blogging, these connections manifest themselves as links to past entries and to other work I've done. (Due to the nature of blogging, these links are always backwards, even though my notes often remind me to foreshadow something I intend to write about later and then to link back to the current piece.) However, I know that when I have blogged on a topic I've only done the easy part. Even when I take the time to turn a stream-of-consciousness idea into a reasonably thoughtful piece for my blog, the hard work would come next: honing the words, creating a larger, coherent whole, making something that communicates a larger set of ideas to a reader who wants more than to drop occasionally into a blog to hear about a new idea. I don't think I fear this work, though I do have a healthy respect for it; I just haven't found the One Thing that makes me think the payoff would be worth the hard work.

The closest thing to a textbook that I have written is the set of lecture notes for my Programming Languages course. They are relatively complete and have been class-tested over several years. But these are largely a derivative work, drawing heavily on the text Essentials of Programming Languages and less heavily on a dozen other sources and inspiration. Over time, they have evolved away from dependence on those sources, but I still feel that my notes are more repackaging than original work. Furthermore, while I like these notes very much, I don't think there is a commercial market that justifies turning them into a real textbook, with end-of-the-chapter summaries and exercises and all that. They serve their purpose quite well -- at least well enough -- and that's enough for me.

What about the personal and university identity to be gained by writing a text? Reader Mike Holmes pointed me to a passage on the admissions web site of one of our sister institutions boasting that its "professors actually write the textbooks they and professors at other colleges use". That's a strong marketing point for a school. My university likes to distinguish itself from many larger universities by the fact that our tenured faculty are the ones teaching the classes your students will take; how much better if those professors had written the text! Well, as Mike points out, many of us have had courses with author-professors who were average or worse in the classroom. And if a textbook has few or no external adoptions -- as so, so many CS texts do -- then the students at Author's U. would probably have been better off had the author devoted her textbook-polishing efforts to improving the course.

Maybe this is all just a rationalization of my lack of ambition or creative depth. But I don't think so. I think I'll know when a book I'm meant to write comes along.

Could my work on this blog eventually lead to a book? Another reader once suggested as much. Perhaps. It is certainly good to have an outlet for my ideas, a place where they go from abstractions to prose that might be valuable to someone. The blog is also an encouragement to write regularly, which should help me become a better writer if nothing else. Then, if a book I'm meant to write comes along, I should be prepared to write it.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

December 09, 2006 7:12 AM

An Unplanned Risk

Those of you who are professors or students will appreciate this.

In what was either the stupidest move of my teaching career or a sign of ultimate boldness and confidence, on Tuesday I returned an exam to my students and then closed the session with student evaluations. This wasn't just any exam. The class average score was just under 43%, and only one student scored high enough to pass given the standard grading scale I use -- and that was a C. As I told them then, I doubt they will encounter many other professors who would return such an exam and do student evals the same day.

For those of you who've never had me for a class, let me explain that I do not typically curve individual exams or other assignments. I've always felt that curving exams was a bad idea. I prefer to take a look at the grades as they stand at the end of the semester and make any adjustments to the grading scale then that might be necessary so that final grades reflect the work of the class and individual students. (There is probably another good blog topic for another time.)

Student evaluations play an important role in the academic world. For good or bad, they are one of the few practical, low-cost ways that we have of assessing the quality of teaching. As an instructor, I often enjoy getting feedback on particular elements of my course, especially the new things I have tried, to see how students perceive them and how I might implement them better. Unfortunately, the standard "instrument" that most schools use for student evaluations doesn't give very good "formative feedback" of this sort. Worse, it asks students a lot of questions that they really can't answer with any authority. As a result, these evaluations are also of limited value in providing evaluative feedback.

But that is neither here nor there. Student evaluations of one form or another are with us for now, and I was planning to have mine done last Tuesday. Then I graded the exam. As I recorded the scores, it occurred to me that returning them on evaluation day created an interesting situation. I suppose that I could have postponed one or the other to the last day of classes, but putting off either was inconvenient. And to be honest, I really couldn't see myself waiting on either. I have always prided myself in not allowing the presence of outside observers to affect my classroom behavior. As an untenured faculty member, I had visitors from my department's professional assessment committee in classes on several occasions each fall. On principle I preferred that visitors come unannounced, lest I change my behavior in anticipation of their presence -- or that they think I might have. In this case, it just seemed wrong to toy with the process, even knowing that the evaluations could be slanted by transitory emotions.

Perhaps rather than being an act of boldness this was an act of trust -- trust in the students to judge the course and my teaching of it fairly, looking at the big picture rather than just the exam score or how they felt that day. Trusting them seemed only fair, as I had just asked them to trust that their final grades would reflect an honest evaluation of their work and learning over the course of the whole semester, and not the scarily low score on the exam I just handed them.

Results from the assessment won't be available for a few weeks. We'll see. As for the exam, I clearly missed the mark. The students thought it too long, but even after grading it I don't think that was the real problem. Had I given most students twice as much time, they would not have scored significantly better. They simply didn't understand the material well enough. This exam mostly covered computation with sound, with a little bit about text and files at the end. For some reason, the students were not prepared well enough for this exam. I do not have a good sense of why just yet. The easy answer for the instructor is always that the students didn't study hard enough, or take the material seriously. While that can be true, it's usually just a reflexive excuse that saves the instructor's own feelings. In what way did my in-class exercises, demonstrations, and explanations not do the job? More work to be done...


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 02, 2006 10:42 AM

Writing Code

Here's the thing. I like to write code.

I like to write code that most people take for granted. I like to write code to solve hard problems. I like to write simple programs. I like to solve the programming assignments that I set before my students. I like to discover problems to solve and then solve them with code. Sometimes, I like to make up problems just so I can write code to solve them.

A colleague of mine is fond of reminding university professors that they are not like most of our students. He means that in the sense that we professors were usually good students, or at least students who liked school, and that we can't expect our students to think the way we do or to like school the way we did. This can be useful as we design our courses and plan our day-to-day interactions with students. It's wise for me to remember that I am probably not like all of my students in another way: I just love to write code.

One of my great joys as an instructor is to come across a student, or even a class full of students, who love to write code. I enjoy working with them in class, and on independent projects, and on undergraduate research. I learn from them, and I hope they learn a little from me along the way:

Ultimately we learn best by placing our confidence
in men and women whose examples invite us
to love what they love.
-- Robert Wilken

One thing is certain. I love to write code.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

November 28, 2006 1:28 PM

Market Timing and Learning

I am a dinosaur who still keeps track of our family finances by hand. (But not for long; one of my Christmas break projects is to convert to Quicken.) While going through some brokerage statements over the long Thanksgiving weekend, an analogy between investing in the stock market and learning occurred to me. Let me know what you think.

"Market timing" is the attempt to invest money in a stock or a market when it is near its lowest price and then to cash out when the price is near its peak. With a perfect model, I could invest when the price is at its minimum and divest when the price is at its maximum, with a maximal profit as my perfect prize. Market timing is a risky endeavor that almost always fails. Why? Because no one has a very good idea about how to recognize maximum and minimum prices. That may not seem so bad; even with prices only near their minimum and maximum, an investor could do quite well. But the risk is much worse than that,

Many investing gurus are keen to point out that most of the gain in the stock market over the last 80 years or more has occurred on relatively few days. In the past I have seen a list of the ten days with the largest gains in the history of the New York Stock Exchange and the Dow-Jones Industrial Average. Any investor who was out of the market on these days missed out on a significant percentage of the market's gain over the last century. I can't seem to find such a list on the web just now, but I did find this page that tells a similar tale:

Consider, for instance, the returns on small company stocks between 1925 and 1992. If you had been invested in small company stocks over this period, your average annual return would have been 12.1%. If you sat out the single best month during that 67 year period, you would have only made 11.2% a year. If you missed out on the best five months, well, forget it... you would have only notched gains of 8.5%. Finally, if you had missed the best ten months -- something all sorts of market timers managed to do in 1995 -- you would have only retained 6.3% annual gains, almost half of what you could have made had you been fully invested.
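The arithmetic behind figures like these is easy to reproduce. Here is an illustrative sketch with synthetic monthly returns -- nothing below comes from the study quoted above, and the numbers are made up purely for demonstration:

```python
# Illustrative only: synthetic returns showing how sitting out the few
# best months drags down a geometric annualized return.

def annualized(monthly_returns):
    """Geometric annualized return for a sequence of monthly returns."""
    growth = 1.0
    for r in monthly_returns:
        growth *= 1.0 + r
    years = len(monthly_returns) / 12
    return growth ** (1 / years) - 1

def sit_out_best(monthly_returns, k):
    """Replace the k best months with 0% -- the investor was in cash."""
    out = list(monthly_returns)
    best = sorted(range(len(out)), key=lambda i: out[i], reverse=True)[:k]
    for i in best:
        out[i] = 0.0
    return out

# Ten flat years of 1% a month, except one spectacular 50% month:
# missing just that one month costs a large slice of the annualized gain.
rets = [0.01] * 119 + [0.50]
```

Because the gains compound geometrically, a single missed outlier month depresses every subsequent year's growth, which is why the quoted penalties are so severe.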

I think that learning works the same way. Once before, I related this phenomenon to negative splits in running: the gains a learner makes may be relatively small early in a course and relatively larger later in the semester, as the material steeps in her mind and is processed both consciously and subconsciously. I first experienced this in a course as an undergraduate when I plodded along for eight weeks before I somehow "got it".

As near as I can tell, the day on which one gets it is often unpredictable. Sure, some courses and some kinds of material can be learned steadily through a methodical process. But most of the best kind of learning involves a shift in how we see problems or how we understand solutions, and this kind of learning usually seems to arrive in an a-ha! moment.

If I think of my effort studying a new topic or learning a new technique as an investment of mental energy, then I don't want to find myself in the position to "time the market" -- trying to guess the right day or few right days to be thinking hard about the material or practicing the skill. The a-ha! moment will come when I least expect it, and if I am "out of the market" that day -- waiting until next weekend to start my programming assignment, or skipping a day of study to work on something else, or doing the work but merely going through the motions -- then I will miss the chance to make the big stride to the next level of understanding. And like the investing sort of market timing, the risk is pernicious. Not only won't I end up with the big gain; I may end up with nothing at all to show for my study, just time spent and no understanding of anything of consequence. Learning is cumulative, and unless we give it a chance to accrue and to reinforce itself, we don't accumulate anything.

Sadly, today's students often operate under conditions that require a random sort of market timing. Many of them work far too many hours to allow them to work each day on each of their courses, at least not in a fully-engaged way. Some of them take too many classes, in an effort to graduate "on time" or as soon as possible. (The rush is sometimes driven by financial considerations, but sometimes by misplaced ambition.) Many students come to college these days with many other interests, curricular or extracurricular, that interfere with their classwork. The result is hit-or-miss study, too many days without thinking about a particular course, and little understanding.

I think that this analogy applies to teaching, too. Unless I stay engaged with my course and my students throughout the semester, I will likely be unprepared when opportunity knocks. Indeed, opportunity probably knocks at all only because we stay engaged and make the conditions possible for a visit.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

November 21, 2006 5:31 PM

Exuberance and Fear in the Classroom

One measure of how busy I am is how much outside reading I am able to do. By that measure, this term has been busier than most. In the last few days, I've tried to catch up on some blog reading.

Last month Kathy Sierra posted two short pieces that ring true as I reach the end of my first semester back in CS1 in oh so long. Her primary context is the business world, with managers and employees and products and users. But her advice is a useful starting point in the context of an academic department, with its chair and faculty and courses and students.

In Knocking the Exuberance out of Employees, Sierra reminds us how easy it is to say that we value creativity and curiosity yet create an environment that not only devalues these traits but even penalizes them. As a teacher, I say that I value creativity and curiosity in my students, but I have to attend to creating a course in which students feel free to have and express these traits.

I think that I have done reasonably well this semester in not knocking the exuberance out of my students, at least in gratuitous ways. With the media computation theme, I have tried to:

  • present students with problems that they care about
  • give them freedom in how they approach problems
  • eliminate unnecessary constraints on their solutions

Earlier in my academic career, I was more prone to violating the last two of these. I know that students in our department struggle with the last of these in one course in particular, especially in the form of ticky-tack style requirements. I understand why some faculty impose rules -- they are convinced that there is a right way to do things and want their students to learn the right way sooner. Even if there is a right way, though, instructors have to walk the line between helping students learn to "do things right" while keeping them interested and motivated enough to want to get better. I also know that giving students latitude requires exercising latitude in judging how far is far enough. Without confidence in one's own ability, an instructor often feels safer within constraining rules. But will students live comfortably there, too? Often not.

Sierra writes about similar issues from the user's perspective in Reducing Fear is the Killer App. Users won't feel comfortable to cozy up to your product if they are afraid -- of breaking something, of feeling stupid, of most anything. Students are in a similar frame of mind when they approach a course, and students who are just beginning college, or their major, are most at risk. They want to do well in the course. They want to enjoy their new major. They want to master tools and ideas.

How can an instructor reduce fear? I can think of a few ways.

  • taking small steps in class material and in expectations for student work
  • using simple-enough tools, tools that do not cause or exacerbate fear
  • assigning approachable tasks, tasks that students have the ability to solve by working just beyond what they already find comfortable
  • creating a warm environment in class and in the instructor's office

I don't think that my classroom or office says "comfortable" quite in the way the dentist's office or hospital do in Sierra's pictures. My office certainly looks like a place where someone works. (In the common phrase of the day, my office "looks lived in".) I've tried to rely on my textbook authors' experience by sticking to the textbook as much as my constitution allows, in an effort to take the right sort of steps and set the right sort of problems before the class. However, I'm not a "natural" teacher, at least not the kind of natural teacher who makes instant personal bonds with his students, who sets them at ease with the twinkle of an eye. My hope is that, by consciously thinking about the things Sierra writes about in these two essays, I can at least do no harm.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

November 18, 2006 9:17 AM

No Shortcuts

I overheard a conversation in the locker room yesterday that saddened me. Two university students were chatting about their lives and work-outs. In the course of discussing their rather spotty exercise routines, one of them said that he was planning to start using creatine as a way to develop a leaner "look". Creatine is a naturally-occurring compound that some folks use as a nutritional supplement.

Maybe I'm a fuddy-duddy, but I'm a little leery about using supplements to enhance my physical appearance and performance. It also may just be a matter of where to draw the line; I am willing to take multivitamins and zinc supplements for my immune system. The casual use of creatine by regular guys, though, seems like something different: an attempted shortcut.

There aren't all that many shortcuts to getting better in this world. Regular exercise and a good diet will help you develop a leaner body and the ability to perform better athletically. The guys I overheard knew that they could achieve the results they needed by exercising and cutting back on their beer consumption, but they wanted to reach their goal without having to make the changes needed to get there in the usual way.

The exercise-and-diet route also has other positive effects on one's body and mind, such as increased stamina and better sleep. Taking a supplement may let you target a specific goal, but the healthier approach improves your whole person.

Then there's the question of whether taking a supplement actually achieves the promised effect...

These thoughts about no shortcuts reminded me of something I read on Bob Martin's blog a few weeks ago, called WadingThroughCode. There Bob cautioned against the natural inclination to avoid the hard work of slogging through other people's programs. We all figure sometimes that we can learn more just by writing our own code, but Bob tells us that reading other people's code is an essential part of a complete learning regimen. "Get your wading boots on."

I've become sensitized to this notion over the last few years as I've noticed an increasing tendency among some of even my best students to not want to put in the effort to read their textbooks. "I've tried, and I just don't get it. So I just study your lecture notes." As good as my lecture notes might be, they are no substitute for the text. And the student would grow by making the extra effort it takes to read a technical book.

There are no shortcuts.


Posted by Eugene Wallingford | Permalink | Categories: General, Running, Teaching and Learning

November 17, 2006 4:36 PM

More Serendipity

When my CS1 class and I explored compression recently, it occurred to one student that we might be able to compress our images, too. Pictures are composed of Pixels, which encode RGB values for colors. We had spent many weeks accessing the color components using accessors getRed(), getGreen(), and getBlue(), retrieving integer values. But the color component values lie in the range [0..255], which would fit in byte variables.

So I spent a few minutes in class letting students implement compression and decompression of Pixels using one int to hold the three bytes of data. It gave us a good set of in-class exercises to practice on, and let students think about compression some more.

Then we took a peek at how Pixels are encoded -- and found, much to the surprise of some, that the code for our text already uses our compressed encoding! We had reinvented the existing implementation.

I didn't mind this at all; it was a nice experience. First, it helped students to see very clearly that there does not have to be a one-to-one relationship between accessors and instance variables. getRed(), getGreen(), and getBlue() do not retrieve the values of separate variables, but rather the corresponding bytes in a single integer. This point, that IVs != accessors, is one I like to stress when we begin to talk about class design. Indeed, unlike many CS1 instructors, I make a habit of creating accessors only when they are necessary to meet the requirements of a task. Objects are about behavior, not state, and I fear that accessors-by-default gives a wrong impression to students. I wonder if this is an unnecessary abstraction that I introduce too early in my courses, but if so then it is just one of my faults. If not, then this was a great way to experience the idea that objects provide services and encapsulate data representation.

Second, this gave my students a chance to do a little bit of arithmetic, figuring out how to use multiplication to move values into higher-order bytes of an integer. When we looked inside the Pixel class, we got to see the use of Java's shift operators to accomplish the same goal. This was a convenient way to see a little extra Java without having to make a big fuss about motivating it. Our context provided all the motivation we needed.
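The packing trick the class rediscovered can be sketched in a few lines of Java. This is a minimal illustration of the idea, assuming 8-bit color components; the class and method names are mine, not the textbook's:

```java
class PackedPixel {
    // Pack three 8-bit color components into one int: 0xRRGGBB.
    static int pack(int red, int green, int blue) {
        return (red << 16) | (green << 8) | blue;
    }

    // The accessors retrieve bytes of the single int, not separate variables.
    static int getRed(int rgb)   { return (rgb >> 16) & 0xFF; }
    static int getGreen(int rgb) { return (rgb >> 8) & 0xFF; }
    static int getBlue(int rgb)  { return rgb & 0xFF; }
}
```

Multiplying by 256 and 65536 accomplishes the same thing as the left shifts; the shifts simply say more directly what is going on.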

I hope the students enjoyed this as much as I did. I'll have to ask as we wrap up the course. (I should practice what I preach about feedback!)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 16, 2006 5:28 PM

"No, We're Laughing with You"

It seemed an innocent enough question to ask my CS 1 class.

"Do you all know how to make a .zip file?"

My students laughed at me. As near as I could tell, it was unanimous.

For a brief second I felt old. But it wasn't that long ago that students in my courses had to be shown how to zip up a directory, so perhaps their reaction is testimony more to the inexorable march of technology than to my impending decrepitude.

At least most of them seemed interested when I offered to show them how to make a .jar file en route to creating a double-clickable app from their slideshow program.

I may be a dinosaur, but I'm not completely useless to them.

Yet.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

November 14, 2006 4:32 PM

When Opportunity Knocks

When teaching a class, sometimes a moment comes along that is pregnant with possibilities. I usually like to seize these moments, if only to add a little spice to my own enjoyment of the course. When the new avenue leads to a new understanding for the students, all the better!

A couple of weeks ago, I was getting ready to finish off the unit on manipulating sound in CS 1. The last chapter of that unit in the textbook didn't excite me as much as I had hoped, and I had also just been sensitized by a student's comment. The day before, at a group advising session, one of my students had commented that the media computation was cool and all, but he wanted "data to crunch", and he was hoping to do more of that in class. My first reaction was, isn't processing a 1-megapixel image crunching enough data? Sure, he might say, but the processing we were doing at each pixel, or at each sample in a sound, was relatively simple.

With that in the back of my mind somewhere, I was reading the part of the chapter that discussed different encodings for sound, such as MP3 and MIDI. My stream of consciousness was working independently of my reading, or so it seemed. "MP3 is a way to compress sound ... compression algorithms range from simple to complex operations ... we can benefit greatly by compressing sound ... our Sound uses a representation that is rather inefficient ...". Suddenly I knew what I wanted to do in class that day: teach my students how to compress sound!

Time was short, but I dove into my new idea. I hadn't looked very deeply at how the textbook was encoding sounds, and I'd never written a sound compression algorithm before. The result was a lot of fun for me. I had to come up with a reasonable encoding to compress our sounds, one that allowed me to talk about lossy and lossless compressions; I had to make the code work. Then I had to figure out how to tell the story so that students could reach my intended destination. This story has to include code that the students write, so that they can grow into the idea and feel some need for what I ask them to do.

I ended up creating a DiffSound that encoded sounds as differences between sound samples, rather than as samples. The differences between samples tend to be smaller than the sample values themselves, which gives us some hope of creating a smaller file that loses little or no sound fidelity.
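The delta-encoding idea behind DiffSound can be sketched briefly. This is my own illustrative code, not the class I wrote for the course; the encoder stores each sample's difference from its predecessor, and the decoder reverses the process losslessly:

```java
class DiffEncoder {
    // Encode samples as differences between successive samples.
    // The differences tend to be small numbers, which is what makes
    // a more compact representation possible.
    static int[] encode(int[] samples) {
        int[] diffs = new int[samples.length];
        int prev = 0;
        for (int i = 0; i < samples.length; i++) {
            diffs[i] = samples[i] - prev;
            prev = samples[i];
        }
        return diffs;
    }

    // Recover the original samples by summing the differences.
    static int[] decode(int[] diffs) {
        int[] samples = new int[diffs.length];
        int prev = 0;
        for (int i = 0; i < diffs.length; i++) {
            prev += diffs[i];
            samples[i] = prev;
        }
        return samples;
    }
}
```

As written this is lossless; a lossy variant might quantize the differences into fewer bits, trading fidelity for size.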

This opportunity had another unexpected benefit. The next chapter of the text introduced students to classes and objects. While we had been using objects of the Picture, Pixel, Sound, and SoundSample classes in a "bottom-up" fashion, we had never read a full class definition. And we certainly hadn't written one. The textbook used what was for me an uninspiring first example, a Student class that knows its grades. What was worse than not exciting me was that the class was not motivated by any need the students could feel from their own programming. But after writing simple code to convert a sound from an array of sound samples into an array of sample differences, we had a great reason to create a new class -- to encapsulate our new representation and to create a natural home for the methods that manipulate it. When I first embarked on the compression odyssey, I had no idea that I would be able to segue so nicely into the next chapter. Serendipity.

After many years of teaching, bumping into such opportunities, and occasionally converting them into improvements to my course, I've learned a few lessons. The first is that not all opportunities are worth seizing. Sometimes, the opportunity is solely to my benefit, letting me play with some new idea. If it produces a zero sum for my students, then it may be worth trying. But too often an opportunity creates a distraction for students, or adds overhead to what they do, and as a result interferes with their learning. Some design patterns work this way for OOP instructors. When you first learn Chain of Responsibility, it may seem really cool, but that doesn't mean that it fits in your course or adds to what your students will learn. Such opportunities are mirages, and I have to be careful not to let them pull me off course.

But many opportunities make my course better, by helping my students learn something new, or something old in a new way. These are the ideas worth pursuing. The second lesson I've learned is that such an idea usually creates more work for me. It's almost always easier to stay on script, to do what I've done before, what I know well. The extra work is fun, though, because I'm learning something new, too, and getting a chance to write the code and figure out how to teach the idea well. A few years ago, I had great fun creating a small unit on Bloom filters for my algorithms course, after reading a paper on the plane back from a conference. The result was a lot of work -- but also a lot of fun, and also an enrichment to what my students learned about the trade-offs between data and algorithm and between efficiency and correctness. That was an opportunity well-seized. But I needed time to turn the possibility into a reality.
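The trade-off Bloom filters embody can be shown in a small sketch. This is my own illustrative code, not the unit from the course: membership tests may report false positives but never false negatives, trading a little correctness for a lot of space:

```java
import java.util.BitSet;

// A tiny Bloom filter sketch. k hash functions set k bits per item;
// an item "might be present" only if all k of its bits are set.
class BloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    BloomFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    void add(String item) {
        for (int i = 0; i < hashes; i++)
            bits.set(index(item, i));
    }

    boolean mightContain(String item) {
        for (int i = 0; i < hashes; i++)
            if (!bits.get(index(item, i)))
                return false;   // definitely absent
        return true;            // probably present
    }

    // Derive the i-th hash from the item's hash code and a seed.
    private int index(String item, int seed) {
        return Math.floorMod(item.hashCode() * 31 + seed * 0x9E3779B9, size);
    }
}
```

The filter never stores the items themselves, which is precisely why it can be so small -- and why it can only ever answer "no" or "probably".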

The third lesson I've learned is that using real data and real problems greatly increases the chances that I will see an unexpected opportunity. Images and sounds are rich objects, and manipulating them raises a lot of interesting questions. Were I teaching with toy problems -- converting Fahrenheit to Celsius, or averaging grades in an ad hoc student array -- then the number of questions that might raise my interest or my students' interest would be much smaller. Compression only matters if you are working with big data files.

Finally, I've learned to be open to the possibility of something good. I have to take care not to fall into the rut of simply doing what's in my notes for the day. Eternal vigilance is the price of freedom, but it is also the price we must pay if we want to be ready to be excited and to excite our students with neat ideas lying underneath the surface of what we are learning.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 28, 2006 8:05 PM

OOPSLA This and That

In addition to the several OOPSLA sessions I've already blogged about, there were a number of other fun or educational moments at the conference. Here are a few...

  • Elisa Baniassad presented an intriguing Onward! talk called The Geography of Programming. She suggested that we might learn something about programming language design by considering the differences between Western and Eastern thought. Her motivation came from Richard Nisbett's The Geography of Thought: How Asians and Westerners Think Differently--And Why. One more book added to my must-read list...

  • Partly in honor of OOPSLA stalwart John Vlissides, who passed away since OOPSLA'05, and partly in honor of Vlissides et al.'s seminal book Design Patterns, there was a GoF retrospective panel. I learned two bits of trivia... John's favorite patterns were flyweight (which made it into the book) and solitaire (which didn't). The oldest instance of a GoF pattern they found in a real system? Observer -- in Ivan Sutherland's SketchPad! Is anyone surprised that this pattern has been around that long, or that Sutherland discovered its use over 40 years ago? I'm not.

  • On the last morning of the conference, there was scheduled a panel on the marriage of XP and Scrum in industry. Apparently, though, before I arrived on the scene it had morphed into something more generally agile. While discussing agile practices, "Object Dave" Thomas admitted he believes that, contrary to what many agilists seem to imply, comments in code are useful. After all, "not all code can be read, being encrypted in Java or C++ as it is". But he then absolved his sin a bit by noting that the comment should be "structurally attached" to the code with which it belongs; that is a tool issue.

  • Then, on the last afternoon of the conference, I listened in on the Young Guns panel, in which nearly a dozen computer scientists under the age of 0x0020 commented on the past, present, and future of objects and computing. One of these young researchers commented that scientists tend to make their great discoveries while still very young, because they don't yet know what's impossible. To popularize this wisdom, gadfly and moderator Brian Foote suggested a new motto for our community: "Embrace ignorance."

  • During this session, it occurred to me that I am no longer a "young gun" myself, spending the last six days of my 0x0029th year at OOPSLA. This is part of how I try to stay "busy being born", and I look forward to it every year. I certainly don't feel like an old fogie, at least not often.

  • Finally, as we were wrapping up the conference in the committee room after the annual ice cream social, I told Dick Gabriel that I would walk across the street to hear Guy Steele read a restaurant menu aloud. Maybe there is a little bit of hero worship going on here, but I always seem to learn something when Steele shares his thoughts on computing.

    ----

    Another fine OOPSLA is in the books. The 2007 conference committee is already at work putting together next year's event, to be held in Montreal during the same week. Wish us wisdom and good fortune!


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    October 23, 2006 11:51 PM

    OOPSLA Educators' Symposium 1: Bob Martin on OOD

    Unlike last year, the first day of OOPSLA was not an intellectual charge for me. As tutorials chair, I spent the day keeping tabs on the sessions, putting out small fires involving notes and rooms and A/V, and just hanging out. I had thought I might sneak into the back of a technical workshop or an educational workshop, but my time and energy were low.

    Today was the Educators' Symposium, my first since chairing in 2004 and 2005. I enjoyed not having to worry about the schedule or making introductions. The day held a couple of moments of inspiration for me, and I'll write about at least one -- on teaching novices to design programs -- later.

    Uncle Bob Martin

    The day's endpoint was an invited talk by Robert "Uncle Bob" Martin. In a twist, Bob let us choose the talk we wanted to hear: either his classic "Advanced Principles of Object-Oriented Class Design" or his newer and much more detailed "Clean Code".

    While we thought over our options, Martin started with a little astronomy lesson that involved Henrietta Levitt, globular clusters, distance calculations, and the width of our galaxy. Was this just filler? Was there a point related to software? I don't know. Maybe Bob just likes telescopes.

    The vote of the group was for Principles. Sigh. I wanted Clean Code. I've read all of Bob's papers that underlie this talk, and I was in the mood for digging into code. (Too many hours pushing papers and attending meetings will do that to a guy who is Just a Programmer at heart.)

    But this was good. With some of the best teachers, reading a paper is no substitute for the performance art of a good presentation. This blog can't recreate the talk, so if you already know Martin's five principles of OO design, you may want to move on to his latest ruminations on naming.

    At its core, object-oriented design is about managing dependencies in source code. It aims to circumvent the problems inherent in software with too many dependencies: rigidity, fragility, non-reusability, and high viscosity.

    Bob opened with a simple but convincing example of a copy routine, a method that echoes keyboard input to a printer, to introduce one of his five principles, the Dependency Inversion Principle. In procedural code, dependency flows from the top down in a set of modules. In OO code, there is a point where top-down dependencies end at an abstraction like an interface, and the dependencies begin to flow up from the details to the same abstractions. My favorite detail in this example was his pointing out that getch() is a polymorphic method, programmed to the abstractions of standard input and standard output. ("Who needs all that polymorphism nonsense," say my C-speaking colleagues. "Let's teach students the fundamentals from the bottom up." Hah!)
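The copy routine can be sketched in Java. The interface and class names here are my own illustration, not Martin's code: the routine depends only on the two abstractions, so neither the keyboard nor the printer appears anywhere in it:

```java
// Abstractions for a source and sink of characters. Concrete devices
// (keyboard, printer, files, strings) implement these; Copier never
// sees them -- the dependencies point up toward the interfaces.
interface CharSource { int getch(); }       // returns -1 at end of input
interface CharSink   { void putch(int c); }

class Copier {
    static void copy(CharSource in, CharSink out) {
        int c;
        while ((c = in.getch()) != -1) {
            out.putch(c);
        }
    }
}
```

To copy from a new device, we write a new implementation of CharSource or CharSink; the copy routine itself never changes.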

    Martin then went through each of his five principles, including DIP, with examples and a theatrical interaction with his audience. Here are the highlights to me:

    • The Single Responsibility Principle states that a class should have one and only one reason to change.

      Side lesson: There is no perfect way to write code. There are always competing forces. There is a conflict between the design principle of encapsulation and the SRP. To ensure the SRP, we may need to relax encapsulation; to ensure encapsulation, we may need to relax the SRP. How to choose? All other things equal, we should prefer the SRP, as it protects us from a clear and present danger, rather than from potential bad behavior of programmers working with our code in the future. A mini-sermon on overprotective languages and practices followed. The Smalltalk programmers in the audience surely recognized the theme.

    • The Open/Closed Principle states that a class should be open for extension and closed for modification. The idea is that any change to behavior should be implemented in new code, not in changes to existing code.

      Side lesson: OO design works best if you can predict the future, so that you can select the abstractions which are likely to change. We can predict the future (or try) either by thinking hard or by letting customers use early versions of the code and reacting to the actual changes they cause. Ceteris paribus, OO design may be better than the alternatives, but it still requires work to get right.

    • The Liskov Substitution Principle is the formal name for one of the central principles of my second course in OO programming over the last decade, which I have usually called "substitutability". We should be able to drop an instance of a subclass into a variable typed to a superclass -- and not have the client know the difference. In practice, we recognize violations of the LSP when we see limitations in subclasses such as overriding methods throwing exceptions.

    • Of all Martin's principles, the one I tend to teach and consider consciously least often is the Interface Segregation Principle. Rather than writing classes that depend on a "fat" class that offers many services, we should segregate clients into sets according to their thinner needs by means of interfaces.

    • Finally, we return to the Dependency Inversion Principle. In some ways, this is the most basic of the design ideas in Martin's cadre, and it is the one by which we can often distinguish OO pretenders from those who really get it. Details in our programs should depend on abstractions, not the other way around. If you call a method, create a subclass, override a method ... you want the depended-on method or class to be abstract. Abstractions hide the client code from changes that almost necessarily follow concrete implementations.

      Side lesson: Another conflict we face when we program is that between flexibility and type safety. As with the SRP and encapsulation, to achieve one of flexibility and type safety in practice, we often have to relax the other. Martin described how the trend toward test-driven design and full-coverage unit testing eliminates many of the compelling reasons for us to seek type safety in a language. The result is that in this competition between forces we can favor flexibility as our preferred goal -- and choose the languages we use differently! Ruby, Python, duck typing... these trends signal our ability to choose empowering, freeing languages as long as we are willing to achieve the value of type safety through the alternate means of tests.
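The Open/Closed Principle from the list above can be sketched concretely. The names here are illustrative, not from Martin's talk: adding a new kind of shape means writing a new class, not editing existing code:

```java
// Open for extension, closed for modification: new behavior arrives
// as a new implementation of Shape, while totalArea() never changes.
interface Shape { double area(); }

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Geometry {
    static double totalArea(Shape[] shapes) {
        double total = 0.0;
        for (Shape s : shapes) total += s.area();
        return total;
    }
}
```

A Triangle class added next month would require no edits to Geometry at all, which is the whole point of the principle.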

    The talk just ended with no summary or conclusion, probably because we were short on time. Martin had been generous with his audience interaction throughout, asking and answering questions and playfully enjoying all comments. The talk really was quite good, and I can see how he has become one of the top draws on the OO speaking circuit. At times he sounded like a preacher, moving and intoning in the rhythms of a revival minister. Still, one of his key points was that OO as religion isn't nearly as compelling as OO as proven technique for building better software.

    The teacher in me left the symposium feeling a little bit second-best. How might I speak and teach classes in a way that draws such rapt interest from my audience? Could I teach more effectively if I did? The easy answer is that this is just not my style, but that may be an excuse. Can I -- should I -- try to change? Or should I just try to maximize what I do well? (And just what is that?)


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    October 13, 2006 11:00 AM

    Student Entrepreneurship -- and Prosthetics?

    Technology doesn't lead change.
    Need leads change.

    -- Dennis Clark, O&P1

    This morning I am hanging out at a campus conference on Commercializing Creative Endeavors. It deals with entrepreneurship, but from the perspective of faculty. As a comprehensive university, not an R-1 institution, we tend to generate less of the sort of research that generates start-ups. But we do have those opportunities, and even more so the opportunity to work with local and regional industry.

    The conference's keynote speaker is Dennis Clark, the president of O&P1, a local orthotics and prosthetics company that is doing amazing work developing more realistic prosthetics. His talk focused on his collaboration with Walter Reed Army Medical Center to augment amputees from the wars in Afghanistan and Iraq. It is an unfortunate truth that war causes his industry to advance, in both research and development. O&P1's innovations include new fabrication techniques, new kinds of liners and sensors, and shortened turnaround times in the design, customization, fabrication, and fitting of prosthetics for specific patients.

    Dennis has actively sought collaborations with UNI faculty. A few years ago he met with a few of us in computer science to explore some project ideas he had. Ultimately, he ended up working closely with two professors in physics and physical education on a particular prosthetics project, specifically work with the human gait lab at UNI, which he termed "world-class". That we have a world-class gait lab here at UNI was news to me! I have never worked on human gait research myself, though I was immediately reminded of some work on modeling gait by folks in Ohio State's Laboratory for AI Research, which was the intellectual progenitor of the work we did in our intelligent systems lab at Michigan State. This is an area rich in interesting applications for building computer models that support analysis of gait and the design of products.

    As I listened to Dennis's talk this morning, two connections to the world of computing came to mind. The first was to understand the remarkable amount of information technology involved in his company's work, including CAD, data interchange, and information storage and retrieval. As in most industries these days, computer science forms the foundation on which these folks do their work, and speed of communication and collaboration are a limiting factor.

    Second, Dennis's description of their development process sounded very much like a scrapheap challenge à la the OOPSLA 2005 workshop of the same name. Creating a solution that works now for a specific individual, whose physical condition is unique, requires blending "space-age technology" with "stone-age technology". They put together whatever artifacts they have available to make what they need, and then they can step back and figure out how to increase their technical understanding for solving similar problems in the future.

    The Paul Graham article I discussed yesterday emphasized that students who think they might want to start their own companies should devote serious attention to finding potential partners, if only by surrounding themselves with as many bright, ambitious people as possible. But often just as important is considering potential partnerships with folks in industry. This is different from networking to build a potential client base, because the industrial folks are more like partners in the development of a business. And these real companies are a powerful source of problems that need to be solved.

    My advice to students and anyone, really, is to be curious. If you are a CS major, pick up a minor or a second major that helps you develop expertise in another area -- and the ability to develop expertise in another area. Oh, and learn science and math! These are the fundamental tools you'll need to work in so many areas.

    Great talk. The sad thing is that none of our students heard it, and too few UNI faculty and staff are here as well. They missed out on a chance to be inspired by a guy in the trenches doing amazing work, and helping people as the real product of his company.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    October 12, 2006 4:51 PM

    Undergraduates and Start-Ups

    I enjoy following Paul Graham's writings about start-up companies and how to build one. One of my goals the last few years, and especially now as a department head, is to encourage our students to consider entrepreneurship as a career choice. Some students will be much better served, both intellectually and financially, by starting their own companies, and the resulting culture will benefit other students and our local and state business ecosystem.

    Graham's latest essay suggests that even undergraduate students might consider creating a start-up, and examines the trade-offs among undergrads, grad students, and experienced folks already in the working world. He has a lot more experience with start-ups than I, so I figure his suggestions are at least worth thinking about.

    Some of his advice is counterintuitive to most undergrads, especially to the undergrads in the more traditional Midwest. At one point, Graham tells students who may want to start their own companies that they probably should not take jobs at places where they will be treated well -- even Google.

    I realize this seems odd advice. If they make your life so good that you don't want to leave, why not work there? Because, in effect, you're probably getting a local maximum. You need a certain activation energy to start a startup. So an employer who's fairly pleasant to work for can lull you into staying indefinitely, even if it would be a net win for you to leave.

    It's hard for the typical 22-year-old to pass up a comfortable position at a top-notch but staid corporation. Why not enjoy your corporate work until you are ready to start your own company? You can make connections, build experience, learn some patterns and anti-patterns, and save up capital. The reason is that in short order you can create an incredible inertia against moving on -- not the least of which is the habit of receiving a comfortable paycheck each week. Going back to ramen noodles thrice daily is tough after you've had three squares paid for by your boss. I give similar advice to undergrads who say, "I plan to go to graduate school in a few years, after I work for a while." Some folks manage this, but it's harder than it looks to most students.

    I also give my students a piece of advice similar to another of Graham's suggestions:

    Most people look at a company like Apple and think, how could I ever make such a thing? Apple is an institution, and I'm just a person. But every institution was at one point just a handful of people in a room deciding to start something. Institutions are made up, and made up by people no different from you.

    The moral is: Don't be intimidated by a great company. Once it was just a few people mostly like us who had an idea and a willingness to make their idea work. I give similar advice about programs that intimidate my students, language interpreters and compilers. One of the morals of my programming languages and compilers courses is that each of these tools is "Just Another Program". Written by a knucklehead just like me. Learn some basic techniques, apply your knowledge of programming and data structures to the various sub-problems faced when building a language processor, and you can write one, too.

    This raises another connection to Graham's advice, regarding how many students who want to create a start-up mistake the implementation of their idea as a commercial product for just a big class project:

    That leads to our second difference [between a start-up's product and a class project]: the way class projects are measured. Professors will tend to judge you by the distance between the starting point and where you are now. If someone has achieved a lot, they should get a good grade. But customers will judge you from the other direction: the distance remaining between where you are now and the features they need. The market doesn't give a shit how hard you worked. Users just want your software to do what they need, and you get a zero otherwise. That is one of the most distinctive differences between school and the real world: there is no reward for putting in a good effort. In fact, the whole concept of a "good effort" is a fake idea adults invented to encourage kids. It is not found in nature.

    The connection between effort, grade, and learning is not nearly as clean as most students think. Some courses require a lot of effort and require little learning; those are the courses that most of us hate. Sometimes one student has to exert much more effort than another to learn the concepts of a course, or to earn an equivalent grade. Every student starts in a different place, and courses exert different forces on different students. The key is to figure out which courses will best reward hard work -- preferably with maximum learning -- and then focus more of our attention there. Time and energy are scarce quantities, so we usually have to ration them.

    If an undergraduate knows that she wants to start her own company, she has a head start in making this sort of decision about where to exert her learning efforts:

    Another thing you can do [as an undergrad, to prepare to start your own company] is learn skills that will be useful to you in a startup. These may be different from the skills you'd learn to get a job. For example, thinking about getting a job will make you want to learn programming languages you think employers want, like Java and C++. Whereas if you start a startup, you get to pick the language, so you have to think about which will actually let you get the most done. If you use that test you might end up learning Ruby or Python instead.

    ... or Scheme! (These days, I think I'd go with Ruby.)

    As in anything else, having some idea about what you want from your future can help you make better decisions about what you want to do now. I admire young people who have a big dream even as undergrads; sometimes they create cool companies. They also make interesting students to have in class, because their goals have a groundedness to them. They ask interesting questions, and sometimes doze off after a long night trying something new out. And even with this head start in making choices, they usually get out into the world and end up thinking, "Boy, I wish I had paid more attention in [some course]." Life is usually more complicated than we expect, even when we try to think ahead.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    September 24, 2006 11:36 AM

    The Aims of Education

    I was cleaning out my briefcase and found a paper I read last summer, Alfred North Whitehead's The Aims of Education. It's not about computing, of course, but it does argue quite nicely that we should set higher goals for our institutions of education than just knowledge, or just training for a trade. In particular, we should aim to produce people "who possess both culture and expert knowledge in some special direction. Their expert knowledge will give them the ground to start from, and their culture will lead them as deep as philosophy and as high as art." I think that we can aim this high in computer science, and even in our introductory courses.

    For Whitehead, to become educated is to learn the art of using knowledge. The using of knowledge is essential; by themselves, facts and rules are mere trivia to clutter the mind. We should always give students the chance to do something meaningful -- say, manipulate media files -- as they learn. Some believe that teaching students abstract knowledge is valuable as a means of "sharpening" their minds, to ready them for real thought later. But pumping inert knowledge into the minds of students is a waste of time at best and hurtful to them at worst:

    The mind is never passive; it is a perpetual activity, delicate, receptive, responsive to stimulus. You cannot postpone its life until you have sharpened it.

    The mind is already alive, eager to do something worthwhile. If our topic elicits their interest, then our instruction must let them run with it now. Only then will they learn anything useful.

    Whitehead argues strongly that students must learn both broadly, to become a cultured citizen, and deeply in some area, in order to fully appreciate the power and beauty of ideas.

    What education has to impart is an intimate sense for the power of ideas, for the beauty of ideas, and for the structure of ideas, together with a particular body of knowledge which has peculiar reference to the life of the being possessing it.

    I love that phrase: an intimate sense for the power of ideas, for the beauty of ideas, and for the structure of ideas. But this sense of intimacy can come only when the student studies some area of knowledge deeply:

    The appreciation of the structure of ideas is that side of a cultured mind which can only grow under the influence of a special study. ... Nothing but a special study can give any appreciation for the exact formulation of general ideas, for their relations when formulated, for their service in the comprehension of life. A mind so disciplined should be both more abstract and more concrete. It has been trained in the comprehension of abstract thought and in the analysis of facts.

    Whitehead felt that English education at the time he wrote this essay (1929) suffered from a lack of this deep focus, from a need for the sort of special study that develops foresight:

    The profound change in the world which the nineteenth century has produced is that the growth of knowledge has given foresight. The amateur is essentially a man with appreciation and with immense versatility in mastering a given routine. But he lacks the foresight which comes from special knowledge.

    In short, expertise matters!

    But does it matter what sort of expertise the student develops, in the arts or the sciences, in literature or technology? Not really, because the ultimate destination is the same in all these areas: style.

    Finally, there should grow the most austere of all mental qualities; I mean the sense for style. It is an aesthetic sense, based on admiration for the direct attainment of a foreseen end, simply and without waste. Style in art, style in literature, style in science, style in logic, style in practical execution have fundamentally the same aesthetic qualities, namely, attainment and restraint. The love of a subject in itself and for itself, where it is not the sleepy pleasure of pacing a mental quarter-deck, is the love of style as manifested in that study.

    Here we are brought back to the position from which we started, the utility of education. Style, in its finest sense, is the last acquirement of the educated mind; it is also the most useful. It pervades the whole being. The administrator with a sense for style hates waste; the engineer with a sense for style economises his material; the artisan with a sense for style prefers good work. Style is the ultimate morality of mind.

    He closes his essay with a claim that education is essentially a religious exercise, though not religious in the sense the word is typically used.

    A religious education is an education which inculcates duty and reverence. Duty arises from our potential control over the course of events. Where attainable knowledge could have changed the issue, ignorance has the guilt of vice. And the foundation of reverence is this perception, that the present holds within itself the complete sum of existence, backwards and forwards, that whole amplitude of time, which is eternity.

    One of the beautiful things about computer science is that we can learn powerful ideas, useful ideas, meaningful ideas -- and we learn how to make them come alive in programs. We can create and control systems using these ideas. We can watch their effects on the world, both at the level of technology and at the level of the people who use the technology, or whose lives are otherwise made better by the technology's presence.

    Not a bad little find, this essay.

    ----

    Postscript: When Whitehead discusses education as "the acquisition of the art of the utilisation of knowledge", he writes a paragraph reminiscent of thoughts I had when thinking about textbooks a few months ago:

    This is an art very difficult to impart. Whenever a textbook is written of real educational worth, you may be quite certain that some reviewer will say that it will be difficult to teach from it. Of course it will be difficult to teach from it. If it were easy, the book ought to be burned; for it cannot be educational. In education, as elsewhere, the broad primrose path leads to a nasty place. This evil path is represented by a book or a set of lectures which will practically enable the student to learn by heart all the questions likely to be asked at the next external examination.

    Introduction to Computing ... A Multimedia Approach

    I am certain that many of my colleagues would find the textbook I'm using in CS1 this fall, Guzdial and Ericson's Introduction to Computing and Programming with Java: A Multimedia Approach, difficult to use. The topics are in a different order from other CS1 books. The ends of the chapters don't provide lots of handy little exercises on if statements and for loops that many professors like to assign in homework and offer on exams. It doesn't even have the sort of programming assignments one might expect. Instead, it asks interesting questions about media, and computing with media. It asks students to extend what they've learned to some new facet of the current topic. But there is no right answer to most of these questions, which leaves students and instructor to exercise judgment in selecting tasks and evaluating work. Shocking! Maybe some real education will take place.


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    September 21, 2006 5:58 PM

    An Audience of One

    What if they threw a talk and no one came?

    I went to a talk on teaching here today and was the only member of the audience. The speaker came, of course, and the organizer of the talk, too. But then there was just me. The speaker honored me by giving his talk anyway. During the session, a fourth person arrived, and he was half-audience and half-expert.

    The talk was on Small Group Instructional Diagnosis (SGID), a technique that helps faculty receive information on how well a course is going. The technique resembles the writers' workshop used in the creative writing world and the software patterns community. In the course of a class period, a moderator -- a person with no connection to the students or course, and preferably not in a power relationship with the instructor -- poses three or four questions to the students and then works with them via group discussion to arrive at a consensus about what is working in the course and what could stand improvement. The moderator requires certain skills at guiding discussion and framing the points of consensus. The author -- the instructor -- is not present to hear the discussions; instead, the moderator meets with the instructor soon after the diagnosis to present the feedback and to discuss the course. Much like a PLoP workshop group, instructors often serve in round-robin as moderators of SGIDs for one another. SGIDs are usually done during the semester, after students have enough time to know the course and instructor but early enough that the professor can use the information to improve the course content, structure, delivery, etc.

    Many instructors might think of this as useful only for "bad teachers" who need to get better. But I think that even the best instructors can get better. Getting feedback and using it to inform one's practice seems like a good idea for any instructor. The colleague who gave this presentation, a math professor, is widely recognized as one of the best teachers at my institution, and he has used SGIDs in his own courses. I can imagine having an SGID done in one of my courses, and I can also imagine offering this tool as a possibility to a faculty member who came to me looking for ways to improve their teaching. I can even imagine using the tool to diagnose a particular instance of a course -- not because I think that there is something intrinsically wrong in my approach, but because the particular mix of me, the course, and the student body in the course doesn't seem to be working.

    The similarity between the SGID and a writers' workshop seemed strong. I'm thinking about how one might augment the process I was shown using ideas from PLoP workshops, such as the summary the workshop group does before moving on to "things we like" and "ways we think the author could improve the work".

    Also much like the PLoP experience, this process requires that a teacher take the risk to have students discuss their work openly in front of a third party and be willing to listen to feedback and fold it back into the work. Many writers are uncomfortable with this idea, and I know that many, many university professors are uncomfortable with this idea. But getting better usually requires an element of risk, if only by allowing honest discussion to take place.

    I'm glad I was the one person who showed up for the talk. I learned from the presentation, of course, but the discussion that took place afterward, in which the half-and-half latecomer described his teaching career and the role an SGID had in helping him earn tenure, was even more illuminating. He was a good storyteller.

    By the way, there is still plenty of time to register for PLoP 2006, which is collocated in Portland this year with OOPSLA. I'm looking forward to both conferences, though I'm sad that I won't be able to run at Allerton Park before, during, and after PLoP!


    Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

    September 18, 2006 6:03 PM

    Professors Who Code

    Steve Yegge's entertaining fantasy on developing programmers contains a respectful but pointed critique of the relevance of a Ph.D. in computer science to contemporary software development, including:

    You hire a Ph.D., it's hit-or-miss. Some of them are brilliant. But then some subset of virtually every educated group is brilliant. The problem is that the notion of a Ph.D. has gradually been watered down for the last century. It used to mean something to be a Doctor of Philosophy: it meant you had materially advanced your discipline for everyone. Von Neumann, Nash, Turing -- people like that, with world-changing dissertations, ....

    ... These kids have to work hard for their Ph.D., and a lot of them never quite finish. But too often they finish without having written more than a few hundred lines of code in the past five years. Or they've over-specialized to the point where they now think Big-O is a tire company; they have no idea how computers or computation actually work anymore. They can tell you just about everything there is to know about SVM kernels and neural-net back propagation, or about photorealistic radiosity algorithms that make your apartment look fake by comparison. But if you want a website thrown together, or a scalable service written, or for that matter a graphics or machine-learning system, you're usually better off hiring a high-school kid, because the kid might actually know how to program. Some Ph.D.s can, but how many of them is it, really? From an industry perspective, an alarming number of them are no-ops.

    Ouch. I'm sure Steve is exaggerating for dramatic effect, but there is a kernel of truth in there. People who study for a Ph.D. in computer science are often optimizing on skills that are not central to the industrial experience of building software. Even those who work on software tend to focus on a narrow slice of some problem, which means not studying broadly in all of the elements of modern software development.

    So, if you want to apprentice with one person in an effort to learn the software industry, you can often do better than selecting randomly among the run-of-the-mill CS professors at your university. But then, where will you connect with this person, and how will you convince him or her to carry you while you slog through learning the basics? Maybe when Wizard Schools are up and running everywhere, and the Ward Cunninghams and Ralph Johnsons of the world are their faculty, you'll have a shot. Until then, a CS education is still the most widely available and trustworthy path to mastery of software development available.

    You will, of course, have to take your destiny into your own hands by seeking opportunities to learn and master as many different skills as you can along the way. Steve Yegge reminds his readers of this all the time.

    In this regard, I was fortunate in my graduate studies to work on AI, in particular intelligent systems. You might not think highly of the work done by the many, many AI students of the 1980s. Paul Graham had this to say in a recent essay:

    In grad school I was still wasting time imitating the wrong things. There was then a fashionable type of program called an expert system, at the core of which was something called an inference engine. I looked at what these things did and thought "I could write that in a thousand lines of code." And yet eminent professors were writing books about them, and startups were selling them for a year's salary a copy. What an opportunity, I thought; these impressive things seem easy to me; I must be pretty sharp. Wrong. It was simply a fad.

    But whatever else you say, you have to admit that most of us AI weenies produced a lot of code. The AI and KBS research groups at most schools I knew sported the longest average time-to-graduate of all the CS areas, in large part because we had to write huge systems, including a lot of infrastructure that was needed in order to do the highly-specialized whatever we were doing. And many of us wrote our code in one of those "super-succinct 'folding languages'" developed by academics, like Lisp. I had the great good fortune of schlocking a non-trivial amount of code in both Lisp and Smalltalk. I should send my advisor a thank-you note, but at the time we felt the burden of all the code we had to produce to get to the place where we could test our cool ideas.

    I do agree with Yegge that progressive CS departments need to work on how better to prepare CS graduates and other students to participate in the development of software. But we also have to wrestle with the differences between computer science and software development, because we need to educate students in both areas. It's good to know that at least a few of the CS professors know how to build software. Not very many of us know when to set the fourth preference on the third tab of the latest .NET development wizard, but we do have some idea about what it's like to build a large software system and how students might develop the right set of skills to do the same.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    September 01, 2006 5:39 PM

    Entering a Long Weekend

    We've been in class for only two weeks, but I am ready for the long Labor Day weekend. The steady onslaught of beginning-of-the-year events into the department office has slowed, and we will soon be into the steady state of the academic semester. Of course, that includes preparing the spring course schedule, handling a tenure case, promoting new programs, writing a newsletter to alumni and other friends of the department, and many other activities, so the steady state isn't a slower pace, just a steadier one.

    I'm looking forward to reading this weekend. I've fallen woefully behind on reading my favorite bloggers and essayists. I hope to remedy that while checking out some U.S. Open tennis on television. I did get a chance to read a little bit today and ran across a couple of neat ideas that hit home during a busy week of classes and department duties.

    The creation of art is not the fulfillment of a need but the creation of a need.

    -- Louis Kahn

    I've written on this topic before as it relates to design, but Kahn's line struck me today in the context of education. As much as we educators need to be pragmatic enough to think about how we serve a clientele of sorts, it is good to remember that a university education done well creates a need for it in the mind of the learner that didn't exist before. Even education in a technical area like computer science.

    Then there was this short post on Belief by Seth Godin:

    People don't believe what you tell them.

    They rarely believe what you show them.

    They often believe what their friends tell them.

    They always believe what they tell themselves.

    If we want to affect how students act and think, then we can't just tell them good stories or show them cool stuff. We have to get them to tell themselves the right stories.

    More reading will be good. I'll also have a chance to do some relaxed thinking about my CS 1 course, as we move into real programming -- foreach loops! But I have some home duties to take care of as well, and I don't want to be this guy, even if I know in my heart that it is easy to be him. Any work I do this weekend will be firmly ensconced in the life of my family. I'll just do my homework at the dining room table with my daughters...
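    The "foreach loops" mentioned above are Java's enhanced for statement. Here is a minimal sketch in the media-computation spirit of the course; the class, method, and array here are invented for illustration, not taken from the course materials:

    ```java
    public class Brightness {
        // Average an array of pixel brightness values using a for-each loop.
        static double average(int[] brightness) {
            int sum = 0;
            for (int level : brightness) {  // read: "for each level in brightness"
                sum += level;
            }
            return (double) sum / brightness.length;
        }

        public static void main(String[] args) {
            int[] pixels = {10, 20, 30, 40};
            System.out.println(average(pixels)); // prints 25.0
        }
    }
    ```

    The appeal for CS1 students is that the loop says what it means -- visit every element -- without index bookkeeping to get wrong.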


    Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

    August 31, 2006 5:12 PM

    Names and Jargon in CS1

    As I've mentioned a few times in recent months, I am teaching our intro course this semester for the first time in a decade or so. After only two weeks, I have realized this: Teaching CS1 every so often is a very good idea for a computer science professor!

    With students who have no programming background, I cannot take anything for granted: values, types, variables, ... expressions, statements, functions/methods ... reference, binding ... so many fundamental concepts to build! In one sense, we have to build them from scratch, because students have no programming knowledge to which we can connect. But they do have a lot of knowledge of the world, and they know something about computers and files as users, and we can sometimes get leverage from this understanding. So far, I remain of the view that we can build the concepts in many different orders. The context in which students learn guides the ordering of concepts, by making some concepts relatively more or less "fundamental". The media computation approach of Guzdial and Ericson has been a refreshing change for me -- the idea of a method "happened" naturally quite early, as a group of operations that we have applied repeatedly from Dr. Java's interactions pane. Growing ideas and programs this way lets students learn bottom-up but see useful ideas as soon as they become useful.

    I've spent a lot of time so far talking about the many different kinds of names that we use and define when thinking computationally. So much of what we do in computing is combining parts, abstracting from them an idea, and then giving the idea a name. We name values (constant), arbitrary or changing values (variable), kinds of values (type, class), processes (function, method)... Then we have arguments and parameters, which are special kinds of arbitrary values, and even files -- whose names are, in an important way, outside of our programs. I hope that my students are appreciating this Big Idea already.
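    A few lines of Java can put most of these kinds of names side by side. This is only an illustrative sketch; the class and all the names in it are invented, not drawn from the course:

    ```java
    public class Names {                        // a type name (class)
        static final double PI = 3.14159;       // a constant: a name for a fixed value

        // circleArea is a method name; radius is a parameter,
        // a name for an arbitrary value supplied by the caller
        static double circleArea(double radius) {
            double area = PI * radius * radius; // a variable: a name for a changing value
            return area;
        }

        public static void main(String[] args) {
            System.out.println(circleArea(2.0)); // 2.0 is an argument bound to radius
        }
    }
    ```

    Every name here is an abstraction of the same sort the paragraph describes: a part is built, an idea is extracted, and the idea gets a name.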

    And then there is all of the jargon that we computer folks use. I have to assume that my students don't know what any of that jargon means, which means that (1) I can't use much of it, for fear of making the class sound like a sea of Babel, and (2) I have to define what I use. Today, for example, I found myself wanting to say "hard-coded", as in a constant hard-coded into a method. I caught myself and tried to relate it to what we were doing, so that students would know what I meant, both now and later.

    I often speak with friends and colleagues who teach a lot of CS as trainers in industry. I wonder if they ever get a chance to teach a CS1 course or something like it. The experience is quite different for me from teaching even a new programming style to sophomores and juniors. There, I can take so much for granted, and focus on differences. But for my intro students the difference isn't between two somethings, but between something and nothing.

    However, I also think that we have overglamorized how difficult it is to learn to program. I am not saying that learning to program is easy; it is tough, with ideas and abstractions that go beyond what many students encounter. But I think that we sometimes lure ourselves into something of a Zeno's paradox: "This concept is so difficult to learn; let's break it down into parts..." Well, then that part is so difficult to learn that we break it down into parts. Do this recursively, ad infinitum, and soon we have made things more difficult than they really are -- and worse, we've made them incredibly boring and devoid of context. If we just work from a simple context, such as media computation, we can use the environment to guide us a bit, and when we reach a short leap, we make it, and trust our students to follow. Answer questions and provide support, but don't shy away from the idea.

    That's what I'm thinking this afternoon at least. Then again, it's only the end of our second week of classes!


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    August 18, 2006 4:49 PM

    More on "Agile Teaching"

    If you've read many of my entries on agile software development, you know that I run the risk of being one of those guys who see agile principles everywhere they look in the world. I've certainly explored the relationship between agile approaches and my running. I've even seen agile truths in a Bill Murray film.

    One way to do an analogy, and the people who use it, a disservice is to take it too far, to go beyond where it adds value to where it misleads. But sometimes when we see something everywhere, it's because it is everywhere, in some form. What's the pattern?

    I was again struck by the similarities between agile software development and teaching techniques while reading an article recently. Another department head in my college shared some of the papers she has been reading on how to improve the way we teach large, general-education courses. One of these papers is a product of the National Center for Academic Transformation, a project based on two ideas:

    • using scientific evidence from cognitive psychology and learning research to design instruction more effectively, and then
    • using information technology to implement instructional designs more efficiently and cost-effectively.
    One of the papers, Improving Quality and Reducing Cost: Designs for Effective Learning by Carol Twigg, motivated the ideas behind the project and described some of the hallmarks of courses redesigned according to them.

    The project currently focuses on the largest of courses in order to maximize its effect. According to Twigg, "just 25 courses generate about half of all student enrollments in community colleges and about a third of all enrollments in four-year institutions". This had never occurred to me but isn't too surprising, given that nearly all college students at every school take a common set of introductory courses in English, mathematics, science, and business. What did surprise me was the typical failure rates in these courses: 15% at research universities, 30-40% at comprehensive universities, and 50-60% at community colleges. Some of these failures are certainly the result of filtering out students who shouldn't be in college, but I have to think that a significant percentage of the failures are by people who have been poorly served by a lecture-only section of 100s of people.

    What does agile software development have to do with all this? In order to improve the quality of instruction (including the failure rates in these large and common courses) and reduce the cost of teaching them (in a climate of declining funding and rising expenses), this paper recommends that universities change the way they design and teach university courses -- in ways that echo how agile developers work, and using information technology to automate whatever automation can improve.

    One of the fundamental principles of this transformation is continuous assessment and feedback. Rather than testing students only with a midterm and a final, an instructor should keep in continuous touch with how students are comprehending the material. The traditional way to do this is to administer frequent quizzes, but that approach has as many minuses as plusses. Grading all those quizzes is a pain no sane instructor relishes, and you end up using a lot of class time taking quizzes.

    Technology can help here, if we think about automating continuous assessment and feedback. On-line quizzes can give students an opportunity to test their understanding frequently and receive feedback about what they know and don't know, and they can provide the instructor with feedback about both individuals and the group. Other technology can be more continuous yet, allowing instructors to quiz students "on the fly" and receive aggregated results immediately, a la Who Wants to be a Millionaire? Folks at my university have begun to use interactive feedback tools of this sort, but they haven't made their way into my department yet. Our most common immediate assessment-and-feedback tool is still the compiler.

    But the agile programmers -- and agile-thinking instructors -- among us know all about automating continuous assessment and feedback, and use more tools: IDEs that provide immediate compilation of code ... unit testing frameworks. Red bar, green bar! These tools help students know right where they are all the time, getting past some of the ticky-tack syntax issues that unnecessarily interfere with new students' progress. I think it's a huge win to let the IDE point out "you forgot a semicolon..." and "your code doesn't pass this test...".
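    The red-bar/green-bar idea doesn't even require a framework. Here is a minimal hand-rolled sketch in Java of the kind of immediate feedback a unit test gives; the check helper and the square method under test are invented for illustration:

    ```java
    public class MiniTest {
        static int passed = 0, failed = 0;

        // Report each check immediately, so the student always knows where she stands.
        static void check(String label, boolean ok) {
            if (ok) { passed++; System.out.println("GREEN: " + label); }
            else    { failed++; System.out.println("RED:   " + label); }
        }

        // A bit of student code under test.
        static int square(int n) { return n * n; }

        public static void main(String[] args) {
            check("square(3) == 9",  square(3) == 9);
            check("square(-2) == 4", square(-2) == 4);
            System.out.println(passed + " passed, " + failed + " failed");
        }
    }
    ```

    A real course would use a framework such as JUnit for this, but even a sketch this small shows the feedback loop: run, read the bar, fix, run again.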

    There is a risk in allowing students to develop a "do it 'til you get it right" mindset, but CS folks have already had to deal with this with the easy availability of compilers. Two years ago -- almost exactly! -- I wrote about this issue of multiple iterations and care for programs. Many professors still don't like that students can and do go through many compile iterations. Since that time, I've become even further convinced that students should do this, but learn how to do it right. People master skills by careful repetition, so we need to let them use repetition -- carefully, thoughtfully. In some ways, this seems like a statistical problem. To succeed by making random tries but never learning, they need to have time for a lot of tries. If they have that much time, then they have too little to do!

    The rest of the paper outlines other agile-sounding techniques: increased student interaction (pair programming), continuous support (the coach, and access to the working system), and sharing resources (collective ownership). But the key is feedback, and the use of automation to do this as cleanly and painlessly as possible.

    Ultimately, I just like the mentality these folks have about teaching. Twigg quotes a math prof involved with the project as saying, "Students learn math by doing math, not by listening to somebody talking about doing math." Of course! But you'd never know it by looking at most university courses.

    This sort of re-design requires a big change in how instructors behave, too, and that change is harder to effect than the one in students. Instructors are no longer primarily responsible for introducing basic material. (Isn't that what reading is for?) Instead, they need to be able to review what students have done and expand on it in real-time, extending what students do and leading them to new ideas and to integration of ideas. That's intimidating to most of us and so requires a whole new brain.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    August 12, 2006 2:21 PM

    Grades and Verticality

    The next book on my nightstand is Talking About Leaving: Why Undergraduates Leave the Sciences, by Elaine Seymour and Nancy Hewitt. The dean and department heads in my college have had an ongoing discussion of national trends in university enrollment in mathematics and the sciences, because all of our departments with the exception of biology have seen their number of majors drop in recent years. If we can understand the issue better, perhaps we can address it. Are students losing interest in the sciences in college? In high school? In grade school? Why? One of the common themes on this blog for the last year or so has been the apparently declining interest in CS among students both at the university and in the K-12 system. At this point, all I know is that this is a complex problem with a lot of different components. Figuring out which components play the largest role, and which ones we as university faculty and as citizens can affect, is the challenge.

    My net acquaintance Chad Orzel, a physicist, recently commented on why they're leaving, drawing his inspiration from an Inside Higher Ed piece of the same name. That article offers four explanations for why students leave the physical sciences and engineering disciplines: lower GPAs, weed-out courses, large and impersonal sections, and "vertical curricula". The last of these refers to the fact that students in our disciplines often have to "slog" through a bunch of introductory skills courses before they are ready to do the interesting work of science.

    While Chad teaches at a small private college, my experience here at a mid-size public university seems to match with his. We don't have many large sections in computer science at UNI. For a couple of years, my department experimented with 100-person CS1 sections as a way to leverage faculty expertise in a particular language against our large enrollments. (Irony!) But for the most part our sections have always been 35 or less. I once taught a 53-person section of our third course (at that time, object-oriented programming), but that was an aberration. Our students generally have small sections with plenty of chance to work closely with the tenure-track faculty who teach them.

    We've never had a weed-out course, at least not intentionally. Many of our students take Calculus and may view that as a weeder, but my impression from talking to our students is that this is nothing like the brutal weed-out courses used in many programs to get enrollments down to a manageable size of sufficient quality. These days, the closest thing we have to a weed-out course is our current third course, Data Structures. It certainly acts as a gatekeeper, but it's mostly a matter of programming practice; students who apply themselves to the expectations of the instructor ought to be able to succeed.

The other two issues are problems for us. The average GPA of a computer science student is almost surely well below the university average. I haven't seen a list of average GPAs sorted by department in many years, but the last few times I did CS and economics seemed to be jostling for the bottom. These are not disciplines that attract lots and lots of weak students, so grading practices in the departments must play a big role. As the Inside Higher Ed article points out, "this culture of grading is common in the natural sciences and the more quantitative social sciences at most universities". I don't doubt that many students are dissuaded from pursuing a CS major by even a B in an intro course. Heck, they get As in their other courses, so maybe they are better suited for those majors? And even the ones who realize that this is an illogical deduction may figure that their lives will simply be easier with a different major.

    I won't speak much of the other problem area for us, because I've written about it a lot recently. I've never used the word "vertical" to describe our problem of boring intro courses that hide or kill the joy of doing computing before students ever get to see it, but I've certainly written about the issue. Any student who graduates high school with the ability to read is ready for a major in history or communication; the same student probably needs to learn a programming language, learn how to write code, and figure out a lot of new terminology before being ready to "go deep" in CS. I think we can do better, but figuring out how is a challenge.

    I must point out, though, that the sciences are not alone in the problem of a vertical curriculum. As an undergraduate, I double-majored in CS and accounting. When I switched from architecture to CS, I knew that CS was what I wanted to do, but my parents encouraged me to take a "practical" second major as insurance. I actually liked accounting just fine, but only because I saw past all of the bookkeeping. It wasn't until I got to managerial accounting as a junior and auditing as a senior that I got to the intellectually interesting part of accounting, how one models an organization in terms of its financial system in order to understand how to make it stronger. Before that came two years of what was, to me, rather dull bookkeeping -- learning the "basics" so that we could get to the professional activities. I often tell students today that accounting is more interesting than it probably seems for the first one, two, or three years.

    Computer science may not have moved much faster back then. I took a one-quarter CS 1 to learn how to program (in Fortran), a one-quarter data structures course, and a couple of courses in assembly language, job control language, and systems programming, but within three or four quarters I was taking courses in upper-division content such as databases, operating systems, and programming languages -- all of which seemed like the Real Thing.

    One final note. I actually read the articles mentioned at the beginning of this essay after following a link from another piece by Chad, called Science Is Not a Path to Riches. In it, Chad says:

    A career in research science is not a path to riches, or even stable employment. Anyone who thinks so is sadly deluded, and if sure promotion and a fat paycheck are your primary goal (and you're good at math), you should become an actuary or an accountant or something in that vein. A career in research science can be very rewarding, but the rewards are not necessarily financial (though I hasten to add, I'm not making a bad living, either).

    This is one place where we differ from physicists and chemists. By and large, CS graduates do get good jobs. Even in times of economic downturn, most of our grads do pretty well finding and keeping jobs that pay above average for where they live. Our department is willing to advertise this when we can. We don't want interested kids to turn away because they think they can't get a job, because all the good jobs are going to India.

    Even still, I am reluctant to over-emphasize the prospect of financial reward. For one thing, as the mutual fund companies all have to tell us, "past performance is no guarantee of future results". But more importantly, intrinsic interest matters a lot, too, perhaps more so than extrinsic financial reward, when it comes to finding a major and career path that works. I'd also like to attract kids because CS is fun, exciting, and worth doing. That's where the real problem of 'verticality' comes in. We don't want kids who might be interested to turn away because the discipline looks like a boring grind.

    I hope to learn more about this systemic problem from the empirical data presented in Talking About Leaving, and use that to figure out how we can do better.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    July 26, 2006 2:27 PM

    A Classic David Gries Article on Intro Courses

    Summer is a great time to read old papers, though this summer has been too light on free reading time for my tastes. This morning, I read David Gries's 1974 SIGCSE paper What Should We Teach in an Introductory Programming Course?, which came up in a SIGCSE mailing list discussion earlier this year. Gries has long been involved in the effort to improve computer science instruction, from at least his 1973 PL/I-based intro text with Conway, An Introduction to Programming up to his recent multimedia effort Program Live!, with his son, Paul. In part because his work tends to be more formalist than my own, I usually learn a lot from his writing.

    This paper focuses on two of what Gries thinks are the three essential aspects of an intro course: how to solve problems and how to describe solutions algorithmically. His discussion of general problem solving was the reason the paper came up in the SIGCSE discussion, which considered how best to teach students "general problem-solving skills". Gries notes that in disciplines such as philosophy one studies problem-solving methods as philosophical stances, but not as processes to try in practice, and that in subjects such as mathematics one learns how to solve specific classes of problems by dint of repetition. By exposing students to several classes of problem, teachers hope that they generalize their learning into higher-order skills. This is true in many disciplines.

    But we in computer science face a more challenging problem. We strive to teach our students how to write programs that solve any sort of problem, whether from business or science or education. In order to program well across all the disciplines, our students must learn more general problem-solving skills.

That said, I do not think that we can teach general problem-solving skills in our intro course, because I do not think the brain works that way. A lot of evidence from cognitive psychology shows that humans learn and tend to operate in specific contexts, and expertise transfers across boundaries only with great attention and effort. The real question is not how we can teach general problem-solving skills in our intro course, but rather how we can help students develop general problem-solving skills over the course of our curriculum. My current belief is that we should teach programming in context, while opportunistically pointing out the patterns that students see and will see again later. Pick a context that students care about, so that they will have plenty of their own motivation to succeed. Then do it again, and again. By making explicit the more general patterns that students see, I think that we can do better than most CS and math curricula currently do. We would not be teaching general problem-solving skills in any course so much as creating an environment in which students can maximize their likelihood of "getting it". Our courses should not be weed-out courses, because I think most students can get it -- and we need them to, whether as CS majors or as non-majors prepared to participate in a technological society.

    When Gries looked at what people were teaching in intro CS courses back then, he found them doing what many of our courses do now: describing a set of tools available in a programming language, showing students a few examples, and then sending them off to write programs. Not much different than our math brethren teaching differentiation or integration. Gries exposes the folly in this approach with a little analogy:

Suppose you attend a course in cabinet making. The instructor briefly shows you a saw, a plane, a hammer, and a few other tools, letting you use each one for a few minutes. He next shows you a beautifully-finished cabinet. Finally, he tells you to design and build your own cabinet and bring him the finished product in a few weeks.

    You would think he was crazy!

    (For some reason, this passage reminded me of a witty student evaluation that Rich Pattis shared in his SIGCSE keynote address.)

    Gries offers a few general principles from the likes of Descartes, Polya, and Dijkstra and suggests that we might teach them to students, that they might use the principles to help themselves organize their thoughts in the midst of writing a complex program. I suspect that their greatest value is in helping instructors organize their thoughts and keep the ultimate goal of the course in mind. For example, while reading this section, I was struck by Polya's fourth phase of solving problems: "Look back". We instructors must remember to give our students both the opportunity to look back at what they've done and think about their process and product, and the time to consolidate their learning. So often, we are driven by the need to cover more, more, more material, and the result is a treadmill from which students fall at the end of the course, exhausted and not quite sure what all just happened.

    Gries then offers a language for describing algorithms, very much in sync with the Structured Programming movement of the 1970s. My primary reaction to this discussion was "where are the higher-level patterns?" If all we teach students are basic statements, procedure definitions and calls, and control structures, we are operating only barely above the level of the programming language -- and thus leaving students to discover on their own fundamental patterns like Guarded Linear Search.
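For readers who haven't met the pattern, Guarded Linear Search is the kind of idiom I have in mind: a single loop condition guards against both running off the end of the collection and walking past the target, so the loop body stays trivially simple. A minimal sketch in Python (the function name and details are mine, for illustration only):

```python
def guarded_linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    i = 0
    # The guard: advance only while we are in bounds AND haven't found it.
    while i < len(items) and items[i] != target:
        i += 1
    # Exactly one test after the loop tells us which guard failed.
    return i if i < len(items) else -1
```

Novices left to discover this on their own often write two exit tests, or index past the end of the collection; naming the pattern lets us teach it once.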

    What language should we teach in the intro course? Gries attacks this question with gusto. As I've written before, language sniping is a guilty pleasure of mine, and Gries's comments are delightful. Indeed, this whole paper is written in a saucy style not often seen in academic papers, then and especially now. I wonder if most SIGCSE referees would let such style pass these days?

First, Gries reminds us that in an intro course the programming language is only a vehicle for teaching the ideas we think are important. It should be as close as possible to the way we want students to think about programs, but also "simple, elegant, and modular, so that features and concepts not yet taught won't get in the way." (As a smug language weenie, I am already thinking Smalltalk or Scheme...) But we usually teach the wrong language:

    The language taught is often influenced by people outside the computer science profession, even though their opinions are not educated enough to deserve recognition.

    At the time he wrote the paper, Gries would like to have taught Pascal, BLISS, or a variant of Algol, but found that most departments taught Fortran, BASIC, or PL/I. Gries minces no words. On Fortran:

Fortran is out of date and shouldn't be used unless there is absolutely nothing else available. If this is the case, use it under protest and constantly bombard the manufacturers or other authorities with complaints, suggesting they make available a more contemporary language.

    I learned Fortran in my CS 1 course back in 1983!

    On Basic:

    [It] should never have come into existence. When it was contemplated, its designers should have done their research to see what programming and programming languages are all about before plunging in.

    I learned Basic as my first programming language in high school, in 1980!

    But then Gries expresses one of the tenets of his approach that I disagree with:

    I have doubts about teaching students to think "on-line"; algorithms should be designed and written slowly and quietly at one's desk. Only when assured of correctness is it time to go to the computer and test the algorithm on-line.

    First, notice the older sense of "on-line". And then ask yourself: Is this how you program, or how you want to program? I know I'm not smart enough to get any but the most trivial programs correct without testing them on-line.

Of the choices realistically available, Gries decided that PL/I was the best alternative and so wrote his text with Conway. I used the 1979 version of this text as the PL/I reference in my data structures course back in 1983. (It was the language companion to the language-independent Standish text I so loved.)

    Even though Gries grudgingly adopted PL/I, he wasn't happy:

    What's wrong with PL/I? Its syntax is enough to offend anyone who has studied English grammar; its data structure facilities ... could have been less clumsy ...; it is not modular ...; its astonishment factor is much too high (e.g., what is 25 + 1/3 ?); ... and so on.

    But with the right subset of the language, Gries felt he could teach structured programming effectively. That is just the sort of compromise that C++ advocates made in the 1990s. I willingly accepted C++ as a CS 1 language myself, though it didn't take long for me to realize that this was a mistake. By comparison, PL/I was a relatively nice language for novices.

    The last section of Gries's paper turns to the topic of program documentation. His central tenet will sound familiar to agile programmers of the new century:

    Program documentation should be written while the program is being written, if not before, and should be used by the programmer in proving correctness and in checking his program out.

    This is a fine argument for test-driven development! This is a common theme among formalist computer scientists, and one I've written about with regard to Edsger Dijkstra. The place where the agile folks diverge from folks like Gries and Dijkstra is in their strong conviction that we should use code -- executable tests -- to document the intended behavior of the system. If the documentation is so valuable, why not write it in a form that supports automated application and continuous feedback? Sitting quietly at one's desk and writing an outline of the intended algorithm by candlelight seems not only quaint but also sub-optimal.
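To make the agile position concrete, here is a small hypothetical example of my own, not from Gries: the "documentation" of a median function is a handful of executable assertions, written before or alongside the code and re-run after every change.

```python
def median(values):
    """Median of a non-empty list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    # Even length: average the two middle values.
    return (ordered[mid - 1] + ordered[mid]) / 2

def test_median():
    # Executable documentation: these statements say what median
    # is supposed to do, and the computer checks them for us.
    assert median([3]) == 3
    assert median([1, 3, 2]) == 2
    assert median([1, 2, 3, 4]) == 2.5
```

An outline at the desk can fall out of sync with the program; these tests cannot, because they fail loudly the moment they do.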

    The outline form that Gries recommends is certainly more palatable than other alternatives, such as flowcharts. I think that the modern rendition of this approach is Matthias Felleisen's design recipe approach, described most completely in How to Design Programs. I have great respect for this work and am always looking for ways to use its ideas to improve how I teach.

    Gries concludes his paper with good advice on "practicing what you teach" for any instructor:

    The students will easily sense whether you believe in what you tell them, and whether you yourself practice what you teach.

He wrote this at a time when many CS instructors needed to be retrained to teach the new orthodoxy of structured programming. It has been equally true over the last ten years or so, with the move to object-oriented programming and then agile approaches. One of the reasons I dislike textbooks these days is that too often I get the sense that the authors don't really believe what they are saying or, if they do, that the belief is more a skin they've put on than a deep grokking of the ideas. Gries advises ways in which to deepen one's understanding, including the surprisingly surprising "write several programs, both large and small, using the tools and techniques advocated". Why should this be surprising to anyone? I don't know, but I wonder how many folks who now teach an OO intro course have ever written and lived inside a large object-oriented program.

    The end of this paper supports a claim I made about academic conservatism a couple of years ago, and brings us back to Dijkstra again. First he expresses hope:

    You would think that the University, where one searches for truth and knowledge, would be the place for innovative thinking, for people are tuned to new and better ideas.

    ... and then he quotes Dan McCracken, who had surveyed academics about their intro courses:

          "Nobody would claim that Fortran is ideal for anything, from teachability, to understandability of finished programs, to extensibility. Yet it is being used by a whopping 70% of the students covered by the survey, and the consensus among the university people is that nothing is going to change much anytime soon."

    Does this sound like educators who are committed to teaching concepts, to teaching people what they need to know to prepare for the future?

    As noted above, my alma mater was still teaching Fortran in CS 1 into the 1980s. Gries is hard on his colleagues, as I have been at times, but the truth is that changing how one programs and teaches is hard to do. And he and I have been guilty of making PL/I-like compromises, too, as we try to introduce our own ideas. The lesson here is not one of blame but one of continually reminding ourselves of what matters and trying to make those things happen in our courses.

    Reading this paper was a nice diversion from the other duties I've been facing lately, and a nice way to start preparing more for my fall CS 1 course.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    July 10, 2006 5:33 PM

    A Grading Experiment?

    While cleaning up one of my current stuff folders (now only 103 files in 1.7 Mb, positively tiny!), I ran across this quote from Ernie at 3D Pancakes:

    That's one of the reasons for my "I don't know" policy -- an answer of "I don't know" on any homework or exam question is worth 25% partial credit. A blank response doesn't count; to get the partial credit, you must explicitly acknowledge your ignorance. (The other reason, of course, is that it cuts way back on random nonsense maybe-I'll-get-pity-credit-for-stumbling-on-the-right-keywords answers, which makes grading much easier.)

    Excellent idea... Anyone who has ever graded exams with open-ended questions knows just what Ernie is talking about. Some students will write anything down to bluff their way to a few points, and the time the grader spends seeing through the smoke (or not!) is much better spent on almost anything else. The more open-ended the problem or question, the more likely that the student's tangential dump will look just enough like a real answer that it requires extra attention.

    Before I would use this strategy, I think I would add a phrase to the required answer: "... but I will by Friday." This transforms the answer from just an admission into a promise to learn. This turns what can be a dispiriting experience -- a complete blank on an exam question -- into a chance to get better.

    Of course, with that phrase, I would reserve the right to ask the student for the answer later, by e-mail. This adds an element of accountability to the equation and might encourage students to take their admission more seriously. (With the right set of students, especially in a junior/senior course, I might want the right to ask for the answer in class!)

    Hmmm...

    I'm guessing that the students in my fall course would probably prefer that I not browse my stuff folders, if this is what happens when I do.


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    July 08, 2006 3:58 PM

    Driving Students Away

Back in January, I wrote about making things worse in the introductory course, which had been triggered by an article at Uncertain Principles. The world is circling back on itself, because I find myself eager to write about how we drive students away from our intro courses, again triggered by another article at Uncertain Principles. This time, Chad's article was itself triggered by another physicist's article on driving students away. Perhaps the length of the chain grows each time it comes around...

According to these physicists, one of the problems with the intro course in physics is that it is too much like high school physics, which bores the better students to the point that they lose interest. We don't face that issue in CS much these days, at least in my neck of the woods, because so few of our freshmen enter the program with any formal CS or programming education. I'm not a fan of the approaches suggested to keep the attention of these well-prepared students (a byzantine homework policy, lots of quizzes) because I think that repeating material isn't the real problem. And these approaches make the real problem worse.

    The real problem they describe is one with which we are familiar: students "lose sight of the fun and sense of wonder that are at the heart of the most successful scientific careers". The intro physics course...

... covers 100's of years of physics in one year. We rarely spend more than a lecture on a single topic; there is little time for fun. And if we want to make room for something like that we usually have to squeeze out some other topic. Whoosh!

    Chad says that this problem also affects better students disproportionately, because they "have the preparation to be able to handle something more interesting, if we could hold their attention".

    I think most students can handle something more interesting. They all deserve something more interesting than we usually give them, too.

    And I don't think that the answer involves "more content". Whenever I talk to scientists about the challenges of teaching, the conversation always seems to turn to how much content we have to deliver. This attitude seems wrongheaded to me when taken very far. It's especially dangerous in an introductory course, where novices can easily drown in syntax and detail -- and lose sight of what it is like to be a scientist, or an engineer. Pouring on more content, even when the audience is honors students, almost always results in suboptimal learning, because the course tends to become focused on data rather than ideas.

In closing, I did enjoy seeing that academic physicists are now experimenting with courses about something more than the formulas of physics. One of the commenters on the Uncertain Principles article notes that he is tweaking a new course design around the question, "How old is the universe?" He also mentions one of the obstacles to making this kind of change: students actually expect a memorization-driven course, because that's what they've learned from their past experiences. This is a problem that really does affect better students differently, because they have mastered the old way of doing things! As a result, some of them will resent a new kind of course. My experience, though, is that you just have to stick to your approach through some rough patches early; nearly all of these students will eventually come around and appreciate the idea- and practice-driven approach even more once they adapt to the change. Remember, adaptation to change takes time, even for those eager to change...


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    June 29, 2006 11:47 AM

    Buried Treasures

    I finally got around to reading Glenn Vanderburg's Buried Treasure. The theme of his article is "our present is our past, and there's more past in our future". Among his conclusions, Glenn offers some wisdom about programming languages and software education that we all should keep in mind:

    What I've concluded is that you can't keep a weak team out of trouble by limiting the power of their tools. The way forward is not figuring out how to achieve acceptable results with weak teams; rather, it's understanding how to build strong teams and how to train programmers to be part of such teams.

    Let's stop telling ourselves that potential software developers can't learn to use powerful tools and instead work to figure out how to help them learn. Besides, there is a lot more fun in using powerful tools.

    Glenn closes his article with a nascent thought of his, an example of how knowing the breadth of our discipline and its history might help a programmer solve a thorny problem. His current thorny problem involves database migration in Rails, and how that interacts with version control. We usually think of version control as tracking static snapshots of a system, but a database migration subsystem is itself a tracking of snapshots of an evolving database schema -- so your version control system ends up tracking snapshots of what is in effect a little version control system! Glenn figures that maybe he can learn something about solving this problem from Smalltalkers, who deal with this sort of this thing all the time -- because their programs are themselves persistent objects in an evolving image. If he didn't know anything about Smalltalk or the history of programming languages, he might have missed a useful connection.

    Speaking of Smalltalk, veteran Smalltalker Blaine Buxton wrote recently on a theme you've seen here: better examples. All I can say is what Blaine himself might say, Rock on, Blaine! I think I've found a textbook for my CS 1 course this fall that will help my students see lots of more interesting examples than "Hello, world!" and Fibonacci numbers.

    That said, my Inner Geek thoroughly enjoyed a little Scheme programming episode motivated by one of the comments on this article, which taught me about a cool feature of Fibonacci numbers:

    Fib(2k) = Fib(k) * (Fib(k+1) + Fib(k-1))

This property lends itself to computing Fib very efficiently using binary decomposition and memoizing (caching previously computed values). Great fun to watch an interminably slow function become a brisk sprinter!
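My little episode was in Scheme, but the same idea fits in a few lines of Python (a sketch of my own, not the original code): memoize the function and apply the identity whenever the argument is even, so each even step halves the problem.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Fibonacci via Fib(2k) = Fib(k) * (Fib(k+1) + Fib(k-1))."""
    if n < 2:
        return n
    if n % 2 == 0 and n >= 4:
        k = n // 2
        # Binary decomposition: one even step halves the argument.
        return fib(k) * (fib(k + 1) + fib(k - 1))
    # Small or odd n: fall back on the defining recurrence.
    return fib(n - 1) + fib(n - 2)
```

Without the cache and the identity, the naive recursion takes exponential time; with them, fib(1000) returns in a blink.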

    As the commenter writes, simple problems often hide gems of this sort. The example is still artificial, but it gives us a cool way to learn some neat ideas. When used tactically and sparingly, toy examples open interesting doors.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    June 18, 2006 10:20 AM

    Programming as Program Transformation

    I see that Ralph Johnson is giving the Friday keynote talk at ECOOP 2006 this year. His talk is called "The Closing of the Frontier", and the abstract shows that it will relate to an idea that Ralph has blogged about before: software development is program transformation. This is a powerful idea that has emerged in our industry over the last decade or so, and I think that there are a lot of computer scientists who have to learn it yet. I have CS colleagues who argue that most programs are developed essentially from scratch, or at least that the skills our students most need to learn most closely relate to the ability to develop from scratch.

    I'm a big believer in learning "basic" programming skills (most recently discussed here), but I'd like for my students to learn many different ways to think about problems and solutions. It's essential they learn that, in a great many contexts, "Although user requirements are important, version N+1 depends more on version N than it does on the latest requests from the users."

Seeing Ralph's abstract brought to mind a paper I read and blogged about a few months back, Rich Pattis's "A Philosophy and Example of CS-1 Programming Projects". That paper suggested that we teach students to reduce program specs to a minimum and then evolve successive versions of a program, converging on the program that satisfies all of the requirements. Agile programming for CS1 back in 1990 -- and a great implementation of the notion that software development is program transformation.

    I hope to make this idea a cornerstone of my CS1 course this fall, with as little jargon and philosophizing as possible. If I can help students to develop good habits of programming, then their thoughts and minds will follow. And this mindset helps prepare students for a host of powerful ideas that they will encounter in later courses, including programming languages, compilers, theory, and software verification and validation.

    I also wish that I could attend ECOOP this year!


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    June 14, 2006 3:49 PM

    Picking a Textbook for Fall

    I've come to realize something while preparing for my fall CS1 course.

    I don't like textbooks.

    That's what some people call a "sweeping generalization", but the exceptions are so few that I'm happy to make one.

    For one thing, textbooks these days are expensive. I sympathize with the plight of authors, most of whom put in many more hours than book sales will ever pay them for. I even sympathize with the publishers and bookstores, who find themselves living in a world with an increasingly frictionless used-book market, low-cost Internet-based dealers, and overseas sellers such as Amazon India. But none of this sympathy changes the fact that $100 or more for a one-semester textbook -- one that was written specifically not to serve as a useful reference book for later -- is a lot. Textbook prices probably have not risen any faster than the rate of tuition and room and board, but still.

    Price isn't my real problem. My real problem is that I do not like the books themselves. I want to teach my course, and more and more the books just seem to get in the way. I don't like the style of the code shown to students. I don't like many of the design ideas they show students. I don't like all the extra words.

I suppose that some may say these complaints say more about me than about the books, and that would be partly true. I have some specific ideas about how students should learn to program and think like a computer scientist, and it's not surprising that there aren't many books that fit my idiosyncrasy. Sticking to the textbook may have its value, but it is hard to do when I am unhappy at the thought of turning another page.

    But this is not just me. By and large, these books aren't about anything. They are about Java or C++ or Ada. Sure, they may be about how to develop software, too, but that's an inward-looking something. It's only interesting if you are already interested in the technical trivia of our discipline.

    This issue seems more acute for CS 1, for a couple of reasons. First, one of the goals of that course is to teach students how to program so that they can use that skill in later courses, and so the books tend toward teaching the language. More important is the demand side of the equation, where the stakes are so high. I can usually live with one of the standard algorithms books or compilers books, if it gives students a reasonable point of view and me the freedom to do my own thing. In those cases, the book is almost a bonus for the students. (Of course, then the price of the book becomes more distasteful to students!)

    Why use a text at all? For some courses, I reach a point of not requiring a book. Over the last decade or more, I have evolved a way of teaching Programming Languages that no longer requires the textbook with which I started. (The textbook also evolved away from our course.) Now, I require only The Little Schemer, which makes a fun, small, relatively inexpensive contribution to how my students learn functional programming. After a few times teaching Algorithms, I am getting close to not needing a textbook in that course, either.

    I haven't taught CS 1 in a decade, so the support of a strong text would be useful. Besides, I think that most beginning students find comfort at least occasionally in a text, as something to read when today's lecture just didn't click, something to define vocabulary and give examples.

    Introduction to Computing ... A Multimedia Approach

    So, what was the verdict? After repressing my true desires for a few months in the putative interest of political harmony within the department, yesterday I finally threw off my shackles and chose Guzdial and Ericson's Introduction to Computing and Programming with Java: A Multimedia Approach. It is relatively small and straightforward, though still a bit expensive -- ~ $90. But I think it will "stay out of my way" in the best sense, teaching programming and computing through concrete tasks that give students a chance to see and learn abstractions. Perhaps most important, it is about something, a something that students may actually care about. Students may even want to program. This book passes what I call the Mark Jacobson Test, after a colleague who is a big believer in motivation and fun in learning: a student's roommate might look over her shoulder one night while she's doing some programming and say, "Hey, that looks cool. Whatcha doing?"

    Let's see how it goes.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    June 11, 2006 2:05 PM

    Pleasantly Surprising Interconnections

    The most recent issue of the Ballast Quarterly Review, on which I've commented before, came out a month or so ago. I had set it aside for the right time to read and only came back around to it yesterday. Once again, I am pleasantly surprised by the interconnectedness of the world.

    In this issue, editor Roy Behrens reviews John Willats's book Making Sense Of Children's Drawings. (The review is available on-line at Leonardo On-Line.) Some researchers have claimed that children draw what they know and that adults draw what they see, and that what we adults think we see interferes with our ability to create authentic art. Willats presents evidence that young children draw what they see, too, but that at that stage of neural development they see in an object-centered manner, not a viewer-centered manner. It is this subjectivity of perspective that accounts for the freedom children have in creating, not their bypassing of vision.

    The surprising connection for me came in the form of David Marr. A vision researcher at MIT, Marr had proposed the notion that we "see by processing phenomena in two very distinct ways", which he termed viewer-centered and object-centered. Our visual system gathers data in a viewer-centered way and then computes from that data more objective descriptions from which we can reason.

    Where's the connection to computer science and my experience? Marr also wrote one of the seminal papers in my development as an artificial intelligence researcher, his "Artificial Intelligence: A Personal View". You can find this paper as Chapter 4 in John Haugeland's well-known collection Mind Design and on-line as a PDF at Elsevier.

    In this paper, Marr suggested that the human brain may permit "no general theories except ones so unspecific as to have only descriptive and not predictive powers". This is, of course, not a pleasant prospect for a scientist who wishes to understand the mind, as it limits the advance of science as a method. To the extent that the human mind is our best existence proof of intelligence, such a limitation would also impinge on the field of artificial intelligence.

    I was greatly influenced by Marr's response to this possibility. He argued strongly that we should not settle for incomplete theories at the implementation level of intelligence, such as neural network theory, and should instead strive to develop theories that operate at the computational and algorithmic levels. A theory at the computational level captures the insight into the nature of the information processing problem being addressed, and a theory at the algorithmic level captures insight into the different forms that solutions to this information processing problem can take. Marr's argument served as an inspiration for the work of the knowledge-based systems lab in which I did my graduate work, founded on the earlier work on the generic task model of Chandrasekaran.

    Though I don't do research in that area any more, Marr's ideas still guide how I think about problems, solutions, and implementations. What a refreshing reminder of Marr to encounter in light reading over the weekend.

    Behrens was likely motivated to review Willats's book for the potential effect that his theories might have on the "day-to-day practice of teaching art". As you might guess, I am now left to wonder what the implications might be for teaching children and adults to write programs. Direct visual perception has less to do with the programs an adult writes, given the cultural context and levels of abstraction that our minds impose on problems, but children may be able to connect more closely with the programs they write if we place them in environments that get out of the way of their object-centered view of the world.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

    June 10, 2006 3:29 PM

    Students, Faculty, and the Internet Age

    I've been meaning to write all week, but it turned out to be busy. First, my wife and daughters returned from Italy, which meant plenty of opportunity for family time. Then, I spent much of my office week writing content for our new department website. We were due for a change after many years of the same look, and we'd like to use the web as a part of attracting new students and faculty. The new site is very much an early first release, in the agile development sense, because I still have a lot of work to do. But it fills some of our needs well enough now, and I can use bits and pieces of time this summer to augment the site. My blogging urge was most satisfied this week by the material I assembled and wrote for the prospective students section of the site. (Thanks to the Lord of the Webs for his design efforts on the system.)

    Werner Vogels

    I did get a chance to thumb through the May issue of ACM Queue magazine, where I read with some interest the interview with Werner Vogels, CTO of Amazon. Only recently I had been discussing Vogels as a potential speaker for OOPSLA this or some year soon. I've read enough of Vogels's blog to know that he has interesting things to say.

    At the end of the interview, Vogels comments on recruiting students and more generally on the relationship of today's frontier IT firm to academia. First, on what kind of person Amazon seeks:

    The Amazon development environment requires engineers and architects to be very independent creative thinkers. We are building things that nobody else has done before, so you need to be able to think outside the box. You need to have a strong sense of ownership, because in the small teams in which you will work at Amazon, your colleagues will count on you to pull your weight -- especially when it comes to operating the service that you have built. Can you take responsibility for making this the best it can be?

    Many students these days hear so much about teamwork and "people" skills that they sometimes forget that every team member has to be able to contribute. No one wants a teammate who can't produce. Vogels stresses this upfront. To be able to contribute effectively, each of us needs to develop a set of skills that we can use right now, as well as the ability to pick up new skills with some facility.

    I'd apply the same advice to another part of Vogels's answer. In order to "think outside the box", you have to start with a box.

    Vogels then goes on to emphasize how important it is for candidates to "think the right way about customers and technology. Technology is useless if not used for the greater good of serving the customer." Sometimes, I think that cutting edge companies have an easier time cultivating this mindset than more mundane IT companies. A company selling a new kind of technology or lifestyle has to develop its customer base, and so thinks a lot about customers. It will be interesting to see how companies like Yahoo!, Amazon, and Google change as they make the transition into the established, mainstream companies of 2020.

    On the relationship between academia and industry, Vogels says that faculty and Ph.D. students need to get out into industry in order to come into contact with "the very exciting decentralized computing work that has rocked the operating systems and distributed systems world in the past few years". Academics have always sought access to data sets large enough for them to test their theories. This era of open source and open APIs has created a lot of new opportunities for research, but open data would do even more. Of course, the data is the real asset that the big internet companies hold, so it won't be open in the same way for a while. Internships and sabbaticals are the best avenue open for academics interested in this kind of research these days.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    June 01, 2006 9:44 AM

    Programming as Discovery and Expression

    The RPG Dreamsongs quill

    Recently I pointed you to Dick Gabriel's The Art of Lisp & Writing, which I found when I looked for a Gabriel essay that discussed triggers and Richard Hugo. I must confess that I recommended Dick's essay despite never having read it; then again, I've never been disappointed by one of his essays and figured you wouldn't be, either.

    I read the essay over tortellini last night, and I wasn't disappointed. I learned from his discussion of the inextricable partners in creation, discovery and presentation. I learned about mapmakers and how their job is a lot like an engineer's -- and a lot like a writer's.

    Most of exploration is in the nature of the locally expected: What is on the other side of that hill is likely to be a lot like what's on this side. Only occasionally is the explorer taken totally by surprise, and it is for these times that many explorers live. Similarly for writers: What a writer thinks up in the next minute is likely to be a lot like what is being thought this minute -- but not always: Sometimes an idea so initially apparently unrelated pops up that the writer is as surprised as anyone. And that's why writers write.

    As anyone who has ever written a program worth writing will tell you, that is also why programmers program. But then that is Dick's point. Further, he reminds us why languages such as Lisp and Smalltalk never seem to die: because programmers want them, need them.

    Gabriel has been writing about programming-and-writing for many years now, and I think that his metaphor can help us to understand our discipline better. For example, by explaining writing as "two acts put together: the act of discovery and the act of perfecting the presentation", the boundaries of which blur for each writer and for each work, we see in relief one way in which "software engineering" and the methodologists who drive it have gone astray. I love how Dick disdains terms such as "software developer", "software design and implementation". For him, it's all programming, and to call it something else simply obscures a lot of what makes programming programming in the first place.

    Reading this essay crystallized in my mind another reason that I think Java, Ada, and C++ are not the best (or even okay) choices for CS 1: They are not languages for discovery. They are not languages that encourage play, trying to solve a hard problem and coming to understand the problem in the process of writing the program that solves it. That's the great joy of programming, and it is exactly what novice programmers need to experience. To do so, they need a language that lets them -- helps them? -- both to discover and to express. Java, Ada, and C++ are Serious Languages that optimize on presentation. That's not what novice programmers need, and probably not what the pros need, either.

    This essay also may explain the recent rise of Ruby as a Programmer's Programming Language. It is a language for both discovery and expression.

    As usual, Gabriel has explored a new bit of landscape for me, discovered something of value, and given us a map of the land.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    May 25, 2006 10:12 AM

    Dumbing Down Recipes

    As a part of seeing my wife and daughters off to Italy, I cooked a few special meals for them on Sunday and Monday. One friend suggested that I needn't have bothered, because they will encounter much better food in Italy, but I think that an unusual meal prepared by Dad is still a nice treat -- and my wife loved not having to think about any meals for the last couple of days of packing and preparing. Besides, I'm not too shabby in the kitchen.

    I like to cook. I'm not an accomplished chef, or anything of the sort, just an amateur who likes to work in the kitchen and try new things.

    While blanching asparagus for my last and finest effort of the weekend, I remembered an article that ran in our local paper last March under the headline Cookbooks simplify terms as kitchen skills dwindle. It discusses the dumbing down of cookbooks over the last couple of decades because Americans no longer know the common vocabulary of the kitchen. These days, recipes tend not to use words like "blanch", "dredge", or even "saute", "fold", and "braise", for fear that the casual reader won't have any idea what they mean. Cookbooks that buck the trend must provide detailed glossaries that explain what used to be standard techniques.

    In some ways, this is merely a cultural change. People generally don't spend as much time cooking full meals or from scratch these days, and women in particular are less likely than their mothers to carry forward the previous generation's traditional culinary knowledge. That may not be a good or bad thing, just a difference borne out of technology and society. The article even implicates the digital computer, claiming that because kids grow up with computers these days they expect everything, even their cooking, to be fast. Who knew that our computers were partly responsible for the dumbing down of America's kitchen?

    I sometimes think about connections between cooking and programming, and between recipes and programs. Most folks execute recipes rather than create them, so we may not be able to learn much about learning to program from learning to cook. But the dumbing down of cooking vocabulary is a neat example of how programs work. When a recipe says to "fold" an ingredient into a mixture, that's similar to making a procedure call. Describing this process using different terms does not change the process, only the primitives used to describe the process. This focus on process, description, and abstraction is something that we computer scientists know and think a lot about.
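    Purely for fun, the analogy can even be written down as code. This is a hypothetical sketch -- the `fold` procedure and its "ingredients" are made up -- but it shows the point: a recipe that says "fold in the berries" makes a procedure call, while a dumbed-down recipe inlines the same steps at a lower level of abstraction.

    ```python
    # "Fold" as a named procedure: the recipe-writer's abstraction.
    def fold(mixture, ingredient, turns=3):
        """Gently incorporate an ingredient into a mixture."""
        for _ in range(turns):               # "turn the bowl and cut through"
            mixture = mixture[1:] + mixture[:1]
        return mixture + [ingredient]

    # The procedure call, as an older cookbook would write it...
    batter = fold(["flour", "sugar", "eggs"], "berries")

    # ...and the same steps spelled out "longhand", as newer cookbooks do.
    longhand = ["flour", "sugar", "eggs"]
    for _ in range(3):
        longhand = longhand[1:] + longhand[:1]
    longhand = longhand + ["berries"]

    # Same process, different primitives used to describe it.
    assert batter == longhand
    ```

    The two versions compute the same result; only the vocabulary differs, which is exactly the cookbook editors' predicament.
    
    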

    In a more general teaching vein, I chuckled in my empathy for this cookbook editor:

    "Thirty years ago, a recipe would say, 'Add two eggs,'" said Bonnie Slotnick, a longtime cookbook editor and owner of a rare-cookbook shop in New York's Greenwich Village. "In the '80s, that was changed to 'beat two eggs until lightly mixed.' By the '90s, you had to write, 'In a small bowl, using a fork, beat two eggs,'" she said. "We joke that the next step will be, 'Using your right hand, pick up a fork and...' "

    Students probably feel that way about programming, but I sometimes feel that way about my students...

    ... which brings me back to my day job. I have reason to think about such issues as I prepare to teach CS 1 for the first time in a decade or so. Selecting a textbook is a particular challenge. How much will students read? What kinds of things will they read? How well can they read? That seems like an odd question to ask of college freshmen, but I do wonder about the technical reading ability of the average student who has a questionable background in math and science but wants to "program computer games" or "work with computers". Colleagues complain about what they see as a dumbing down of textbooks, which grow in size, with more and more elaborate examples, while in many ways expecting less. Is this sort of text what students really need? In the end, what I think they really need are a good language reference and lots of good examples to follow, both in the practice of programming and in the programs themselves. It's our job to teach them how to read a language reference and programs.

    My selection of a CS 1 textbook is complicated by the particular politics of the first year curriculum in my department. I need something that feels traditional enough not to alienate faculty who are skeptical of OO, but true enough to OO that Java doesn't feel like an unnecessary burden to students.

    blanching vegetables

    Postscript: Many recipes require that vegetables be blanched -- scalded in boiling water for a short time -- before being added to a dish. Blanching stops the enzyme action, which allows them to stay crisp and retain their color and flavor. Here is a simple how-to for blanching. I didn't learn this from my mom or any of the cooks in my family (including my dad); I learned it the old-fashioned way: I ran into the term in a recipe, I wanted to know what it meant, so I looked it up in the cookbook. If we could give our programming students the right sort of reference for looking up the terms and ideas they encounter, we would be doing well. Of course, some students of programming will be like some students of cooking and try to fake it. I don't recommend faking the blanching of asparagus -- it's a temperamental vegetable!


    Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

    May 17, 2006 3:10 PM

    Fifteen Compilers in Fifteen Weeks

    Our semester ended a couple of weeks ago. It was busier than usual, for a variety of reasons, but my compiler course was one of the enjoyable experiences. Of course, my students will say that my compiler course was one of the major reasons that their semester was busier than usual.

    A compiler is usually the largest piece of software an undergraduate CS student will write. Some of my students may have written other large programs, in other senior project courses, but I doubt they've written anything larger, and certainly not in smaller teams. (Most worked in pairs.) Several commented that they learned more just trying to write such a large program than they had in any of my courses. That doesn't surprise me... It's one of the main reasons that I think it's essential for every undergrad to take a project course of this sort. There is no substitute.

    After watching my students struggle to meet deadlines and to deliver assigned functionality, I've been thinking about using agile methods to write a compiler. Even with occasional suggestions from me for how to manage their projects, students fell behind. It is just too easy for students to fill their time with other, more immediate demands and watch project deadlines such as "table-driven parser, due in three weeks" get closer and closer.

    Then there were times when students made a good-faith effort to estimate how long a task would take them, only to be off by an order of magnitude. The code for the table-driven parser isn't so bad, but, boy, does that table take a long time to build!!

    Throughout the semester, I made several common suggestions, ones that will sound familiar: take small steps through the spec; have unit tests for each element you build, so that you won't be so afraid to make changes. But next time, I think I may go one step farther and make two other agile practices cornerstones of the course: short iterations and small releases. How about a deliverable every week!?

    Figuring out how best to define the stories and then order them for implementation will be the challenge. In my mind's eye, I'd like for each release to create a complete compiler, for some linguistically meaningful definition of "complete". Start with such a subset of the language that we can build a compiler for it with little or no overhead. Then, add to the language grammar bit by bit in ways that slowly introduce the ideas we want to study and implement. Eventually we need a more principled form of scanning; eventually, we need a parser that implements a derivation explicitly. The issues would not have to arise in the traditional front-to-back order of a compiler course; why do we necessarily have to build a fancy parser before implementing a rudimentary run-time system?

    Can this work for something like the parsing table? Adding grammar elements to the language piecemeal can have odd effects on the parsing rules. But I think it's possible.
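    To make the "every release is a complete compiler" idea concrete, here is a hypothetical sketch of what a week-one release might look like -- in Python rather than my students' implementation language, with every name invented for illustration. The week-one language consists only of integer literals and '+', and the target is a tiny two-instruction stack machine. Each later week would grow the grammar (say, '*', then variables, then conditionals) while each release remains a working compiler for its subset.

    ```python
    # Week-one compiler for the subset:  expr ::= INT { '+' INT }

    def tokenize(src):
        return src.replace("+", " + ").split()

    def parse(tokens):
        # Build a left-associative tree of ('+', left, right) nodes.
        ast = int(tokens[0])
        i = 1
        while i < len(tokens):
            assert tokens[i] == "+", "week-one language knows only '+'"
            ast = ("+", ast, int(tokens[i + 1]))
            i += 2
        return ast

    def compile_expr(ast):
        # Emit instructions for a two-instruction stack machine.
        if isinstance(ast, int):
            return [("PUSH", ast)]
        _, left, right = ast
        return compile_expr(left) + compile_expr(right) + [("ADD", None)]

    def run(code):
        # The "run-time system": a bare stack interpreter.
        stack = []
        for op, arg in code:
            if op == "PUSH":
                stack.append(arg)
            else:  # ADD
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
        return stack.pop()

    assert run(compile_expr(parse(tokenize("1 + 2 + 3")))) == 6
    ```

    Small as it is, the release exercises every phase -- scanning, parsing, code generation, and a rudimentary run-time system -- which is the point: the phases need not be built in front-to-back order at full strength.
    
    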

    And if I seem stuck on the table-driven parser in all of my examples, this may be a result of the fact that I still like for students to implement their parsers by hand. I know that we could go a lot farther, and do a lot more with code generation and optimization, if we built our parsers using a tool such as Yacc, JavaCC, or SableCC. But I can't shake the feeling of the value that comes from implementing a non-trivial parser by hand. Maybe I'm a dinosaur; maybe my feelings will change in the next eighteen months before I teach the course again.

    Though I've been thinking some of these thoughts since last summer, I have to give credit to Jeremy Frens and Andrew Meneely of Calvin College, who presented a paper at this year's SIGCSE called Fifteen Compilers in Fifteen Days (also available from the ACM Digital Library). I'm not sure I'd want to follow their course outline very closely, but they do have some experience in sequencing the stories for the project so that complexities arise in an interesting order. And the idea is just what I think we need: fifteen compilers in fifteen weeks.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    May 09, 2006 5:07 PM

    Different Kinds of Lazy and Dumb

    Sometime in the last month, I came across a link to an article that is a couple of years old, Why Good Programmers Are Lazy and Dumb. I like to read that kind of article every once in a while, even if I've seen the general ideas before. Usually, such an article hits me in a particular way according to my current frame of mind. Here are the ideas that stood out for me this weekend:

    • Balance. Without it, we usually go astray.

    • Lazy? "... because only lazy programmers will want to write the kind of tools that might replace them in the end."

      But you can't be so lazy that you are unwilling to write those tools, or to refactor your code, to save time in the future. You have to be forward-thinking lazy. You Aren't Gonna Need It is an admonition against doing work too soon. But sometimes, you do need it.

    • Dumb? "The less you know, the more radical will your approaches become...." You know, beginner's mind and all that.

      But you can't be so dumb that you don't have the raw material out of which to propose a radical solution. You can only think outside the box when you start with a box.

    Just as it's true that if you can't handle the right kind of pain you'll have a hard time getting better at much of anything, it's true that if you can't find the balance between the right kinds of lazy and dumb, you'll have a hard time taking your skills to the next level.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    April 22, 2006 7:07 PM

    A Day with Camouflage Scholars

    Camouflage Conference poster

    I am spending this day uncharacteristically, at the international conference Camouflage: art, science & popular culture. As I've written before, UNI is home to an internationally renowned scholar of camouflage, Roy Behrens, who has organized this surprising event: a one-day conference that has attracted presenters and attendees from Europe, Australia, and all over the US, from art, architecture, photography, literature, theater, dance, music, and graphic arts, as well as a local mathematician and a local computer scientist, all to discuss "the art of hiding in plain sight". I am an outlier in every session, but I'm having fun.

    Marvin Bell, the Flannery O'Connor Professor of Letters at the University of Iowa and the state's first Poet Laureate, opened the day with some remarks and a first reading of a new poem in his Dead Man Walking series. First, he chuckled over his favorite titles from the conference program: "The Case of the Disappearing Student" and "Photographic Prevarications" among them. My talk's title, "NUMB3RS Meets The Da Vinci Code: Information Masquerading as Art", wasn't on his list, but that may be because my talk was opposite his favorite title. It is worthy of enshrinement with the all-time great MLA titles: "Art and the Blind: An Unorthodox Phallic Cultural Find". His remarks considered the ways in which poetry is like camouflage, how it uses a common vocabulary but requires a second look in order to see what is there.

    Thankfully, my talk came almost first thing in the day, as one of three talks kicking off the day's three parallel sessions. As you might guess, this was an unusual audience for me to speak to, a melange of artistic folks, with not a techie among them. What interests them about steganography in digital images?

    The talk went well. I had prepared much more than I could use in thirty minutes, but that gave me a freedom to let the talk grow naturally from plentiful raw material. I may write more about the content of my presentation later, but what is most on my mind right now are the reactions of the audience, especially in response to this comment I made near the end: "But of what interest can this be to an artist?" As I was packing up, a Los Angeles architect and artist asked about 3D steganography -- how one might hide one building inside another, either digitally or in real space. A writer asked me about hypertext and the ability to reveal different messages to different readers depending on their context. Later, another artist/architect told me that what excited her most about my talk was simply knowing that something like this exists -- the idea had sparked her thoughts on military camouflage. Finally, on the way to lunch, two artists stopped me to say "Great talk! We could have listened to you for another 30 minutes, or even an hour." What a stroke to my ego.
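    For readers wondering what "information masquerading as art" looks like in code, here is a toy sketch of least-significant-bit steganography, one standard technique of the sort the talk was about. It is a hypothetical illustration, not the talk's actual demo: the "image" is a bare list of grayscale pixel values, where a real program would read pixels from an image library.

    ```python
    # Hide a message in the lowest bit of each pixel value (0-255).

    def hide(pixels, message):
        bits = [int(b) for ch in message.encode() for b in format(ch, "08b")]
        out = list(pixels)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & ~1) | bit    # overwrite only the lowest bit
        return out

    def reveal(pixels, length):
        # Read back length * 8 low-order bits and regroup them into bytes.
        bits = [p & 1 for p in pixels[: length * 8]]
        chars = [int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8)]
        return bytes(chars).decode()

    cover = list(range(100, 180))           # 80 "pixels": room for 10 chars
    stego = hide(cover, "da vinci")
    assert reveal(stego, len("da vinci")) == "da vinci"

    # Each pixel changes by at most 1 -- invisible to the eye.
    assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
    ```

    The picture still looks like a picture; the message hides in plain sight, which is exactly what made the camouflage crowd's ears perk up.
    
    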

    For me, perhaps the best part of this is to know that I am not such an oddball, that the arts are populated by kindred spirits who see inspiration in computer science much as I see it in the arts. This has been my first "public outreach" sort of talk in a long time, but the experience encourages me that we can share the thrill with everyone -- and then watch for the sparks of inspiration to create the fires of new ideas in other disciplines.

    I've done my best today to attend presentations from as many different media as possible: so far, poetry, architecture, literary translition (yes, that's an 'i'), photography, dance, painting, and neurobiology; coming up, language, music, and math. The talks have been of mixed interest and value to me, but I suppose that's not much different from most computer science conferences.

    Some thoughts that stood out or occurred to me:

    • Natural camouflage is not intentional, but rather the result of a statistical process -- natural selection.

    • Children develop a resentful attitude toward most poetry in school. They distrust it, because the meaning is hidden. Common words don't mean what they say. Such poetry makes them -- us -- feel stupid.

      Do computer science courses do this to students?

    • One poetry presentation was really a discussion of a couple of poems, including So This is Nebraska, by Ted Kooser. One member of the audience was a Chicago native who recently had moved to rural Iowa. The move had clearly devastated her; she felt lost, disoriented. The depth of her emotion poured out as she described how she did not "get" Iowa. "But Nebraska (Iowa) is so hard to see!" Over time she has begun to learn to see her new world. For her, the title of the poem could be "So This is Nebraska!". She has had to learn not to resent Iowa(!)

      I think that introductory computer science courses disorient our students in a similar way. They are drowned in new language, new practices, and too often 'just programming'. How can we help them to see, "So This is Computer Science!"?

    • In a single talk this afternoon, I learned about abstract Islamic art, cell trees, and -- my personal favorite so far -- the SlingKing (tm). Two thumbs up, sez Dave.

    • "Translition" is a creativity technique described by poet Steve Kowit in which one playfully translates a poem written in a language one doesn't know. Like other creative writing techniques, the idea is to write down whatever crosses the mind, whatever sounds right at the moment. The translitor plays off sounds and patterns, making up cognate words or any other vocabulary he wants. He is bound to preserve the structure of the original poem, its layout and typography. We did some translition during the session, and some brave audience members (not me, even being a recently self-published poet) read stanzas of their work with Norwegian and African poems.

      Okay, so I'm crazy, but how could I turn this into a programming etude?

    This is an indulgent day for me, frankly. I have a list of 500 things to do for my job -- literally -- plus a hefty list for home. My daughters had soccer games, piano lessons, and babysitting today, so my wife spent a bunch of time running shuttle service solo. It's a privilege to spend an entire day, 8:00 AM-8:30 PM, on an interdisciplinary topic with little or no direct relationship to computer science. It's a good thing I don't have to worry about getting tenure.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    April 07, 2006 11:33 PM

    Back to the Basics. Accelerated

    The SIGCSE mailing list has been alive this week with a thread that started with pseudocode, moved to flowcharts, and eventually saddened a lot of readers. At the center of the thread is the age-old tension among CS educators, one that conflates several debates: bottom-up versus top-down, low-level versus high-level, machine versus abstraction, and "the fundamentals" versus "the latest trends". I don't mean to rehash the whole thread here, but I do want to share my favorite line in the discussion.

    Suffice it to say: Someone announced an interest in introducing programming via a few weeks of working in pseudocode, which would allow students to focus on algorithms without the distraction of compilers. He asked for advice on tools and resources. A group of folks reported having had success with a similar idea, only using flowchart tools. Others reported the advantages of lightweight assembly-language style simulators. The discussion became a lovefest for the lowest-level details in CS1.

    My friend and colleague Joe Bergin, occasionally quoted here, saw where this was going. He eventually sent an impassioned and respectful message to the SIGCSE list, imploring folks to look forward and not backwards. In a message sent to a few of us who are preparing for next week's ChiliPLoP 2006 conference, he wrote what became the closing salvo in his response.

    The pseudocode thread on the SIGCSE list is incredibly depressing. ... Why not plugboards early? Why not electromechanical relays early? Why not abacus early?

    An "abacus-early" curriculum. Now, there's the fundamentals of computing! Who needs "objects first", "objects early", "procedures early", "structured programming", ...? Assignment statements and for-loops are johnny-come-latelys to the game. Code? Pshaw. Let's get back to the real basics.

    Joe, you are my hero.

    (Of course, I am being facetious. We all know that computing reached its zenith when C sprang forth as whole cloth from Bell Labs.)

    Am I too young to be an old fogey tired of the same old discussions? Am I too young to be a guy who likes to learn new things and help students do the same?

    I can say that I was happy to see that Joe's message pulled a couple of folks out of the shadows to say what really matters: that we need to share with students the awesome beauty and power of computing, to help them master this new way of thinking that is changing the world we live in. All the rest is details.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    April 06, 2006 6:46 PM

    Different Kinds of Pain

    The other night at dinner, I was telling my family about the state of my legs after my most recent run, and I said, "My legs don't hurt, but my hamstrings are sore." My younger daughter, Ellen, responded, "Um, Dad, your hamstring is part of your leg." And I was caught. Fun was had by all at my expense.

    Of course, I was talking about different kinds of pain. I've been thinking about different kinds of pain a lot lately. As I mentioned in my last post, I have been having trouble with my hamstrings. I have not suffered from a running injury in the three-plus years since I developed a larger commitment to running, but I've felt plenty of pain. Whenever we push our bodies farther than they are used to going, they tend to talk back to us in the form of muscle soreness and maybe even a little joint soreness. That pain is a natural part of the growth process, and if we can't handle that sort of pain then we can't get better -- more speed, more stamina. Oh, I suppose that we might be able to get better slowly, but so slowly that it wouldn't be any fun. Even so, we runners have to listen to our bodies and let them tell us when to lighten up. I live with this sort of pain periodically, as it is a part of the fun.

    This is a different sort of pain than the pain we feel when something is wrong with the body. Last week, my hamstrings hurt. Walking was painful at times, and going upstairs was torturous. This is the kind of pain that evolved to tell us our bodies are broken in a way that wasn't helping. Listening to this kind of pain is crucial, because unheeded the underlying cause can debilitate us. When we feel this kind of pain, we need to "get better", not get "better".

    This week I have been talking with students in my compilers class. They are feeling a kind of pain -- the pain of a large project, larger than they've ever worked on, that involves real content. If they design the parsing table incorrectly, or implement the table-driven parsing algorithm incorrectly, then their programs won't work. To their credit, they all see this sort of pain as useful, the sort of pain you feel when you are getting better. "I've learned more about Java programming and object-oriented design than I've ever learned before." They realize that, in this case, less pain would be worse, not better. Still, I feel for them, because I recall what those first few experiences with non-trivial programs felt like.

    For my agile software development readers: I know that I haven't written much about agile in a while, but I can say that many of my students are also experiencing the pain that comes from not using the agile practices that they know about. Taking small steps, using unit tests, and getting feedback from their code as often as possible -- all would make their lives better. There is nothing like trying to debug several hundred lines of tightly-coupled code for the first time and needing to track down why Rule 42 of 200 doesn't seem to be firing at the right time!

    This is also advising time, as students begin to register for fall courses. Sometimes, the best course for a student will be painful, because it will stretch him or her in a way that the mind is not used to. But that may be just what the student needs to get over the hump and become a top-notch computer scientist!

    These encounters with various kinds of pain remind me of an essay by Kathy Sierra from a month or so ago. One of her central points is that, to become really good at a task, you must practice the parts that you are weakest at -- you have to choose pain. Most of us prefer to practice that with which we are already comfortable, but then we don't stretch our (programming, piano-playing, golfing, running) muscles enough to grow. I suspect that it's even worse than that, that by repeatedly practicing skills we are already good at we drive our muscles into a rut that leaves us worse, not better. I see that happen in my running every so often, and it probably happens to my programming, too.

    But is all the psychic pain we feel when taking a compilers course or learning to program a good sign? Probably not. We do need to choose tasks to master for which we are well suited, that we like enough to work on at all. If you really have no affinity for abstraction and problem solving, then computer science probably isn't for you. You'll not like doing it, no matter how expert you become. But after selecting a task that you can be good at or otherwise interested in, you have to be ready to take on the growing pains that come with mastering it. Indeed, you have to seek out the things you aren't good at and whip them. (*)

    I hope you have the chance to feel the right kind of pain soon. But not for long -- be sure to move on to the fun of getting better as soon as possible.

    ~~~~~

    (*) I do offer one caveat, though. It is too easy to tell ourselves, "Oh, I don't like that" as a way to avoid finding out whether we might like something enough in practice. I don't know how many times people have said, upon hearing that I ran 20 miles that morning, "Oh, I can't run long distances" or "I don't like to run at all". I usually smile politely, but sometimes I'll let them know that I didn't know I liked it until I had done it for a while. I used to make jovial fun of my friends who ran. Then I did a little for ulterior reasons and thought, "Hmmm...", and then I did more and more. Sometimes we need to try something out for a while just to know it well enough to judge it.


    Posted by Eugene Wallingford | Permalink | Categories: Running, Software Development, Teaching and Learning

    April 02, 2006 4:21 PM

    Teaching XP in CS 1 -- Back in 1990!

    A couple of days ago I was tracking down an article that had been mentioned in a thread on the SIGCSE mailing list and ran across Rich Pattis's paper, "A Philosophy and Example of CS-1 Programming Projects" (pdf), from the February 1990 issue of the ACM SIGCSE Bulletin. Having recently written about Rich's work, I couldn't resist taking it home for a weekend read. Not surprisingly, I am glad I did.

    On its face, this paper is relatively unassuming. It describes a project that he assigns to his CS 1 students as an example of how he thinks about projects. Reading it reminded me of the sort of simplicity I associate with Ward Cunningham. But I was amazed to see Rich talk about two ideas that have been discussed everywhere in CS education for the last few years.

    Section 2 is titled "Using Packages in Projects". It lays out Rich's philosophy of projects, which consists of at least two key ideas:

    • Students "are more motivated and enthusiastic about writing programs whose significance and usefulness they can plainly understand."

      Long-time Knowing and Doing readers know that this topic is often on my mind.

    • Real problems are complex and may require code and ideas that are beyond the students' current level of understanding. But "one way to simplify a programming project is to provide students with packages that contain useful operations that are beyond their ability to write."

      Anyone who has been trying to teach OOP faithfully in the first year recognizes this as a central theme.

    Then, Section 5 describes the software "methodology" that Rich taught his students, which he called Stepwise Enhancement. If you read this paper today, you'll say to yourself, wait a minute, that's XP! Consider these fragments:

    ... students first must reduce the program specifications to a minimum, concentrating on their main structural features and ignoring all the complicated details that will make the program difficult to write. Then they design, implement, and test ... a complete version of the program that meets these simplest specifications.

    The students continue repeating this process - at each stage enhancing the specifications and writing an enhanced program that meets these new specifications - until they have solved the complete problem described in the original specifications.

    At every stage they are making small additions or modifications to an already correct (for the simplified specifications) program.

    Fundamentally the stepwise-enhancement technique is useful because it is easier to design, implement, and test a series of increasingly more sophisticated complete programs than it is to attempt writing one large program that solves the original problem specifications at the outset...

    This technique also allows students to test their original ideas on how to solve the main features of the problem in a simple program first. They receive feedback, at very short intervals, that tells them whether or not they are on the correct path to a solution program. ... such feedback is critical for students who are learning in parallel the language features and how to use these features when writing programs.

    As students gain more programming experience, it will become more obvious to them what are the important structural features in specifications and what are the complicated details....

    At the end of each stage, students should have a working program that they can test on the computer to ensure that it correctly solves the problem at that stage.... If they do not finish a program, they still should have a running program that solves a simpler problem.

    I could quote more, but there is something known as "fair use". Besides, you should just go read the paper, which you can find in the ACM Digital Library. Bonus points to the reader who finds the most XP values and practices in this three quarters of a page of text! Plus, you get a sense of the practical experience Rich had gained while teaching this style of development.

    I haven't even mentioned the sample project, a simple cardioverter-defibrillator. Now that I am on deck to teach our CS1 course, I have a great place to adapt and use this project when teaching about selection and repetition. After reading this paper, I realize how much fun I will have going back to my old CS1 notes, ten years old and older, and recalling how I was teaching elementary patterns and little bits of agile methods back then. I hope that I do an even better job of teaching CS1 after my experiences from the last decade.

    Rich wrote this paper more than 16 years ago -- which should remind all of us who are trying to do new things that there isn't much that is all that new. We have a lot to learn from what folks were doing before our new ideas came along. You just have to know where to look. Considering that guys like Rich and the folks he hangs out with are usually thinking about big ideas and how they might help us improve CS education before anyone else, any CS educator would do well to keep an eye on what they were doing a few years ago. And whatever they are doing right now, well, we'll probably all be doing that in a few years.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    March 31, 2006 12:19 PM

    Getting My Groove Back

    To soothe my bruised ego, yesterday evening I did a little light blog reading. Among the articles that caught my attention was Philip Greenspun's Why I love teaching flying more than software engineering. People learning to fly want to get better; I'm guessing that most of them want to become as good as they possibly can. (Just like this guy wanted to make as good a product as possible.) Philip enjoys teaching these folks, more so than teaching students in the area of his greatest expertise, computing, because the math and computing students don't seem to care if they learn or not.

    I see students who will work day and night until they become really good software developers or really good computer scientists, and the common thread through their stories is an internal curiosity that we can't teach. But maybe we can expose them to enough cool problems and questions that one will kick their curiosity into overdrive. The ones who still have that gear will do the rest. Philip worries that most students these days "are satisfied with mediocrity, a warm cubicle, and a steady salary." I worry about this, too, but sometimes wonder if I am just imagining some idyllic world that never has existed. But computer science is so much fun for me that I'm sad that more of our students don't feel the same joy.

    While reading, I listened to Olin Shivers's talk at Startup School 2005, "A Random Walk Through Startup Space" (mp3). It had been in my audio folder for a while, and I'm glad I finally cued it up. Olin gives lots of pithy advice to the start-up students. Three quotes stood out for me yesterday:

    • At one point, Olin was talking about how you have to be courageous to start a company. He quoted Paul Dirac, who did "physics so beautiful it will bring tears to your eyes", as saying

      Scientific progress advances in units of courage, not intelligence.

    • Throughout his talk, Olin spoke about how failure is unavoidable for those who ultimately succeed.
      ... to start a business, you've got to have a high tolerance for feeling like a moron all the time.

      And how should you greet failure when you're staring it in the face?

      Thank you for what you have taught me.

    Next, I plan to listen to Steve Wozniak's much-blogged talk (mp3). If I enjoy that one as much as I enjoyed Shivers's, I may check out the rest of them.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    March 30, 2006 4:47 PM

    One of *Those* Days

    Some days, I walk out of the classroom and feel, Man, I am good. Today was not one of those days. On days like today, I walk out of the classroom thinking alternately, "Did I really just do that to my students?" and "What a rotten day."

    Such days don't usually "just happen". (Well, except when my biorhythms are out of whack...) I think I can trace today's not-so-good performance to a few things:

    • I don't know all of CS well enough to teach every topic without suitable prep time. Today, I was not well enough prepared.

    • I can't count on the last moments before a session to finish preparing. Sometimes, other duties pop up at the last minute.

    • Even when I know an area well, I don't always know it in the right way. Occasionally, I bump into something that I can do reliably, but that I can't teach without hiccups.

    • If I cut the examples out of a lecture in order to cover more material, I risk that students won't follow the material as well. Then, when they seek clarification in an example, I'm not as prepared to help as I'd like to be.

    Fortunately for my students, I don't have too many days like today. Besides, it's good for the universe to remind me every so often that I'm not that good. Keeps me humble.


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    March 10, 2006 5:37 PM

    Students Paying for Content

    Education is an admirable thing,
    but it is well to remember from time to time
    that nothing worth knowing can be taught.
    -- Oscar Wilde

    Recently I have been having an ongoing conversation with one of my colleagues, a senior faculty member, about teaching methods. This conversation is part of a larger discussion of the quality of our programs and the attractiveness of our majors to students.

    In one episode, we were discussing the frequency with which one quizzes and exams the class. Some of our math professors quiz almost daily, and even some CS professors give substantial quizzes every week. My colleague thinks this is a waste of valuable class time and a disservice to students. I tend to agree, at least for most of our CS courses. When we assess students so frequently for the purposes of grading, the students become focused on assessment and not on the course material. They also tend not to think much about the fun they could be having writing programs and understanding new ideas. They are too worried about the next quiz.

    My colleague made a much stronger statement:

    Students are paying for content.

    In an interesting coincidence, when he said this I was preparing a class session in which my students would do several exercises that would culminate in a table-driven parser for a small language. We had studied the essential content over the previous two weeks: first and follow sets, LL(1) grammars, semantic actions, and so on.

    I don't think I was wasting my students' time or tuition money. I do owe them content about compilers and how to build them. But they have to learn how to build a compiler, and they can't learn that by listening to me drone on about it at the front of the classroom; they have to do it.

    My colleague agrees with me on this point, though I don't think he ever teaches in the way I do. He prefers to use programming projects as the only avenue for practice. Where I diverge is in trying to help students gain experience by doing, in a tightly controlled environment where I can give almost immediate feedback. My hope is that this sort of scaffolded experience will help them learn and internalize technique more readily.

    (And don't worry that my students lack for practical project experience. Just ask my compiler students, who had to submit a full parser for a variant of Wirth's Oberon-0 language at 4 PM today.)
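    The mechanics my students are practicing can be sketched in miniature. This is my own toy, not the class exercise: the grammar (balanced parentheses) is small enough that the parse table collapses to a pair of if-tests, but the machinery -- a stack of grammar symbols driven by one token of lookahead, with epsilon productions predicted from the FOLLOW set -- is the same machinery the students build from their FIRST and FOLLOW sets.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy table-driven LL(1) parser for: S -> '(' S ')' S | epsilon
// (recognizes strings of balanced parentheses). A generated parser
// would consult a real two-dimensional table indexed by
// (nonterminal, lookahead); here the single row for S is inlined.
class TinyLL1 {
    static final char S = 'S', EOF = '$';

    static boolean parse(String input) {
        String w = input + EOF;                  // append end-of-input marker
        Deque<Character> stack = new ArrayDeque<>();
        stack.push(EOF);
        stack.push(S);
        int i = 0;                               // index of the lookahead token
        while (!stack.isEmpty()) {
            char top = stack.pop();
            char look = w.charAt(i);
            if (top == S) {
                // Table row for S: on '(' predict S -> ( S ) S;
                // on ')' or '$' (the FOLLOW set of S) predict S -> epsilon.
                if (look == '(') {
                    // Push the right-hand side in reverse order.
                    stack.push(S); stack.push(')'); stack.push(S); stack.push('(');
                } else if (look != ')' && look != EOF) {
                    return false;                // no table entry: syntax error
                }                                // else epsilon: push nothing
            } else {
                if (top != look) return false;   // terminal must match input
                i++;                             // consume the token
            }
        }
        return i == w.length();                  // all input consumed
    }
}
```

Designing the table incorrectly -- say, forgetting that epsilon is predicted from FOLLOW(S) -- makes even this ten-line grammar fail in ways that are instructive to debug.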

    I think that our students are paying for more than just content. If all they need is "dead" content, I can give them a book. Lecture made a lot of sense as the primary mode of instruction back when books were rare or unavailable. But we can do better now. We can give students access to data in a lot of forms, but as expert practitioners we can help them learn how to do things by working with them in the process of doing things.

    I am sympathetic to my colleague's claims, though. Many folks these days worry far more about teaching methodology than about the course material. The content of the course is paramount; how we teach it is done in service of helping students learn the material. But we can't fall into the trap of thinking that we can lecture content and magically put it into our students' heads, or that they can magically put it there by doing homework.

    This conversation reminded me of a post on learning styles at Tall, Dark, and Mysterious. Here is an excerpt she quotes from a cognitive scientist:

    What cognitive science has taught us is that children do differ in their abilities with different modalities, but teaching the child in his best modality doesn't affect his educational achievement. What does matter is whether the child is taught in the content's best modality. All students learn more when content drives the choice of modality.

    The issue isn't that teaching a subject, say, kinesthetically, doesn't help a kinesthetic learner understand the material better; the issue is that teaching material kinesthetically may compromise the content.

    Knowledge of how to do something sometimes requires an approach different from lecture. Studio work, apprenticeship, and other forms of coached exercise may be the best way to teach some material.

    Finally, that post quotes someone who sees the key point:

    Perhaps it's more important for a student to know their learning style than for a teacher to teach to it. Then the student can make whatever adjustments are needed in their classroom and study habits (as well as out of classroom time with the instructor).

    In any case, a scientist or a programmer needs to possess both a lot of declarative knowledge and a lot of procedural knowledge. We should use teaching methods that best help them learn.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    March 07, 2006 6:44 PM

    A Blog Entry From Before I Had a Blog

    Back when I started this blog, I commented that "the hard drive of every computer I've ever had is littered with little snippets and observations that I would have liked to make available to someone, anyone, at the time." At the time, I thought that I might draw on some of my old writings for my blog. That hasn't happened, because I've usually either had something new to write about that interested me or had no time to blog at all.

    Apparently, not all of my old writings ended up in dead files somewhere. While I was getting ready for my compiler class today, I looked over the lecture notes from the corresponding session the last time I taught this course. The date was November 4, 2003, and I had just returned from OOPSLA in Anaheim. I was excited about the conference, I'm sure, as I always am, and used part of my first class back to share some of what I had learned at the conference. (I wouldn't be surprised if part of my motivation in sharing was to avoid diving head-first back into semantic analysis right away... :-)

    My lecture notes for that day looked a lot like a set of blog entries! So it seems that my students were an unwitting, though not unwilling, audience for my blogging before this blog began.

    I enjoyed re-reading one of these OOPSLA'03 reports enough that I thought I'd polish it up for a wider audience. Here is an excerpt from my compiler lecture notes of 11/04/03, on David Ungar's keynote address, titled "Seven Paradoxes of Object-Oriented Programming Languages". I hope you enjoy it, too.

    OOPSLA 1: David Ungar's Keynote Address

    David Ungar

    David Ungar of Sun Microsystems gave an invited talk at OOPSLA last week, entitled "Seven Paradoxes of Object-Oriented Programming Languages". (local mirror)

    His abstract lists seven paradoxes about language that he thinks make designing a good programming language so difficult. He titled and focused his talk on object-oriented languages, given his particular audience, but everything he said applies to languages more generally, and indeed to most design for human use. His points are about language design, not compiler design, but language design has a profound effect on the implementation of a compiler. We've seen this at a relatively low level when considering LL and LR grammars, and it applies at higher levels, too.

    He only addressed three of his paradoxes in the talk, and then in a more accessible form.

    1. Making a language better for computers makes it worse for people, and vice versa.

    He opened with a story about his interview at MIT when he was looking for his first academic position. One interviewer asked him to define 'virtual machine'. He answered, "a way to give programmers good error messages, so that when there weren't any error messages the program would run successfully". The interviewer said, "If you believe that, then we have nothing more to talk about." And they didn't. (He got the offer, but turned it down.)

    A programming language has to be mechanistic and humanistic, but in practice we let the mechanistic dominate. Consider: write a Java class that defines a simple stack of integers. You'll have to say int five times -- but only if you want 32-bit integers; if you want more or less, you need to say something else.
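    Ungar's exercise is easy to make concrete. Here is a bare-bones version (my sketch, not his slide) -- and right on cue, int appears exactly five times:

```java
// A minimal stack of integers in Java. Count the occurrences of "int":
// five of them, and every one is a 32-bit commitment the problem
// statement never asked us to make.
class IntStack {
    private int[] items = new int[16];           // int #1 and #2

    private int top = 0;                         // int #3

    void push(int x) {                           // int #4
        if (top == items.length)                 // grow when full
            items = java.util.Arrays.copyOf(items, top * 2);
        items[top++] = x;
    }

    int pop() {                                  // int #5
        return items[--top];
    }
}
```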

    The machines have won! How can we begin to design languages for people? We first must understand how they think. Some ideas that he shared from non-CS research:

    • Abstractions are in your head, not in the world. A class of objects is an abstraction. An abstract data type is an abstraction. An integer is an abstraction. The idea of identity is an abstraction.

      Our programming languages must reflect a consciousness of abstraction for even the simplest ideas; in C++ and Java, these include const, final, and ==. (A final field in Java cannot be modified, but its value can change if it is a mutable object...)

    • From the moment of perception, our bodies begin to guess at abstractions. But these abstractions are much 'softer' than their equivalents in a program.

    • Classical categories -- classes of objects -- are defined in terms of shared features. This implies a symmetry to similarity and representativeness. But that is not how the human mind seems to work. Example: birds, robins, and ostriches. For humans, classes are not defined objectively in our minds but subjectively, with fuzzy notions of membership. One interesting empirical observation: People treat the middle of a classification hierarchy as the most basic unit, not the root or leaves. Examples: cat, tree, linked list.

    • Why is state so important to us in our programs? Because the 'container' metaphor seems deeply ingrained in how we think about the world. "The variable x 'holds' a 4", like a box. The same metaphor affects how we think about block structure and single inheritance. But the metaphor isn't true in the world; it's just a conceptual framework that we construct to help us understand. And it can affect how we think about programs negatively (especially for beginners!?)
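    The parenthetical above about final fields rewards a concrete look. A few lines of Java (my illustration, not from the talk) show that final freezes the binding, not the object:

```java
import java.util.ArrayList;
import java.util.List;

// "final" means the field can never be rebound to a different list,
// yet the list it names mutates freely -- so "cannot be modified"
// is true of the reference and false of the value it denotes.
class Roster {
    final List<String> names = new ArrayList<>();
}

class FinalDemo {
    public static void main(String[] args) {
        Roster r = new Roster();
        r.names.add("Ada");              // legal: mutating the object
        // r.names = new ArrayList<>();  // compile error: rebinding a final field
        System.out.println(r.names);     // prints [Ada]
    }
}
```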

    If class-based languages pose such problems, what is the alternative? How about an instance-based language? One example is Self, a language designed by Ungar and his colleagues at Sun. But instance-based languages pose their own problems...

    2. The more concepts we add to a language, the less general the code that we write.

    He opened with a story about his first big programming assignment in college, to write an assembler. He knew APL best -- a simple language with few concepts but powerful operators. He wrote a solution in approximately 20 lines of code. How? By reading in the source as a string and then interpreting it.

    Some of his PL/1-using classmates didn't get done. Some did, but they wrote hundreds and hundreds of lines of code. Why? They could have done the same thing he did in APL -- but they didn't think of it!

    But why? Because in languages like Fortran, PL/1, C, Java, etc., programs and data are different sorts of things. In languages like APL, Lisp, Smalltalk, etc., there is no such distinction.

    Adding more concepts to a language -- such as distinguishing programs from data -- impoverishes discourse because it blinds us, creating a mindset that is focused on concepts, not problems.

    Most OO languages adopt a classes-and-bits view of objects, which encourages peeking at implementation (getters/setters, public/private, ...). Could we create a better language that doesn't distinguish between data and behavior? Self also experiments with this idea, as do Smalltalk and Eiffel.

    Adding a feature to a language solves a specific problem but degrades learnability, readability, debuggability -- choosing what to say. (Why then do simpler languages like Scheme not catch on? Maybe it's not a technical issue but a social one.)

    What is an alternative? Build languages with layering -- e.g., Smalltalk control structures are built on top of blocks and messages.

    3. Improving the type system of a language makes the type system worse.

    This part of Ungar's talk was more controversial, but it's a natural application of his other ideas. His claim: less static type checking implies more frequent program change, implies more refactoring, implies better designs.

    [-----]

    So what? We can use these ideas to understand our languages and the ways we program in them. Consider Java and some of the advice in Joshua Bloch's excellent book Effective Java.

    • "Use a factory method, not a constructor."

      What went wrong? Lack of consciousness of abstraction. Richer is poorer (a constructor is distinct from a message send).

    • "Override hashCode() when you override equals()."

      What went wrong? Better types are worse. Why doesn't the type system check this?
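    Bloch's second item is the classic trap, and a few lines suffice to spring it (my sketch, not Bloch's code): equals() says two points are the same, while their hash codes disagree, so hash-based collections can lose track of "equal" elements.

```java
// Override equals() without hashCode() and the Object contract breaks:
// equal objects are supposed to have equal hash codes, but these two
// fall back on Object's identity-based hashCode().
class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        return o instanceof Point && ((Point) o).x == x && ((Point) o).y == y;
    }
    // Missing: @Override public int hashCode() { return 31 * x + y; }
}

class HashTrap {
    public static void main(String[] args) {
        Point p = new Point(1, 2), q = new Point(1, 2);
        System.out.println(p.equals(q));                  // true
        System.out.println(p.hashCode() == q.hashCode()); // almost always false
    }
}
```

The type system happily compiles this broken class; nothing forces the two methods to be overridden together.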

    Ambiguity communicates powerfully. We just have to find a way to make our machines handle ambiguity effectively.

    [end of excerpt]

    As with any of my conference reports, the ideas presented belong to the presenter unless stated otherwise, but any mistakes are mine.

    The rest of this lecture was a lot like my other conference-themed blog entries. Last week, I blogged about the Python buzz at SIGCSE; back in late 2003 I commented on the buzz surrounding the relatively new IDE named Eclipse. And, like many of my conference visits, I came back with a long list of books to read, including "Women, Fire, and Dangerous Things", by George Lakoff, "The Innovator's Dilemma", by Clayton Christensen, and "S, M, L, XL", by Rem Koolhaas and Bruce Mau.

    Some things never change...


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    March 05, 2006 10:23 AM

    SIGCSE Wrap-Up: This and That

    I was able to stay at SIGCSE only through the plenary talk Friday morning, and so there won't be a SIGCSE Day 2 entry. But I can summarize a few ideas that came up over the rest of Day 1 and the bit of Day 2 I saw. Consider this "SIGCSE Days 1 and 1.15".

    Broadening the Reach of CS

    I went to a birds-of-a-feather session called "Innovative Approaches to Broadening Computer Science", organized by Owen Astrachan and Jeffrey Forbes of Duke. I was surprised by the number and variety of schools that are creating new courses aimed at bringing computing to non-computer scientists. Many schools are just beginning the process, but we heard about computing courses designed specially for arts majors, psychology majors, and the more expected science and engineering majors. Bioinformatics is a popular specialty course. We have an undergraduate program in bioinformatics, but the courses students take at the beginning of this program are currently traditional programming courses. My department recently began discussions of how to diversify the entry paths into computing here, with both majors and non-majors in mind. It's encouraging to see so many other schools generating ideas along these lines, too. We'll all be able to learn from one another.

    Extreme Grading

    Not quite, but close. Henry Walker and David Levine presented a paper titled "XP Practices Applied to Grading". David characterized this paper as an essay in the sense resurrected by the essays track at OOPSLA. I enjoy SIGCSE talks that reflect on practices. While there wasn't much new for me in this talk, it reminded Jim Caristi and me of a 1998 OOPSLA workshop that Robert Biddle, Rick Mercer, and I organized called Evaluating Object-Oriented Design. That session was one of those wonderful workshops where things seem to click all day. Every participant contributed something of value, and the contributions seemed to build on one another to make something more valuable. I presented one of my favorite patterns-related teaching pieces, Using a Pattern Language to Evaluate Design. What Jim remembered most vividly from the workshop was the importance in the classroom of short cycles and continuous feedback. It was good to see CS educators like Henry and Dave presenting XP practices in the classroom to the broader SIGCSE community.

    ACM Java Task Force

    Over the last two-plus years, the ACM Java Task Force has put in a lot of time and hard work designing a set of packages for use in teaching Java in CS1. I wonder what the ultimate effect will be. Some folks are concerned about the graphics model that the task force adopted. ("Back to Java 1.0!" one person grumbled.) But I'm thinking more of the fact that Java may not last as the dominant CS1 language much longer. At last year's SIGCSE one could sense a ripple of unease with Java, and this year the mood seemed much more "when...", not "if...". Rich Pattis mentioned in his keynote lecture that he had begun teaching a new CS1 language every five years or so, and Java's time should soon be up. He didn't offer a successor, but my read of the buzz at SIGCSE is that Python is on the rise.

    Computer Science in K-12 Education

    The second day plenary address was given by a couple of folks at the Computer Science Teachers Association, a group affiliated with the ACM that "supports and promotes the teaching of computer science and other computing disciplines" in the K-12 school system. I don't have much to say about their talk other than to note that there are a couple of different issues at play. One is the teaching of computer science, especially AP CS, at the pre-university level. Do we need it? If so, how do we convince schools and taxpayers to do it right? The second is more general, the creation of an environment in which students want to study math, science, and technology. Those are the students who are in the "pipeline" of potential CS majors when they get to college. At first glance, these may seem like two separate issues, but they interconnect in complicated ways when you step into the modern-day high school. I'm glad that someone is working on these issues full-time, but no one should expect easy answers.

    ...

    In the end, Rich Pattis's talk was the unchallenged highlight of the conference for me. For all its activity and relatively large attendance (1100 or so folks), the rest of the conference seemed a bit low on energy. What's up? Is the discipline in the doldrums, waiting for something new to invigorate everyone? Or was it just I who felt that way?


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    March 05, 2006 10:06 AM

    SIGCSE Buzz: Python Rising?

    If the buzz is accurate, Python may be a successor. The conference program included a panel on experiences teaching Python to novices, including John Zelle and Mark Guzdial, who have written the two texts currently available. My publisher friend Jim Leisy of Franklin, Beedle was very high on Zelle's book and the adoptions they've picked up in the last few months. Schools now using Python in at least one track of their intro courses range from small private schools all the way to MIT.

    Heck, I got home from SIGCSE and "PYTHON" even showed up as a solution in the local newspaper's Saturday Jumble!

    All in all, I think I prefer Ruby, both for me and for beginners, but it is behind Python in the CS education life cycle. In particular, there is only one introductory-level book available right now, Chris Pine's Learn To Program. Andy Hunt from the Pragmatic Bookshelf sent me a review copy, and it looks good. It's not from the traditional CS1 textbook mold, though, and will face an uphill battle earning broad attention in CS1 courses.

    In any case, I welcome the return to a small, simple language for teaching beginners to program. Whether Ruby or Python, we would be using an interpreted language that is of practical value as a scripting language. This has great value for three audiences of students: non-majors can learn a little about computing while learning scripting skills that they can take to their major discipline; folks who intend to major in CS but change their minds can also leave the course with useful skills; and even majors will develop skills that are valuable in upper-division courses. (You gotta figure that they'll want to try out Ruby on Rails at some point in the next few years.)

    Scripting languages pose their own problems, both in terms of language and curriculum. In particular, you need to introduce at least one and maybe two systems languages such as Java, C, or C++ in later courses, before they are needed for large projects. But I think the trade-off will be a favorable one. Students can learn to program in an engaging and relatively undistracting context before moving on to bigger issues. Then again, I've always favored languages like Smalltalk and Scheme, so I may not be the best bellwether of this trend.

    Anyway, I left SIGCSE surprised to have encountered Python at so many turns. Maybe Java will hang on as the dominant CS1 language for a few more years. Maybe Python will supplant it. Or maybe Python will just be a rebound language that fills the void until the real successor comes along. But for now the buzz is notable.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    March 02, 2006 6:36 PM

    SIGCSE Day 1: Keynote Talk by Rich Pattis

    I've been lucky in my years as a computer science professor to get to know some great teachers. Some folks embody the idea of the teacher-scholar. They are able computer scientists -- in some cases, much more than that -- but they have devoted their careers to computer science education. They care deeply about students and how they learn. My good fortune in meeting many of these folks and learning from them is dwarfed by the greater fortune of becoming their acquaintance and, in some cases, their friend.

    Rich Pattis

    One of my favorite computer science educators, Rich Pattis, is receiving the SIGCSE Award for Outstanding Contributions to Computer Science Education at the 2006 conference -- this morning, as I type. I can't think of many people who deserve this award as much as Rich. His seminal contribution to CS education is Karel the Robot, a microworld for learning basic programming concepts, and the book Karel the Robot: A Gentle Introduction to the Art of Programming, which has been updated over the years and extended to C++ and Java. But Rich's contributions have continued steadily since then, and he has been in the vanguard of all the major developments in CS education since. Check him out in the ACM Digital Library.

    Rich titled his acceptance talk "Can't Sing, Can't Act, Can Dance a Little (On Choosing the Right Dance Partners)", a reference to a famous remark about Fred Astaire and how Ginger Rogers "gave him class". The talk is a trace of Rich's computing history through the people who helped him become the teacher he is, from high school until today. Rich found photos of most of these folks, famous and obscure, and told a series of anecdotes to illustrate his journey.

    Those who know Rich wouldn't be surprised that he began his talk with a list of books that have influenced him. Rich reads voraciously on all topics that even tangentially relate to computing and technology in the world, and he loves to share the book ideas with other folks.

    Rich said, "I get all of my best ideas these days from Owen Astrachan", and one particular idea is Owen's practice of giving book awards to students in his courses. Owen doesn't select winners based on grades but on who has contributed the most to the class. Rich decided to reinstate this idea here at SIGCSE by arranging for all conference attendees to receive his all-time favorite book, Programmers at Work by Susan Lammers. I've not read this book, though I've long wanted to; it's been out of print. (Thanks, Rich, and Susan, and Jane Prey and Microsoft, for the giveaway!)

    He also recommended Out of their Minds, by Dennis Shasha, which has long been on my to-read list. It just received a promotion.

    Throughout the talk, Rich came back to Karel repeatedly, just as his career has crossed paths with it many times. Rich wrote Karel instead of writing his dissertation, under John Hennessy, now president of Stanford. Karel was the third book published in TeX. (We all know the first.) Karel has been used a lot at Stanford over the years, and Rich demoed a couple of projects written by students that had won course programming contests there, including a "17 Queens" puzzle and a robot that launched fireworks. Finally, Rich showed a photo of a t-shirt given him by Eric Roberts, which had a cartoon picture with the caption "Two wrongs don't make a right, but three lefts do." If you know Karel, you'll get the joke.

    The anecdotes flew fast in the talk, so I wasn't able to record them all. A few stuck with me.

    Rich told about one of the lessons he remembers from high school. He went to class to take a test, but his mechanical pencil broke early in the period. He asked Mr. Lill, his teacher, if he could borrow a pencil. Mr. Lill said 'no'. May I borrow a pencil from another student? Again, 'no'. "Mr. Pattis, you need to come to class prepared." This reminded me of Dr. Brown, my assembly language and software engineering prof in college, who did not accept late work. Period. The computer lab lost power, so you couldn't run your card deck? The university's only mainframe printer broke? "Not my problem," he'd say. The world goes on, and you need to work with the assumption that computers will occasionally fail you. I never hated Dr. Brown, even for a short while, as Rich said he did Mr. Lill for a while. But I fully understood Rich when he finished this anecdote with the adage, "Learning not accompanied by pain will be forgotten."

    Rich praised Computer Science, Logo Style, a three-book series by Brian Harvey as the best introduction to programming ever written. Wow.

    Not surprisingly, some of Rich's best anecdotes related to students. He likes to ask an exam question on the fact that there are many different infinities. (An infinite number?) Once, he asked students, "Why are there more mathematical functions than computer programs?" One student answered, "Because mathematicians work harder than computer scientists." (Get to work, folks...)

    My favorite of Rich's student anecdotes was a comment a student wrote on a course evaluation form. The comment was labeled A Relevant Allegory:

    In order to teach someone to boil water, he would first spend a day giving the history of pots, including size, shape, and what metals work best. The next day he'd lecture on the chemical properties of water. On day three, he'd talk about boiled water through the ages. That night, he'd tell people to go home and use boiled water to make spaghetti carbonara. But never once would he tell you to put the water in the pot and heat it.

    That's what his programming classes are like -- completely irrelevant to the task at hand.

    Rather than summarize Rich's comments, I'll quote him, too, from a course syllabus in which he quoted the student:

    I like this comment because it is witty, well-written, and true -- although maybe not in the extreme that the author states. Teaching students to boil water is great for a high school class, but in college we are trying to achieve something deeper...

    I acknowledge that learning from first principles is tougher than memorization, and that sometimes students feel that the material covered is not "applied".

    Eventually, Rich's dance-partner history reached the "SIGCSE years". He showed two slides of pictures. The first showed a first generation of folks from SIGCSE who have become a long-term cadre that shares ideas about computer science, teaching, books, and life. The second showed later influences on Rich from among the regulars at SIGCSE. I was a bit surprised and highly honored to see my own picture up on Slide 1! I recall first meeting Rich at SIGCSE back in 1994 or so, when we began a long dialogue on teaching OOP and C++ in CS1. I was pretty honored even then that he engaged me in this serious conversation, and impressed by the breadth of the ideas he had collected and cultivated.

    Rich ended his talk with a tribute to one of his favorite films, The Paper Chase. Long-time readers of Knowing and Doing may recall that I wrote a blog entry that played off my own love for this movie (and Dr. Brown!). Rich said that this movie has "more truths per scene" about teaching than any other movie he knows. As much as he loves "The Paper Chase", Rich admitted to feeling like a split personality, torn between the top-down intellectual tour de force of Professor Kingsfield and the romantic, passionate, bottom-up, "beat" style of Dead Poets Society's John Keating.

    Dead Poets Society

    Kingsfield and Keating occupy opposite ends of the teaching spectrum, yet both inspire a tremendous commitment to learning in their students. Like many of us who teach, Rich resonates with both of these personalities. Also like many of us, he knows that it's hard to be both. He likes to watch "Dead Poets Society" each year as a way to remind him of how his students must feel as they move on to the wide open wonder of the university. Yes, we know that "Dead Poets Society" is about high school. But, hey, "The Paper Chase" is about law school. You should watch both, if you haven't already.

    Rich closed with a video clip from "The Paper Chase", a famous scene in which Professor Kingsfield introduces his class to the Socratic method. (The clip is 10.4 Mb, and even still not of the highest quality.)

    This was an inspirational close to an inspirational talk, from a CS educator's CS educator, a guy who has been asking questions and trying to get better as a teacher for over twenty years -- learning from his dance partners and sharing what he has created. A great way to start a SIGCSE.

    Congratulations, Rich.

    (UPDATE March 4: I have posted a link to the video clip from "The Paper Chase" that Rich showed. He has said that he will post his presentation slides to the web; I'll watch for them, too.)


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    February 28, 2006 11:19 PM

    DNA, Ideas, and the CS Curriculum

    Today is the anniversary of Watson and Crick's piecing together the three-dimensional structure of DNA, the famed double helix. As with many great discoveries, Watson and Crick were not working in isolation. Many folks were working on this problem, racing to be the first to "unlock the secret of life". And Watson and Crick's discovery itself depended crucially on data collected by chemist Rosalind Franklin, who died before the importance of her contribution became widely known outside of the inner circle of scientists working on the problem.

    Ironically, this is also the birthday of one of the men against whom Watson and Crick were racing: Linus Pauling. Pauling is the author of one of my favorite quotes:

    The best way to have a good idea is to have a lot of ideas...

    Perhaps the great geniuses have only good ideas, but most of us have to work harder. If we can free ourselves to Think Different, we may actually come across the good ideas we need to make progress.

    Of course, you can't stop at generating ideas. You then have to examine your candidates critically, exposing them to the light of theory and known facts. Whereas one's inner critic is an enemy during idea generation, it is an essential player during the sifting phase.

    Pauling knew this, too. The oft-forgotten second half of Pauling's quote is:

    ... and throw the bad ones away.

    Artists work this way, and so do scientists.

    This isn't a "round" anniversary of Watson and Crick's discovery; they found the double helix in 1953. It's not even a round anniversary of Pauling's birth, as he would be 105 today. (Well, that's sort of round.) But I heard about the anniversaries on the radio this morning, and the story grabbed my attention. Coincidentally, DNA has been on my mind for a couple of reasons lately. First, my last blog entry talked about a paper by Bernard Chazelle that uses DNA as an example of duality, one of the big ideas that computer science has helped us to understand. Then, on the plane today, I read a paper by a group of folks at Duke University, including my friend Owen Astrachan, on an attempt to broaden interest in computing, especially among women.

    Most research shows that women become interested in computing when they see how it can be used to solve real problems in the world. The Duke folks are exploring how to use the science of networks as a thematic motivation for computing, but another possible domain of application is bioinformatics. Women who major in science and technology are far more likely to major in biology than in any other discipline. Showing the fundamental role that computing plays in the modern biosciences might be a way to give women students a chance to get excited about our discipline, before we misdirect them into thinking that computerScience.equals( programming ).

    My department launched a new undergraduate major in bioinformatics last fall. So we have a mechanism for using the connection between biology and computing to demonstrate computing's utility. Unfortunately, we have made a mistake so far in the structure of our program: all students start by taking two semesters of traditional programming courses before they see any bioinformatics! I think we need to do some work on our first courses. Perhaps Astrachan and his crew can teach us something.

    I'm in Houston for SIGCSE this week, and the Duke paper will be presented here on Saturday. Sadly, I have to leave town on Friday... If I want to learn more about the initiative than I can learn just from the paper, I will need to take advantage of my own social network to make a connection.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    February 24, 2006 2:30 PM

    iPods and Big Ideas

    Last summer I blogged on a CACM article called "The Thrill Is Gone?", by Sanjeev Arora and Bernard Chazelle, which suggested that "we in computing have done ourselves and the world a disservice in failing to communicate effectively the thrill of computer science -- not technology -- to the general public." Apparently, Chazelle is carrying the flag for effective communication into battle by trying to spread the word beyond traditional CS audiences. The theoryCS guys -- Suresh, Lance, and Ernie among them -- have all commented on his latest public works: an interview he gave prior to giving a talk called "Why Computer Science Theory Matters?" at the recent AAAS annual meeting, the talk itself, and a nice little article to appear in an upcoming issue of Math Horizons. Be sure to read the Horizons piece; the buzz is well-deserved.

    Chazelle tries to convince his audience that computing is the nexus of three Big Ideas:

    • universality - the idea that any digital computer can, in principle, do what any other does. Your iPod can grow up to be anything any other computer can be.

    • duality - the idea that data and program are in principle interchangeable, that perspective determines whether something is data or program.

    • self-reference - the idea that a program can refer to itself, or to data that looks just like it. This, along with the related concept of self-replication, ties the intellectual ground of computing to that of biology.

    ... and the source of two new, exceedingly powerful ideas that cement the importance of computing to the residents of the 21st century: tractability and algorithm.
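    Duality and self-reference have a classic concrete demonstration: the quine, a program whose output is its own source text, so the program's text is simultaneously code and data. Here is a minimal two-line sketch in Python; it is my illustration, not an example from Chazelle's article. Comments aside, running it prints exactly its own two lines of code.

```python
# A two-line quine sketch. The string s is data (a template), but
# formatting s with itself reproduces the program's own source code.
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```

    The essential move is that {!r} re-quotes the string as a Python literal of itself, closing the loop between program and data.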

    Chazelle has a nice way of explaining tractability to a non-technical audience, in terms of the time it takes to answer questions. We have identified classes of questions characterized by their "time signatures", or more generally, their consumption of any resource we care about. This is a Big Idea, too:

    Just as modern physics shattered the platonic view of a reality amenable to noninvasive observation, tractability clobbers classical notions of knowledge, trust, persuasion, and belief. No less.

    Chazelle's examples, including e-commerce and nuclear non-proliferation policy, are accessible to any educated person.
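    The gulf between those time signatures is easy to see with a little arithmetic. A quick sketch (my numbers, not Chazelle's) contrasting a polynomial cost of n^2 steps with an exponential cost of 2^n steps:

```python
# Contrast a polynomial "time signature" (n^2) with an exponential
# one (2^n) as the input size n grows.
for n in (10, 20, 40, 80):
    print(f"n={n:3d}   n^2={n**2:6,d}   2^n={2**n:,d}")
```

    At n = 80 the polynomial algorithm needs only 6,400 steps, while the exponential one needs roughly 10^24 -- that is the practical line between tractable and intractable.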

    The algorithm is the "human side" of the program, an abstract description of a process. The whole world is defined by processes, which means that in the largest sense computer science gives us tools for studying just about everything that interests us. Some take the extreme view that all science is computer science now. That may be extreme, but in one way it isn't extreme enough! Computer science doesn't revolutionize how we study only science, but also the social sciences and literature and art. I think that the greatest untapped reservoir of CS's influence lies in the realm of economics and political science.

    Chazelle makes his case that CS ultimately will supplant mathematics as the primary vehicle for writing down our science:

    Physics, astronomy, and chemistry are all sciences of formulae. Chaos theory moved the algorithmic zinger to center stage. The quantitative sciences of the 21st century (e.g., genomics, neurobiology) will complete the dethronement of the formula by placing the algorithm at the core of their modus operandi.

    This view is why I started my life as a computer scientist by studying AI: it offered me the widest vista on the idea of modeling the world in programs.

    I will be teaching CS1 this fall for the first time in ten years or so. I am always excited at the prospect of a new course and a new kind of audience, but I'm especially excited at the prospect of working with freshmen who are beginning our major -- or who might, or who might not but will take a little bit of computing with them off to their other majors. Learning to program (perhaps in Ruby or Python?) is still essential to that course, but I also want my students to see the beauty and importance of CS. If my students can leave CS1 next December with an appreciation of the ideas that Chazelle describes, and the role computing plays in understanding them and bringing them to the rest of the world, then I will have succeeded in some small measure.

    Of course, that isn't enough. We need to take these ideas to the rest of our students, especially those in the sciences -- and to their faculty!


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    February 20, 2006 6:48 PM

    Changing How People Think

    Pascal Van Cauwenberghe writes a bit about agile development, lean production, and other views of software engineering. He recently quoted the Toyota Way Fieldbook as inspiration for how to introduce lean manufacturing as change. I think that educators can learn from Pascal's excerpt, too.

    ... we're more likely to change what people think by changing what they do, rather than changing what people do by changing what they think.

    I can teach students about object-oriented programming, functional programming, or agile software development. I can teach vocabulary, definitions, and even practices and methodologies. But this content does not change learners' "deeply held values and assumptions". When they get back into the trenches, under the pressure of new problems and time, old habits of thought take over. No one should be surprised that this is true for people who are not looking to change, and that is most people. But even when programmers want to practice the new skill, their old habits kick in with regularity and unconscious force.

    The Toyota Way folks use this truth as motivation to "remake the structure and processes of organizations", with changes in thought becoming a result, not a cause. This can work in a software development firm, and maybe across a CS department's curriculum, but within a single classroom this truth tells us something more: how to orient our instruction. As an old pragmatist, I believe that knowledge is habit of thought, and that the best way to create new knowledge is to create new habits. This means that we need to construct learning environments in which people change what they do in practical, repeatable ways. Once students develop habits of practice, they have at their disposal experiences that enable them to think differently about problems. The ideas are no longer abstract and demanded by an outside agent; they are real, grounded in concrete experiences. People are more open to change when it is driven from within than from without, so this model increases the chance that learners will seriously entertain the new ideas we would like them to learn.

    In my experience, folks who try XP practices -- writing and running tests all the time, refactoring, sharing code, all supported by pair programming and a shared culture of tools and communication -- are more likely to "get" the agile methods than are folks to whom the wonderfulness of agile methods is explained. In the end, I think that this is true for nearly all learning.


    Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

    February 16, 2006 4:18 PM

    Death by Risk Aversion, University Edition

    Death by risk aversion, courtesy of Creating Passionate Users

    In When Committees Suck the Life Out of Great Ideas, Jeremy Zawodny wrote:

    A few days ago, I saw a nice graphic on the Creating Passionate Users blog which was intended to illustrate Death by risk-aversion: [at right]

    I contend that you can make a very similar graphic to illustrate what happens when too many people get involved in designing a product.

    I saw that blog entry, too, but my first thought wasn't of software products and committees. It was of academic curriculum. I have mentioned this post on academic conservatism so often that I probably sound like a broken record, but real change in how we teach computer science courses is hard to effect at most schools. This difficulty stems in large part from the "fear occurs here" syndrome that Kathy Sierra identifies and Jeremy Zawodny recognizes in product development. Faculties are understandably reluctant to change what they've always done, especially if in their own minds things are going pretty well. Unfortunately, that comfort often results from a lack of circumspection... Maybe things aren't going as well as they always have, and we might see that if we'd only pay more attention to the signals our students are sending. Maybe things are going fine, but the world around us has changed and so we are solving a problem that no longer exists while the real problem sits ingloriously at the back of the room.

    The result of the "fear occurs here" syndrome is that we keep doing the same old, same old, while opportunities to get better drift by -- and, sometimes, while some competitor sneaks in and eats our lunch. The world rarely stays the same for very long.

    There are always folks who push the boundaries, trying new things, working them out, and sharing the results with the rest of us. The annual SIGCSE conference and Educators Symposium at OOPSLA are places I can reliably learn from teachers who are trying new ideas. And there are many... Stephen Edwards on testing early in the curriculum, Mark Guzdial on multimedia programming in CS1, Owen Astrachan on just about anything. I would love to see my own department consider Mark's media computation approach in CS1. Short of that, I plan to consider something like Owen's science-of-networks approach for next fall. (You can see the SIGCSE 2006 paper on the latter via the conference's on-line program. Go to Saturday, 8:30 AM, and follow the "Recruitment and Retention" link.) Indeed, I am looking forward to SIGCSE in a couple of weeks.

    I've also had ChiliPLoP'06 on my mind. I am co-chairing ChiliPLoP again this year, and we have two relevant hot topics on tap: a reprise of last year's elementary patterns project and Dave West and Pam Rostal expanding their OOPSLA Educators Symposium presentation into a software development curriculum using pedagogical and apprenticeship patterns. But these ideas are emblematic of how hard it is to effect real change in curriculum: it is hard to do a complete job, at least complete enough to attract a large body of adopters, and some ideas are simply so unlike anything that people currently do as to be unthinkable by the vast majority of practitioners. We know that revolutionary change can happen, with the right group of people leading, working hard, and having a little luck along the way. But such revolutions are a low-probability proposition.

    [As an aside... After reading this article on productivity patterns and "life hacks" over at 43 Folders, I boldly took a flyer and sent site guru Merlin Mann an invitation to consider coming to ChiliPLoP some time to work on a productivity pattern language with his GTD buddies and a few patterns people. Productivity patterns don't fit the narrow definition of Pattern Languages of Programs, but ChiliPLoP has always been about pattern languages more broadly. Besides, I'd love to swap Mac hacks with a few more fellow junkies.]


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    February 16, 2006 3:23 PM

    Eat *That* Dog Food

    Eric Sink tells one of the best stories ever to illustrate the idea of eating your own dog food. Go read the whole paper and story, but I can set up the punchline pretty quickly: Table saws are powerful, dangerous tools. Many woodworkers lose fingers every year using table saws. But...

    A guy named Stephen Gass has come up with an amazing solution to this problem. He is a woodworker, but he also has a PhD in physics. His technology is called Sawstop. It consists of two basic inventions:
    • He has a sensor which can detect the difference in capacitance between a finger and a piece of wood.
    • He has a way to stop a spinning table saw blade within 1/100 of a second, less than a quarter turn of rotation.

    The videos of this product are amazing. Slide a piece of wood into the spinning blade, and it cuts the board just like it should. Slide a hot dog into the spinning blade, and it stops instantly, leaving the frankfurter with nothing more than a nick.

    Here's the spooky part: Stephen Gass tested his product on his own finger! This is a guy who really wanted to close the distance between him and his customers.

    Kinda takes the swagger out of your step for using your own blogging tool.

    Eric's paper is really about software developers and their distance from users. His title, Yours, Mine and Ours, identifies three relationships developers can have with the software they write vis-à-vis the other users of the product. Many of his best points come in the section on UsWare, which is software intended for use by both end users and the developers themselves. Eric is well-positioned to comment on this class of programs, as his company develops a version control tool used by his own developers.

    It's easy for developers to forget that they are not like other users. I know this danger well; as a university faculty member, I need to remind myself daily that I am not like my students, either in profile or in my daily engagement with the course material.

    I like his final paragraph, which summarizes his only advice for solving the ThemWare/UsWare problems:

    Your users have things to say. Stop telling them how great your software is and listen to them tell you how to make it better.

    We all have to remind ourselves of this every once in a while. Sadly, some folks never seem to. Many faculty assume that they have nothing to learn from what their students are saying, but that is almost always because they aren't really listening. Many universities spend so much time telling students why they should come there that they don't have the time or inclination to listen to students say what would make them come.

    I also learned an interesting factoid about State Farm Insurance, the corporate headquarters for which are located down I-74 from Eric's home base of Urbana, Illinois. State Farm is also a major corporate partner of the IT-related departments at my university, including the CS department. They work hard to recruit our students, and they've been working hard to help us with resources when possible. The factoid: State Farm is Microsoft's largest non-government customer. [In my best Johnny Carson imitation:] I did not know that. As a result of this fact, Microsoft has an office in the unlikely location of Bloomington, Illinois.

    Despite an obvious interest in hiring folks with experience using Microsoft tools, State Farm has never pressured us to teach .NET or C# or VisualStudio or any particular technology. I'm sure they would be happy if we addressed their more immediate needs, but I am glad to know that they have left decisions about curriculum to us.

    That said, we are beginning to hear buzz from other insurance companies and banks, most located in Des Moines, about the need for student exposure to .NET. We probably need to find a way to give our students an opportunity to get experience here beyond VB.NET and Office. Where is that link to Mono...


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    February 13, 2006 6:09 PM

    Doing What You Love, University Edition

    Several folks have commented already on Paul Graham's How to Do What You Love. As always, this essay is chock full of quotable quotes. Ernie's 3D Pancakes highlights one paragraph on pain and graduate school. My grad school experience was mostly enjoyable, but I know for certain that I was doing part of it wrong.

    Again as always, Graham expresses a couple of very good ideas in an easy-going style that sometimes makes the ideas seem easier to live than they are. I am thinking about how I as a parent can help my daughters choose a path that fulfills them rather than the world's goals for them, or mine.

    Among the quotable quotes that hit close to home for me were these. First, on prestige as siren:

    Prestige is especially dangerous to the ambitious. If you want to make ambitious people waste their time on errands, the way to do it is to bait the hook with prestige. That's the recipe for getting people to give talks, write forewords, serve on committees, be department heads, and so on. It might be a good rule simply to avoid any prestigious task. If it didn't suck, they wouldn't have had to make it prestigious.

    Ouch. But in my defense I can say that in the previous fourteen years my department had beaten all of the prestige out of being our head. When I came around to considering applying for the job, it looked considerably less prestigious than the ordinary headship. I applied for the job precisely because it needed to be done well, and I thought I was the right person to do it. I accepted the job with a shorter-than-usual review window and with no delusions of grandeur.

    Then, on prematurely settling on a career goal:

    Don't decide too soon. Kids who know early what they want to do seem impressive, as if they got the answer to some math question before the other kids. They have an answer, certainly, but odds are it's wrong.

    From the time I was seven years old until the time I went to college, I knew that I wanted to be an architect -- the regular kind that designs houses and other buildings, not the crazy enterprise integration kind. My premature optimization mostly didn't hurt me. When some people realize that they had been wrong all that time, they are ashamed or afraid to tell everyone and so stay on the wrong path. During my first year in architecture school, when I realized that as much as I liked architecture it probably wasn't the career for me, I was fortunate enough to feel comfortable changing courses of study right away. It was a sea change for me mentally, but once it happened in my mind I knew that I could tell folks.

    I somehow knew that computer science was where I should go. Again, I was fortunate not to have skewed my high school preparation in a way that made the right path hard to join; I had taken a broad set of courses that prepared me well for almost any college major, including as much math and science as I could get.

    One way that my fixation on architecture may have hurt me was in my choice of university. I optimized school selection locally by picking a university with a strong architecture program. When I decided to switch to a CS major, I ended up in a program that was not as strong. I certainly could have gone to a university that prepared me better for CS grad school. One piece of advice that I'll give my daughters is to choose a school that gives you many options. Even if you never change majors, having plenty of strong programs will mean a richer ecosystem of ideas in which to swim. (I already give this advice to students interested in CS grad school, though there are different trade-offs to be made for graduate study.)

    That said, I do not regret sticking with my alma mater, which gave me a very good education and exposed me to a lot of new ideas and people. Most of undergraduate education is what the student makes of it; it's only at the boundaries of high ambition where attending a particular school matters all that much.

    Nor would I have traded my time in architecture school for a quicker start in CS. I learned a lot there that still affects how I think about systems, design, and education. More importantly, it was important for me to try the one thing I thought I would love before moving on to something else. Making such decisions on purely intellectual grounds is a recipe for regret.


    Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

    February 08, 2006 2:23 PM

    Functional Programming Moments

    I've been having a few Functional Programming Moments lately. In my Translation of Programming Languages course, over half of the students have chosen to write their compiler programs in Scheme. This brought back fond memories of a previous course in which one group chose to build a content management system in Scheme, rather than one of the languages they study and use more in their other courses. I've also been buoyed by reports from professors in courses such as Operating Systems that some students are opting to do their assignments in Scheme. These students seem to have really latched onto the simplicity of a powerful language.

    I've also run across a couple of web articles worth noting. Shannon Behrens wrote the provocatively titled Everything Your Professor Failed to Tell You About Functional Programming. I plead guilty on only one of the two charges. This paper starts off talking about the seemingly inscrutable concept of monads, but ultimately turns to the question of why anyone should bother learning such unusual ideas and, by extension, functional programming itself. I'm guilty on the count of not teaching monads well, because I've never taught them at all. But I do attempt to make a reasonable case for the value of learning functional programming.

    His discussion of monads is quite nice, using an analogy that folks in his reading audience can appreciate:

    Somewhere, somebody is going to hate me for saying this, but if I were to try to explain monads to a Java programmer unfamiliar with functional programming, I would say: "Monad is a design pattern that is useful in purely functional languages such as Haskell."

    I'm sure that some folks in the functional programming community will object to this characterization, in ways that Behrens anticipates. To some, "design patterns" are a lame crutch for object-oriented programmers who use weak languages; functional programming doesn't need them. I like Behrens's response to such a charge (emphasis added):

    I've occasionally heard Lisp programmers such as Paul Graham bash the concept of design patterns. To such readers I'd like to suggest that the concept of designing a domain-specific language to solve a problem and then solving that problem in that domain-specific language is itself a design pattern that makes a lot of sense in languages such as Lisp. Just because design patterns that make sense in Java don't often make sense in Lisp doesn't detract from the utility of giving certain patterns names and documenting them for the benefit of ... less experienced programmers.
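    To make Behrens's analogy concrete, here is what a minimal Maybe-style "monad as a design pattern" might look like to a Java programmer. This is my own sketch, using modern Java's java.util.function.Function -- not code from his article:

    ```java
    import java.util.function.Function;

    // A minimal Maybe "monad as a design pattern" in Java -- my own
    // illustration of Behrens's analogy, not code from his article.
    final class Maybe<T> {
        private final T value;                    // null encodes "no value"
        private Maybe(T value) { this.value = value; }

        static <T> Maybe<T> just(T v)  { return new Maybe<>(v); }   // unit/"return"
        static <T> Maybe<T> nothing()  { return new Maybe<>(null); }

        // bind: run f only when a value is present; otherwise stay "nothing".
        <R> Maybe<R> bind(Function<T, Maybe<R>> f) {
            return value == null ? Maybe.nothing() : f.apply(value);
        }

        T orElse(T fallback) { return value == null ? fallback : value; }
    }

    class MaybeDemo {
        // Safe division: the "might fail" plumbing lives in bind, not in callers.
        static Maybe<Integer> div(int n, int d) {
            return d == 0 ? Maybe.nothing() : Maybe.just(n / d);
        }

        public static void main(String[] args) {
            int ok   = Maybe.just(100).bind(x -> div(x, 4)).orElse(-1);  // 25
            int fail = Maybe.just(100).bind(x -> div(x, 0)).orElse(-1);  // -1
            System.out.println(ok + " " + fail);
        }
    }
    ```

    The point of the pattern, as with any design pattern, is that the failure-propagation logic is written once, in bind, instead of being scattered through every caller.
    
    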

    His discussion of why anyone should bother to do the sometimes hard work needed to learn functional programming is pretty good, too. My favorite part addressed the common question of why someone should willingly take on the constraints of programming without side effects when the freedom to compute both ways seems preferable. I have written on this topic before, in an entry titled Patterns as a Source of Freedom. Behrens gives some examples of self-imposed constraints, such as encapsulation, and how breaking the rules ultimately makes your life harder. You soon realize:

    What seemed like freedom is really slavery.

    Throw off the shackles of deceptive freedom! Use Scheme.

    The second article turns the seductiveness angle upside down. Lisp is Sin, by Sriram Krishnan, tells a tale of being drawn to Lisp the siren, only to have his boat dashed on the rocks of complexity and non-standard libraries again and again. But all in all, he speaks favorably of ideas from functional programming and how they enter his own professional work.

    I certainly second his praise of Peter Norvig's classic text Paradigms of AI Programming.

    I took advantage of a long weekend to curl up with a book which has been called the best book on programming ever -- Peter Norvig's Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. I have read SICP but the 300 or so pages I've read of Norvig's book have left a greater impression on me than SICP. Norvig's book is definitely one of those 'stay awake all night thinking about it' books.

    I have never heard anyone call Norvig's book the best of all programming books, but I have heard many folks say that about SICP -- Structure and Interpretation of Computer Programs, by Abelson and Sussman. I myself have praised Norvig's book as "one of my favorite books on programming", and it teaches a whole lot more than just AI programming or just Lisp programming. If you haven't studied it, put it at or near the top of your list, and do so soon. You'll be glad you did.

    In speaking of his growth as a Lisp programmer, Krishnan repeats an old saw about the progression of a Lisp programmer that captures some of the magic of functional programming:

    ... the newbie realizes that the difference between code and data is trivial. The expert realizes that all code is data. And the true master realizes that all data is code.

    I'm always heartened when a student takes that last step, or shows that they've already been there. One example comes to mind immediately: The last time I taught compilers, students built the parsing tables for the compiler by hand. One student looked at the table, thought about the effort involved in translating the table into C, and chose instead to write a program that could interpret the table directly. Very nice.
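    That move can be sketched in miniature. In this toy example (a made-up recognizer for strings matching ab*c, not the student's actual compiler tables), the transition table is plain data, and one generic loop interprets it rather than hand-translating each entry into nested conditionals:

    ```java
    import java.util.Map;

    // A toy table-driven recognizer: the transition table is data that one
    // generic loop "executes". (A made-up miniature, not the student's code.)
    public class TableDriven {
        // state -> (input character -> next state); the accepting state is 2.
        static final Map<Integer, Map<Character, Integer>> TABLE = Map.of(
            0, Map.of('a', 1),
            1, Map.of('b', 1, 'c', 2)
        );

        static boolean accepts(String s) {        // recognizes ab*c
            Integer state = 0;
            for (char ch : s.toCharArray()) {
                state = TABLE.getOrDefault(state, Map.of()).get(ch);
                if (state == null) return false;  // no transition: reject
            }
            return state == 2;
        }

        public static void main(String[] args) {
            System.out.println(accepts("abbc"));  // true
            System.out.println(accepts("abb"));   // false
        }
    }
    ```

    Changing the language recognized means changing the table, not the code -- the data is the program.
    
    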

    Krishnan's article closes with some discussion of how Lisp doesn't -- can't? -- appeal to all programmers. I found his take interesting enough, especially the Microsoft-y characterization of programmers as one of "Mort, Elvis, and Einstein". I am still undecided just where I stand on claims of the sort that Lisp and its ilk are too difficult for "average programmers" and thus will never be adoptable by a large population. Clearly, not every person on this planet is bright enough to do everything that everyone else does. I've learned that about myself many, many times over the years! But I am left wondering how much of this is a matter of ability and how much is a matter of needing different and better ways to teach. The monad article I discuss above is a great example. Monads have been busting the chops of programmers for a long time now, but I'm betting that Behrens has explained them in a way that "the average Java programmer" can understand, and maybe even have a chance of mastering Haskell. I've long been told by colleagues that Scheme was too abstract, too different, to become a staple of our students, but some are now choosing to use it in their courses.

    Dick Gabriel once said that talent does not determine how good you can get, only how fast you get there. Maybe when it comes to functional programming, most of us just take too long to get there. Then again, maybe we teachers of FP can find ways to help accelerate the students who want to get good.

    Finally, Krishnan closes with a cute but "politically incorrect analogy" that plays off his title:

    Lisp is like the villainesses present in the Bond movies. It seduces you with its sheer beauty and its allure is irresistible. A fleeting encounter plays on your mind for a long, long time. However, it may not be the best choice if you're looking for a long term commitment. But in the short term, it sure is fun! In that way, Lisp is...sin.

    Forego the demon temptations of Scheme! Use Perl.

    Not.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

    February 06, 2006 6:35 PM

    Deeper Things Under The Surface

    As a runner, I sometimes fall into the trap of expecting to see progress in my abilities. When I expect to experience a breakthrough every day, or every week, I am sure to be disappointed. First of all, breakthroughs don't happen all that often. In fact, they don't happen often at all, and when they do they seem to come in bunches -- PRs on several different routes at several different distances in the span of a couple of weeks. These spurts often tempt me into thinking that I'll just keep getting better and better!

    But when these breakthroughs occur, whether alone or in spurts, they aren't really the point. What you did that day, or in the previous week, isn't often directly responsible for the breakthrough. The big gains happen outside conscious sight. After years as a casual runner, I saw my first big improvements in speed and stamina only after many, many months of slow increases in my daily and weekly mileage. I hadn't done anything spectacular over those months, just routine miles, day after day. This is what the gurus call 'building my aerobic base'. Deeper things were happening under the surface.

    I think that this sort of thing happens when we learn, too. Can I memorize facts and be a whole lot smarter tomorrow than today? Maybe, but perhaps only for a short time. While cramming doesn't work very well for running, it may help you pass a fact-based final exam. But the gain is usually short term and, more important, if you care about your professional competence, it doesn't result in a deep understanding of the area. That comes only after time, many days and weeks and months of reading and thinking and writing. Those months of routine study are the equivalent of 'building your mental base'. Eventually, you come to understand the rich network of concepts of the area. Sometimes, this understanding seems to come in a flash, but most of the time you just wake up one day and realize that you get it. You see, deeper things are happening under the surface.

    I think this is true when mastering programming or a new programming style or language, too. Most of us can really grok a language if we live with it for a while, playing with it and pushing it and having it talk back to us through the errors we make and the joy and pain we feel writing new code and changing old code. Students don't always realize this. They try to program a couple of days a week, only to be disappointed when these episodes don't take them closer to being a Master. They could, if programming became part of their routine, if they gave time and contact a chance to do their magic. Deeper things can happen under the surface, but only if we allow them to.

    "Deeper things under the surface" is a catchphrase I borrow from an article of that name by Ron Rolheiser which talks about this phenomenon in human relationships. After a few months in my department's headship, I can see quite clearly how Rolheiser's argument applies to the relationship between a manager and the people for whom he works. We can all probably see how it applies to our relationships with friends, family, children, spouses, and significant others. I have to work over the long term to build relationships through contact. "Quality time" is a fine idea, and important, but when it becomes a substitute for sufficient quantity over sufficient time, it becomes a meaningless slogan.

    But I think this applies to how we develop our skills. Just as in relationships, routine contact over time matters. In this case, absence may make the heart grow fonder, but it doesn't make the heart stronger. The cliche that better captures reality is "out of sight, out of mind".

    A lot of techie students aren't comfortable with the sentiment I'm expressing here, but consider this quote from Rolheiser's article:

    What's really important will be what's growing under the surface, namely, a bond and an intimacy that's based upon a familiarity that can only develop and sustain itself by regular contact, by actually sharing life on a day-to-day basis.

    It may be sappy, but that's pretty much how I have always felt about the programming languages and topics and research problems that I mastered -- and most enjoyed, too.


    Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Running, Teaching and Learning

    February 02, 2006 3:47 PM

    Java Trivia: Unary Operators in String Concatenation

    Via another episode of College Kids Say the Darnedest Things, here is a fun little Java puzzler, suitable for your compiler students or, if you're brave, even for your CS1 students.

    What is the output of this snippet of code?

            int price = 75;
            int discount = -25;
            System.out.println( "Price is " + price + discount + " dollars" );
            System.out.println( "Price is " + price - discount + " dollars" );
            System.out.println( "Price is " + price + - + discount + " dollars" );
            System.out.println( "Price is " + price + - - discount + " dollars" );
            System.out.println( "Price is " + price+-+discount + " dollars" );
            System.out.println( "Price is " + price+--discount + " dollars" );
    

    Okay, we all know that the second println causes a compile-time error, so comment that line out before going on. After you've made your guesses, check out the answers.

    Many students, even upper-division ones, are sometimes surprised that all of the rest are legal Java. The unary operators + and - are applied to the value of discount before it is appended to the string.

    Even some who knew that the code would compile got caught by the fact that the output of the last two printlns is not identical to the output of the middle two. These operators are self-delimiting, so the scanner does not require that they be surrounded by white space. But in the last line, the scanner is able to match the single token -- (the unary operator for pre-decrement) rather than two unary - operators, and so it does. This is an example of how most compilers match the longest possible token whenever they have the choice.

    So whitespace does matter -- sometimes!
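    For the record, here is the snippet as a complete program, with the erroneous line commented out and the outputs I get noted in comments:

    ```java
    // The puzzle as a runnable program. Note the last two lines: the scanner
    // tokenizes price+-+discount as four tokens (+, -, +, discount) but
    // price+--discount as three (+, --, discount), so -- pre-decrements.
    public class UnaryPuzzle {
        public static void main(String[] args) {
            int price = 75;
            int discount = -25;
            System.out.println("Price is " + price + discount + " dollars");     // Price is 75-25 dollars
            // System.out.println("Price is " + price - discount + " dollars");  // error: no '-' on String
            System.out.println("Price is " + price + - + discount + " dollars"); // Price is 7525 dollars
            System.out.println("Price is " + price + - - discount + " dollars"); // Price is 75-25 dollars
            System.out.println("Price is " + price+-+discount + " dollars");     // Price is 7525 dollars
            System.out.println("Price is " + price+--discount + " dollars");     // Price is 75-26 dollars
        }
    }
    ```
    
    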

    This turned into a good exercise for my compiler students today, as we just last time finished talking about lexical analysis and were set to talk about syntax analysis today. Coupled with the fact that they are in the midst of writing a scanner for their compiler, we were able to discuss some issues they need to keep in mind.

    For me, this wasn't another example of Why Didn't I Know This Already?, but in the process of looking up "official" answers about Java I did learn something new -- and, like that lesson, it involved implicit type conversions of integral types. On page 27, the Java Language Reference says:

    The unary plus operator (+) ... does no explicit computation .... However, the unary + operator may perform a type conversion on its operand. ... If the type of the operand is byte, short, or char, the unary + operator produces an int value; otherwise the operator produces a value of the same type as its operand.

    The unary - operator works similarly, with the type conversion made before the value is negated. So, again, an operand is promoted to int in order to do arithmetic. I assume that this is done for the same reason that the binary operators promote bytes, shorts, and chars.

    Like many CS professors and students, I enjoy this sort of language trivia. I don't imagine that all our students do. If you'd like to see more Java trivia, check out Random Java Trivia at the Fishbowl. (You gotta love the fact that you can change the value of a string constant!) I've also enjoyed thumbing through Joshua Bloch's and Neal Gafter's Java Puzzlers. I am glad that someone knows all these details, but I'm also glad not to have encountered most of them in my own programming experience.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    January 30, 2006 5:51 PM

    Making Things Worse in the Introductory Course

    Reading Physics Face over at Uncertain Principles reminded me of a short essay I wrote a few months ago. Computer scientists get "the face", but for different reasons. Fewer people take computer science courses, in high school or college, so we don't usually get the face because of bad experiences in one of our courses. We get the face because people have had bad experiences using technology.

    (On the flip side, at least physicists don't have to listen to complaints like "Gravity didn't work for me yesterday, but it seems to be working okay right now. Why can't you guys make things work all the time?" or "Why are there so many different ways I can exert force on an object? That's so confusing!")

    But the main thrust of Chad's article struck a chord with me. A physics education group at the University of Maryland at College Park found that introductory physics courses cause student expectations about physics -- about the nature of physics as an intellectual activity -- to deteriorate rather than improve! In every group, students left their intro physics thinking less like a physicist, not more.

    I know of no such study of introductory CS courses (if you do, please let me know), but I suspect that many of our courses do the same thing. For students who leave CS 1 or CS 2 unhappy or with an inaccurate view of the discipline, their primary image of computing is an overemphasis on programming drudgery. I've written several times here in the last year or so about how we might make our intro courses more engaging -- make them about something, more than "just programming" -- via the use of engaging problems from the "real world", maybe even with a domain-specific set of applications. I notice that Owen Astrachan and his colleagues at Duke are presenting a paper at SIGCSE in early March on using the science of networks as a motivating centerpiece for CS 1. Whatever the focus, we need to help students see that computing is about concepts bigger than a for-loop. In the 1980s, we saw a "breadth-first" curriculum movement that aimed to give students a more accurate view of the discipline in their first year or two, but it mostly died out from lack of interest -- and the logistical problem that students do need to master programming before they can go very far in most CS programs.

    I don't have any new answers, but seeing that physics has documented this problem with their introductory courses makes me wonder even more about the state of science education at the university.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    January 28, 2006 2:12 PM

    Reading Skills and Practice

    Few people would try to run a marathon without training for lots and lots of miles. Few would imagine that they could make their high school basketball team without playing lots of ball and working on their skills. And I doubt any of us think that we could earn a spot in the local symphony by noodling on an instrument for five or ten minutes a day.

    Yet it seems that many grade school students expect to succeed in school despite the habit of reading 10 minutes a day or less.

    A UNI colleague from the College of Education recently shared this nugget from research into children's literacy. The article he cited (Keith Stanovich, "Matthew Effects in Reading", Reading Research Quarterly, 21:360-406, 1996) reported that fewer than 10% of sixth graders read as much as 40 minutes per day on average, and 50% of them read only 10 minutes per day.

    So what? That paper is over nine years old now, right? Well, first of all, I doubt that the trend in the last decade has been for students to read significantly more. The trends in television viewing (four hours per day?!) and computer usage have almost certainly prevented students from reading more, and today's students may well read less.

    Second, and most immediate to a university professor, those sixth graders are now college students.

    How can anyone become sufficiently skilled at reading with so little practice?

    I spent more time daily as a sixth grader shooting baskets than that. No athletic coach would let her players do so little drill work on the fundamental skills of the sport. My piano teacher tells me that I should practice at least 30 minutes a day if I want to make more than glacial progress.

    And I certainly want my students to get more practice programming than that. I know that it doesn't always happen, but in some of my courses -- for example, programming languages, with weekly assignments, and compilers, with the large team project that keeps them steadily busy throughout the semester -- students have an opportunity to practice daily, or nearly so. My advice to students sounds a lot like what piano teachers tell their students: program a little bit daily, and develop habits that will serve you well the rest of your career.

    But that's programming. Many writers and writing teachers tell folks the same thing about writing... Write daily. Create habits. But do we tell our students the same thing about reading?

    One consequence of the difference in reading habits and skills among sixth graders is that, over time, the gap in reading skills and academic performance between the better readers and the poorer readers grows larger. This is sometimes called the Matthew effect, as in the title of Stanovich's paper. This name alludes to a verse in the Christian Bible, Matthew 25:29, which says, "For to everyone who has, more will be given and he will grow rich; but from the one who has not, even what he has will be taken away." Sadly, bad reading habits as a youngster tend to compound over time, and the usurious interest rate that our kids pay is a huge handicap in the classroom and workplace.

    How can anyone expect to succeed in college without being able to read really well?

    I doubt some students ever think about it much. When some do, I'm sure they figure they can just "get by" as they always have. Later, when school suddenly seems to be more difficult, they decide they don't like school and aren't cut out for the nerdy stuff.

    On the flip side, I think that many students think that they don't have to be able to read well to do computer science anyway. How wrong they are. College-level computing texts contain complex ideas, written in a technical jargon that can feel like a foreign language to newcomers. If you can't read the textbook I assign, or the lecture notes I write, then you are left to understand ideas and implement them in code with little support.

    The guy who cited the statistics I mentioned above went on to say, in an e-mail to an open discussion list,

    I'm afraid that many of our students have read so little that they simply do not have the experience to cope with the textbooks used in our courses. Many of them do not choose to read except for what informational reading is necessary for daily survival.

    The implication for us as faculty is that we cannot expect students to succeed in our courses based on the reading skills they bring with them. Some faculty suggest that we should simply fail the ones who can't make the grade -- and the numbers say that that could be half or more of them. An alternative is that we have to teach them to read, at least how to read, understand, and write the material in our discipline. We will have to define words for them, both technical jargon and words that are likely outside their working vocabularies. Really, though, that's just an extension of what we already do, which is to explain the ways computer scientists and software developers see the world and think about problems.

    Most CS professors know that we have to show students examples of how to write -- programs they can read and then emulate in their own code. These programs grow in size and complexity throughout their careers as students and developers. My education colleague suggests that we might do the same sort of thing for reading, say, by talking aloud while 'reading' -- "stating what you are looking for, reading passages aloud and voicing important points, then stating questions these points raise in your mind; and voicing your expectations of what might come next".

    I do things like this with programs in class. In my compiler course, I did a low-grade version of this in our second week, working through a simple compiler that I wrote a couple of years ago. Some of my questions about the code challenged my own design style, which could well have been more object-oriented than it was. I've not often considered doing this with the textbook, though I seem to run across suggestions to do so every few years.

    (I have done this in my AI course with a journal paper -- Alan Turing's Computing Machinery and Intelligence. It is usually a great success, because students really get into Turing's claims and counterarguments.)

    Find x in this geometry problem.

    A friend sent me this cartoon yesterday, and I laughed. But it's less funny if we think that our students may be unable to read deeply enough to understand the problems we set before them, or to comprehend the textbook that augments our lectures in teaching students how to solve problems. Rather than dumbing down our courses or our textbooks, we may have to take new interest in teaching our students to read -- and not just code. The alternative is to "maintain high standards" and depopulate our majors at a time when our country and world desperately need more citizens educated in computing or at least prepared to live, vote, and make decisions in an increasingly technological world.

    "Find x." Okay, it's still funny.


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    January 25, 2006 6:07 PM

    Camouflage in Computer Science

    Camouflage Conference poster

    A couple of months ago, I mentioned that I was submitting a proposal to speak at a local conference on camouflage. The conference is called "CAMOUFLAGE: Art, Science and Popular Culture" and will be held at my university on April 22. It is being organized by Roy Behrens, a graphic design professor here about whom I wrote a bit in that old entry. My proposal was accepted. You can see a list of many of the conference speakers on the promotional poster shown here. (Click on the image to see it full size.)

    Behrens has attracted speakers from all over the world, despite no financial support for anyone. He recently announced that Marvin Bell, the first Poet Laureate of Iowa, will open the conference by reading a new poem about camouflage, written especially for the event, titled "Dead Man". I've enjoyed hearing poets at CS conferences before, most recently Robert Hass at OOPSLA, but usually they've been invited to speak on creativity or some other "right-brain" topic. I've never been at a conference with a world-premiere poetry reading... It should be interesting!

    A conference on camouflage run out of a graphic design program might seem an odd place for a computer science professor to speak, but I thought of proposing a talk almost as soon as I heard about the conference. Computer scientists use camouflage, too, but with a twist -- as a way to transmit a message without anyone but the intended recipient being aware that a message exists. This stands in contrast to encryption, a technique for concealing the meaning of a message even as the message may be publicly known. I've studied encryption a bit in the last couple of years while preparing for and teaching an undergraduate course in algorithms, but I've not read as much on this sort of "computational camouflage", known more formally as steganography.
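
    To make the idea concrete, here is a toy sketch of one classic steganographic technique, least-significant-bit (LSB) embedding. This is my own illustration, not anything from the conference talk: each bit of a secret message overwrites the low-order bit of a "pixel" byte, a change far too small to see in a real image.

```python
def embed(pixels: bytes, message: bytes) -> bytes:
    """Hide message bits in the low-order bit of successive pixel bytes."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # change only the low bit
    return bytes(out)

def extract(pixels: bytes, length: int) -> bytes:
    """Read `length` hidden bytes back out of the low-order bits."""
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[j:j + 8]))
        for j in range(0, len(bits), 8)
    )

cover = bytes(range(256)) * 4        # stand-in for raw image pixel data
stego = embed(cover, b"hidden")
assert extract(stego, 6) == b"hidden"
```

    A real steganographic system would scatter the bits pseudorandomly and encrypt the payload first, but even this sketch shows the core trick: the message hides in plain sight, indistinguishable from pixel noise.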

    This is not an area of research for me, at least yet, but it has long been an idea that intrigues me. This audience isn't looking for cutting-edge research in computer science anyway; they are more interested in the idea of hiding things via patterns in their surroundings. This conference affords me a great opportunity to learn more about steganography and other forms of data hiding -- and teach a non-technical audience about it at the same time. If you have ever taught something to beginners, you know that committing to teach a topic forces you to understand it at a deeper level than you might otherwise be able to get away with. For me, this project will be one part studying computer science, one part educating the public, and one part learning about an idea bigger than computer science -- and where CS fits into the picture.

    I have titled my talk NUMB3RS Meets The DaVinci Code: Information Masquerading as Art. (I'm proud of that title; I hope it's not too kitschy...) I figure I'll show plenty of examples, in text and images and maybe even music, and then relate steganography to the idea of camouflage more generally.

    I also figure that I will have a lot of fun writing code!


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    January 21, 2006 2:50 PM

    Golden Rules, and a Favorite Textbook

    What of Tanenbaum's talk? Reading through his slides reminded me a bit of his talk. He titled his address "Ten Golden Rules for Teaching Computer Science", and they all deserve attention:

    1. Think long-term.
    2. Emphasize principles, not facts.
    3. Expect paradigm shifts.
    4. Explain how things work inside.
    5. Show students how to master complexity.
    6. Computer science is not science.
    7. Think in terms of systems.
    8. Keep theory under control.
    9. Ignore hype.
    10. Don't forget the past.

    Items 1-3 and 9-10 aim to keep us professors focused on ideas, not on the accidents and implementations of the day. Those accidents and implementations are the examples we can use to illustrate ideas at work, but they will pass. Good advice.

    Items 4-5 and 7 remind us that we and our students need to understand systems inside and out, and that complexity is an essential feature of the problems we solve. Interfaces, implementations, and interactions are the "fundamentals".

    Items 6 and 8 reflect a particular view of computing that Tanenbaum and many others espouse: computer science is about building things. I agree that CS is about building things, but I don't want us to discount the fact that, as we build things and study the results, we are in fact laying a scientific foundation for an engineering discipline. We are doing both. (If you have not yet read Herb Simon's The Sciences of the Artificial, hie thee to the library!) That said, I concur with Tanenbaum's reminder to make sure that we apply theory in a way that helps us build systems better.

    I especially liked one slide, which related a story from his own research. One of his students implemented the mkfs program for MINIX using a complex block caching mechanism. The mechanism was so complex that they spent six months making the implementation work correctly. But Tanenbaum estimates that this program "normally runs for about 30 sec[onds] a year". How's that for unnecessary optimization!

    The other part of the talk I liked most was his pairwise comparison of a few old textbooks, to show that some authors had captured and taught timeless principles, while others had mostly taught the implementations of the day. He held up as positive examples Per Brinch Hansen's operating systems text and John Hayes's architecture text. I immediately thought of my favorite data structures book ever, Thomas Standish's Data Structure Techniques.

    I am perhaps biased, as this was the textbook from which I learned data structures as a sophomore in college. In the years that have followed, many, many people have written data structures books, including Standish himself. The trend has been to make these books language-specific ("... in C", "... using Java") or to have them teach other content at the same time, such as object-oriented programming or software engineering principles. Some of these books are fine, but none seem to get to the heart of data structures as well as Standish did in 1980. And this book made me feel like I was studying a serious discipline. Its dark blue cover with spare lettering; its tight, concise text; its small, dense font; its mathematical notation; its unadorned figures... all communicated that I was studying something real, a topic that mattered. I loved writing programs to make its trees bloom and its hash tables avoid collisions.

    A valuable result of the textbook expressing all algorithms using pseudocode is that we had to learn how to write code for ourselves. We thought about the algorithms as algorithms, and then we figured out how to make PL/I programs implement them. (Yes, PL/I. Am I old yet?) We finished the course with both a solid understanding of data structures and a fair amount of experience turning ideas into code.

    Reading through someone's presentation slides can be worth the time even if they can't recreate the talk they shadow.

    Postscript: Who, to my surprise, has a CS2-related paper in the most recent issue of inroads: The SIGCSE Bulletin? Thomas Standish. It discusses a fast sorting algorithm that runs in O(n) time for certain kinds of data sets. I haven't studied the paper yet, but I do notice that the algorithm is given in... Java. Oh, well.
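
    I haven't read Standish's algorithm yet, so the following is not it -- just a reminder of how sorting can beat the O(n log n) comparison bound when the inputs are restricted. Counting sort tallies occurrences of small integer keys instead of comparing elements, giving O(n + k) time for keys in [0, k]:

```python
def counting_sort(keys, max_key):
    """Sort integers in the range [0, max_key] in O(n + max_key) time."""
    counts = [0] * (max_key + 1)
    for k in keys:                  # one pass to tally each key
        counts[k] += 1
    # emit each key as many times as it occurred
    return [k for k in range(max_key + 1) for _ in range(counts[k])]

assert counting_sort([3, 1, 4, 1, 5, 9, 2, 6], 9) == [1, 1, 2, 3, 4, 5, 6, 9]
```

    No comparisons at all -- which is exactly why the technique only works when the key space is known and small.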


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    January 21, 2006 2:38 PM

    On Presentations, Slides, and Talks

    Someone on the SIGCSE mailing list recently requested a reference for a presentation at a past conference that had suggested we teach concepts that had "staying power". He had looked through past proceedings for the paper with no success.

    It turns out there wasn't a paper, because the presentation had been the 1997 keynote talk by Andrew Tanenbaum, who that year received the SIGCSE award for Outstanding Contributions to Computer Science Education. Fortunately, Prof. Tanenbaum has posted the slides of his talk on his web site.

    Of course, reading Tanenbaum's presentation slides is not the same experience at all as hearing his talk as a live performance. Whenever I come across a conference proceedings, I run through the table of contents to see what all happened at the conference. The titles of the keynote addresses and invited talks always look so inviting, and the speakers are usually distinguished, so I turn to the listed page for a paper on the topic of the presentation... only to find at most a one-page abstract of the talk. Sometimes there is no page number at all, because the proceedings carry no other record of the talk.

    This has made me appreciate very much those invited speakers who write a paper to accompany their talks. Of course, reading a paper is not the same experience at all as hearing a talk live, either. But written text can say so much more than the cute graphics and bullet points that constitute most speakers' presentation slides. And for a talk that is done right -- such as Alan Kay's lectures at OOPSLA 2004 -- the presentation materials are so dynamic that the slides convey even less of the talk's real value. (The best way Alan could share his talk materials would be to make the Squeak image he used available for download!)

    I think that this is why I like to write such complete notes for the talks I attend, to capture as best I can the experience and thoughts I have in real-time. Having a blog motivates me, too, as it provides a distribution outlet that justifies doing the job even better.

    This is also why I like to write detailed lecture notes, à la book chapters, for my courses. I write them as much for me as for my students, though the students give me an immediate reason to write and receive what I hope is a substantial benefit.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    January 16, 2006 12:46 PM

    Chairing Tutorials for OOPSLA 2006

    OOPSLA 2006 Long Logo

    After chairing the OOPSLA Educators' Symposium in 2004 and 2005, I've been entrusted with chairing the tutorials track at OOPSLA 2006. While this may not seem to have the intellectual cachet of the Educators' Symposium, it carries responsibility for a major financial effect on the conference. If I had screwed up an educators' event, I would have made a few dozen people unhappy. If I screw up the tutorials track, I could cost the conference tens of thousands of dollars!

    The call for tutorial proposals is out, with a deadline of March 18. My committee and I will also be soliciting a few tutorials on topics we really want to see covered and from folks we especially want to present. We'd like to put together a tutorial track that does a great job of helping software practitioners and academics get a handle on the most important topics in software development these days, with an emphasis on OO and related technologies. In marketing terms, I think of it as exciting the folks who already know that OOPSLA is a must-attend conference and attracting new folks who should be attending.

    I'd love to hear what you think we should be thinking about. What are the hottest topics out there, the ones we should all be learning about? Is there something on the horizon that everyone will be talking about in October? I'm thinking not only of the buzzwords that define the industry these days, but also of topics that developers really need to take their work to another level.

    Who are the best presenters out there, the ones we should be inviting to present? I'm thinking not only of the Big Names but also of those folks who simply do an outstanding job teaching technical content to professionals. We've all probably attended tutorials where we left the room thinking, "Wow, that was good. Who was that guy?"

    One idea we are considering this year is to offer tutorials that help people prepare for certifications in areas such as Java and MCSD. Do you think that folks could benefit from tutorials of this sort, or is it an example of trying to do too much?

    Trying to keep a great conference fresh and exciting requires a mix of old ideas and new. It's a lot like revising a good course... only in front of many more eyes!


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    January 14, 2006 3:29 PM

    Just a Course in Compilers

    Soon after I arrived at UNI, my colleague Mahmoud Pegah and I were given the lead in redesigning the department's CS curriculum. We were freshly-minted Ph.D.s with big dreams. We designed a curriculum from scratch based on the ACM's Computing Curricula 1991 guidelines. In a move of perhaps subconscious rebellion, we called one of the upper-level electives "Translation of Programming Languages". This course sat in the position usually occupied by the traditional course in compilers, but at the time we thought that name too limiting. After all, we were Smalltalkers, and our programming environment used a blend of compilation and interpretation, a powerful VM, and all sorts of language processing elements. We were also Unix guys, and the Unix world encourages stepwise transformation of data through pipes made up of simple processors.

    Ever since, we have had to explain the name "Translation of Programming Languages", because no one knows what it means. When we say, "Oh, that's like a course in compilers", everyone nods in satisfaction. Most then say, "Does anyone still write compilers any more?"

    But I still think that our name choice better reflects what that course should be about, and why it is still important as more than just a great programming experience. The rise of refactoring as a standard programming practice over the last 5-7 years has created a corresponding need for refactoring tools, and these tools rely on well-known techniques from programming languages and compilers. I'm certainly glad that the folks behind IntelliJ IDEA and Eclipse learned how to write language-processing tools somewhere.

    This semester, I am teaching the "compiler course", Translation of Programming Languages. We finally have this course back in the regular rotation of courses we offer, so I get to teach it every so often. (We last offered it in Fall 2003, but we plan to offer it every third semester for the foreseeable future.) I am probably more excited than my students!

    Wirth's Compiler Construction text

    I thought long and hard choosing a textbook. To be honest, I would really have liked to use Niklaus Wirth's Compiler Construction. I love small books with big lessons, and Wirth didn't waste any words or space writing this book. In 173 pages, he teaches us how to build a compiler from beginning to end -- and gives us the full source of his compiler, describes a RISC architecture of his own design for which his compiler generates code, and gives us full source code for a simulator of the architecture. Boom, boom, boom. Of course, he doesn't have a lot of time for theory, so he covers many ideas only at a high level and moves quickly to practical issues.

    Why not choose this book? Well, I had two misgivings. First, I would have to supplement his book with a lot of outside material, both my own lecture notes and other papers. That's not a major problem, as I tend to write up extensive lecture notes, and I rarely follow big textbooks all that closely anyway. But the real killer was when I went to Amazon to check out the book's availability and saw:

    4 used & new available from $274.70

    We may be able to get by on four copies, but... $274.70?

    The standard text is, of course, the Dragon book. There are plenty of copies available at Amazon, and they run a relatively svelte $95.18. (The price of textbooks these days is a subject for another blog entry, another day.) But I have always felt that the Dragon book is a bit too much for juniors and many seniors, who are my primary audience in this course. I do not think that I am contributing to the dumbing down of our curriculum by not using this classic text; indeed, I will draw much of my lecture material from my experiences with this book. But a 15-week course requires some focus, and I don't think that most of our undergraduates will get as much from the course as they might if they get lost in the deep swirls of some of Aho, Sethi, and Ullman's chapters.

    I finally settled on Louden's Compiler Construction: Principles and Practice, with which I have had no prior experience. It seems a better choice for my students, one they might be able to read and learn something from. We'll see.

    I learned a few lessons teaching this course in Fall 2003. One is: less content. If I try to cover even a significant fraction of what we know about scanning, parsing, static analysis, code generation, and optimization, my students won't get a chance to experience building a compiler from beginning to end. This robs them not only of understanding the compiler material at a deeper level but also of the occasional pain and ultimate triumph of building a non-trivial program that works. A 15-week course requires focus.

    A 15-week course in translating programming languages also requires a relatively small source language. In my previous offering, the language the students compiled was simply too big, but in the wrong ways. You don't learn much new from making your scanner and parser recognize a second or third repetition construct; you mostly find yourself just doing more grunt work. I'd rather save that time to get deeper into a later stage of the compiler, or to discuss the notion of syntactic abstractions and how to preprocess a second repetition construct away.

    That said, I do want students to do some of the grunt work. I want them to build their own scanners and parsers. Sure, we use parser generators to write these components of most compilers these days, but I want my students to really understand how some of these techniques work and to see that they can implement them and make them fly. I remember the satisfaction I felt when I wrote my first LL parser and watched it recognize a simple sorting program's worth of tokens.
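
    For readers who haven't written one, the shape of such a parser is worth seeing. Here is a minimal sketch -- not the course's actual assignment, and in Python rather than the students' implementation language -- of a recursive-descent LL(1) parser-evaluator for arithmetic expressions, with one function per grammar rule:

```python
import re

def tokenize(src):
    """Split an expression into number and operator tokens."""
    return re.findall(r"\d+|[+*()]", src)

def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(tok=None):
        nonlocal pos
        if tok is not None and peek() != tok:
            raise SyntaxError(f"expected {tok}, saw {peek()}")
        pos += 1
        return tokens[pos - 1]

    def expr():                      # expr -> term ('+' term)*
        value = term()
        while peek() == "+":
            eat("+")
            value += term()
        return value

    def term():                      # term -> factor ('*' factor)*
        value = factor()
        while peek() == "*":
            eat("*")
            value *= factor()
        return value

    def factor():                    # factor -> NUMBER | '(' expr ')'
        if peek() == "(":
            eat("(")
            value = expr()
            eat(")")
            return value
        return int(eat())

    result = expr()
    if pos != len(tokens):
        raise SyntaxError("trailing input")
    return result

assert parse(tokenize("2+3*(4+1)")) == 17
```

    The grammar's structure maps directly onto the call structure -- the one-token lookahead in peek() is all the parser ever needs, which is what makes it LL(1). Watching a loop like that correctly recognize a real program is the satisfaction I mean.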

    Last time I used a source language I home-brewed from a colleague's course. This time, I am going to have my students process a subset of Oberon, based in part on Wirth's now out-of-print book. It strikes a nice balance between small enough and large enough, and has a nice enough syntax to work with for a semester.

    Now that I have administrative duties, I teach only one course a semester. The result is that I get even more psyched about the course, because it is my best chance to get my hands dirty in real computer science during the term, to think deeply in the discipline. It also gives me a chance to write code. The thing I missed most in my first semester as head was having more time to program. This course offers even more: a chance to get back to a recently-dormant project, a refactoring tool for Scheme. So, I'm psyched.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    December 28, 2005 10:31 PM

    Agile as Students, but Not Always as Programmers

    I've noticed an interesting disconnect between student behavior when in the role of student and when in the role of software developer.

    When they are in the role of developer, students often fight the idea of XP's small steps: breaking tasks into small, estimable steps, writing a test for each small step, and then making sure the code passes the test before thinking about the next step. They want to think ahead, trust their experience and "intuition", write a lot of code -- and only then compile, run, and test the code. The testing is usually sloppy or, more charitably, incomplete -- at least in part because they are champing at the bit to move on to more code.

    Is this merely a matter of habit they have developed in past CS courses? Or have their years in the K-12 educational system encouraged them to rush headlong into every problem? Perhaps it is our natural human state.

    ... but when in the role of student, students tend to behave much differently. They want feedback -- now. When they turn in an assignment, they want the graded result as soon as possible.

    I used to have a serious Grading Avoidance Problem, and all my students disliked it. Students who were otherwise quite happy with my course became cranky when I fell behind in returning assignments. Students who were unhappy with the course for other reasons, well, they became downright hostile.

    I'm mostly over this problem now, though I have to be alert not to backslide. Like a recovering addict, I have to face each day anew with resolve and humility. But I have a colleague for whom this issue is still a major problem, and it creates difficulties for him with his students.

    I can't blame students for wanting graded items back quickly. Those grades are the most reliable way they have of knowing where they stand in the course. Students can use this feedback to make all sorts of decisions about how and how much to study. (I do wish that more students paid more attention to the substantive feedback on their assignments and tried to use that information to improve their technique, to guide decisions about what to study.)

    So: students like frequent feedback to guide their studies.

    Many students also seem to prefer small, discrete, and detailed tasks to work on. This is especially true of students who are just learning to program, but I also see it in juniors or seniors. Many of these upper-division students do not seem to have developed confidence in their ability to solve even medium-sized problems. But when they are given a set of steps that has already been decomposed and ordered, they feel confidence enough to get the job done. They are industrious workers.

    Confessions of a Community College Dean captured my feeling about this feature of today's students when it said, "As a student, I would have been insulted by this approach. But they aren't me." I myself enjoy tackling big problems, bringing order to an unruly mass of competing concerns. Had I always been spoon-fed toy problems already broken into nearly trivial pieces, I wonder if I would have enjoyed computer science as much. I suspect I might have because, like many people who are attracted to CS, I like to create my own problems!

    So: students like to work on small, well-developed tasks whose boundaries they understand well. This, too, helps students focus their study.

    My net acquaintance Chad Orzel, a physicist at Union College, speculates on why students prefer to work in this way. The conventional wisdom is that working on many small, discrete tasks encourages them to keep up with their reading. But I think he is right when he takes this diagnosis one step further: This style of course helps students to compensate for poor time management skills. Larger, less well-defined units require students to figure out what the subtasks are, estimate how long each will take, and then actually do them all in a timely fashion. By structuring our courses as a set of smaller, discrete tasks, we do much of the preparatory work for our students. When students are first learning, this is good, but as they grow (or should be growing) we are merely reinforcing bad habits.

    It seems that we professors are enablers in a system of codependency. :-)

    Even in this regard, the relationship between student as software developer and student as student holds. As I have written before, software methodologies are self-help systems. Perhaps so are the ways we structure problems and lectures for our students.

    Once again, I can't blame students for preferring to work on small, discrete, well-defined tasks. Most people work better under these circumstances. Even those of us who love the challenge of tackling big problems usually succeed by taming the problem's complexity, reducing it to a system of smaller, independent, interacting components. That's what design is.

    Students need to learn how to design, too. When students are getting started, professors need to scaffold their work by doing much of the design for them. Then, as students increase in knowledge and skill, we need to pull the scaffolding away slowly and let students do more and more of their own design. It's easy for professors to fall into the habit of finely specifying student tasks. But in doing this thinking for them, we deny them the essential experience of decomposing tasks for themselves.

    Maybe we can leverage the agile-facing side of student behavior in helping them to develop agile habits as software developers. People aren't always able to change their behavior in one arena when they come to see that it is inconsistent with their behavior in another; we aren't wired that way. But knowing ourselves a little better is a good thing, and creates one more opportunity for our mind to grow.

    (I doubt going the other way would work very well. But it might take away some of the pressure to grade work quickly!)


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    December 23, 2005 10:19 AM

    On the Popularity of Chess

    My only non-professional blog category is on running. That reflects one of my primary personal interests in the time since I started blogging a year and a half ago. If I had started blogging twenty-five years ago, the extra category would have been on chess.

    Play it again, Sam.

    From grade school into college, but especially in high school, I played a lot of chess, more than anyone I knew. I read chess books, I played chess variants, and I surrounded myself with the miscellanea of the chess world. Chess didn't seem especially popular in the late 1970s and early 1980s, but we were still in the midst of the Fischer boom, which had created a resurgence in the popularity of chess in America. The headiness of the Fischer boom years eventually passed. Independently, I got busy at college with computer science (and girls) and had less and less time to play. But I still love the game.

    A recent article in the New York Times talks about the further decline of American chess. The article's author, Jennifer Shahade, is a former U.S. women's champion and one of only a few young native-born Americans to have accomplished much in the world of chess over the last decade. There are many great minds who still play chess in the U.S. when they are young, but they are pulled toward more attractive -- and lucrative -- endeavors as they get older. Shahade points to poker, which has undergone a massive boom in popularity over the last decade, as a source of possible ideas for saving chess from its decline.

    Her suggestions are reasonable goals accompanied by simple strategies for reaching them. The chess world needs to offer ways for adults to learn the game effectively on-line and to promote the sporting, competitive element of chess. (And I can support Shahade's claim that a long tournament game of chess is much more tiring than many physical activities.) But ultimately the key is finding a way to make chess seem cool and exciting again. A breakthrough on the world stage by a player like Hikaru Nakamura could turn the trick, but it's hard to engineer that sort of event.

    Others interested in promoting chess have adopted more, um, salacious methods. Consider the World Chess Beauty Contest, reported in another NYT article on the same day as Shahade's. The WCBC tries to draw people -- well, at least teenage boys -- to chess by focusing on the many beautiful young women chess players around the world. When you are looking at pictures of these young ladies, just don't forget this: most of them are really strong players who can easily defeat the vast majority of chessplayers in the world. But, for the most part, they are not competitive with the very best women players in the world, let alone the top men.

    Carmen Kass plays chess

    Still, the thought that supermodel Carmen Kass is an avid chess player, was president of the Estonian Chess Federation last year, and is dating German grandmaster Eric Lobron makes me secretly happy. (The above picture is from the second NYT article and shows Kass playing speed chess with Indian super-grandmaster Viswanathan Anand, the world's #2 player.)

    Shahade talks about how we could heighten interest in chess tournaments by making them more thrilling, more immediate. Chess tournaments are usually arranged as round-robin or Swiss system affairs, neither of which tends to create do-or-die situations that heighten in intensity as the tournament progresses. In contrast, consider U.S. college basketball's March Madness -- and then imagine what it would be like as a round-robin. Boring -- and much less variable in its outcome.

    We all love the mere chance that a Princeton or a UNI will come out of nowhere to upset a Duke or an Indiana, even if it doesn't happen very often. But in chess, the chance of a much lower-ranked player upsetting a better player is quite small. The standard deviation on performance at the highest levels of chess is remarkably small. When you try to cross more than one level, forget it. For example, the chance that I could beat Garry Kasparov, or even earn a draw against him, is essentially zero.

    pawn and move odds

    My proposal to increase the competitiveness of games among players of different skill levels comes from the 19th century: odds. Odds in chess are akin to handicaps in golf. For example, I might offer a weaker player "pawn odds" by removing my king's bishop's pawn before commencing play. In that case, I would probably play the white pieces; if I gave pawn odds and played black, then I would be giving "pawn and move" odds. (Moving first is a big advantage in chess.)

    Back in the 1800s, it was common for even the best players in the world to take odds from better players. America's first great chess champion, Paul Morphy, made his reputation by beating most of America's best players, and many of Europe's best players, at "pawn and move" odds.

    standard chess clock

    Since the advent of the chess clock, another way to handicap a chess game is to give time odds. I spent many an evening as a teenager playing speed chess with Indianapolis masters who gave me time odds, playing with 1.5 minutes on their clocks against my 5 minutes. Even at those odds, I lost more quarters than I won for a long time... But I felt like I had a chance in every game we played, despite the fact that those guys were much better than I was.

    My experience offering odds has been less successful. When I've tried to offer time odds to students and former students, they balked or outright refused. To them, playing at an advantage seemed unsporting. But the result has generally been one-sided games and, before long, one or both of us loses interest. I've never tried to give piece odds to these folks, because material seems more real to them than time and consequently the odds would seem even less sporting.

    Odds chess isn't the complete answer to making top-level chess more attractive, though it might have its place in novelty tournaments. But giving odds could make coffeehouse chess, casual games wherever, and local tournaments more interesting for more players -- and thus offer a route to increased popularity.

    This whole discussion raises another, more fundamental question. Should we even care about the popularity of chess? The conventional wisdom is yes; chess is a fun way for kids to learn to concentrate, to think strategically, to learn and deploy patterns, and so on. There is some evidence that children who play chess realize benefits from these skills in school, especially in math. But there are many more challenging and realistic games these days than there used to be, and maybe those games -- or learning to play a musical instrument, or learning to program a computer -- are better uses for our young brainpower. As a lover of the game, though, I still harbor a romantic notion that chess is worth saving.

    One thing is for certain, though. Poker is a step backwards intellectually. It may be a lot of fun and require many useful skills, but it is much shallower than chess, or even more challenging card games, such as bridge.

    The article on Jennifer Shahade that I link to above ends with a paragraph that sums up both the challenge in making chess more popular and a reason why it is worth the effort to do so:

    "People sometimes ask me if chess is fun," Jennifer says. "'Fun' is not the word I'd use. Tournament chess is not relaxing. It's stressful, even if you win. The game demands total concentration. If your mind wanders for a moment, with one bad move you can throw away everything you've painstakingly built up."

    Modern society doesn't seem to value things that aren't always fun and light, at least not as much as it could. But we could do our children a favor if we helped them learn to concentrate so deeply, to confront a challenge and painstakingly work toward a solution.

    Maybe then math and science -- and computer programming -- wouldn't seem unusually or unexpectedly hard. And maybe then more students would have the mental skills needed to appreciate the beauty, power, and, yes, fun in work that challenges them. Like computer science.


    Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

    December 22, 2005 12:18 PM

    Joining the Present

    Yesterday I received e-mail from the chair of a CS education conference. For some reason, this snippet caught my eye:

    We encourage you to visit the conference website
    [URL]

    on a regular basis for the latest information about [the conference].

    My first thought was, "You need an RSS feed!" I am not very good about remembering to check a web site on a regular basis, at least in part because there are so many web sites in which I am interested. The result: I tend to miss out on the latest information. But with a subscription feed, my newsreader reminds me to check the sites that have new content.

    And this e-mail was for a conference in computer science education. A conference of techies, right? Why haven't they joined the 21st century?

    My second thought was, "Physician, heal thyself!" I do the same thing to my students. Here is a snippet from the home page for my fall course:

    Welcome to your portal into the world of 810:154 Programming Languages and Paradigms. These pages will complement what you find in class. You will want to check the "What's New" section often -- even when I don't mention changes in class -- to see what is available.

    Can I really expect students to check the site on their own? At least I put up lecture notes (with code) twice a week and homework once a week to create some 'pull'. But students are pulled in many different directions, and maybe a little push would help.

    This raises a question: How many of my students use a newsreader or an RSS-enabled web browser these days? Offering a news feed will only improve the situation if these folks take advantage of it. So I will have pushed the problem from one required habit to another, but at least it's a habit that consolidates multiple problems into one, and a habit that is growing in its reach. Beginning next semester, I'll ask my students if and how they use news feeds, and encourage them to give them a try.

    And I will offer a feed for my course web site. Perhaps that will help a few students stay on top of the game and not miss out on the latest information.
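    Offering such a feed takes surprisingly little machinery. Here is a rough sketch of generating a minimal RSS 2.0 document with nothing but the standard library; the course title and URLs below are made up for illustration, not taken from the actual site:

```python
# A minimal sketch of the kind of RSS 2.0 feed a course site could offer.
# Titles and URLs here are illustrative placeholders.
import xml.etree.ElementTree as ET

def make_feed(title, link, items):
    """Build an RSS 2.0 document as a string from (title, link) pairs."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for item_title, item_link in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = item_title
        ET.SubElement(item, "link").text = item_link
    return ET.tostring(rss, encoding="unicode")

feed = make_feed("810:154 What's New",
                 "http://example.edu/courses/154/",
                 [("Lecture notes for Nov 28",
                   "http://example.edu/courses/154/notes28.html")])
print(feed)
```

A newsreader pointed at such a feed does the remembering for the student: new items show up without anyone having to revisit the page.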

    You would think that we computer scientists would not be so behind the technological curve. Shouldn't we be living just a bit in the future more often? You know what they say about the cobbler's children...


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    December 21, 2005 5:07 PM

    Experiments in Art and Software

    Double Planet, by Pyracantha

    Electron Blue recently wrote about some of her experiments in art. As an amateur student of physics, she knows that these experiments are different from the experiments that scientists most often perform. She doesn't always start with a "hypothesis", and when she gets done it can be difficult to tell if the experiment was a "success" or not. Her experiments are opportunities to try ideas, to see whether a new technique works out. Sometimes, that's easy to see, as when the paint of a base image dries with a grainy texture that doesn't fit the image or her next stage. Other times, it comes down to her judgment about balance or harmony.

    This is quite unlike many science experiments, but I think it has more in common with science than may at first appear. And I think it is very much like what programmers and software developers do all the time.

    Many scientific advances have resulted from what amounts to "trying things out", even without a fixed goal in mind. On my office wall, I have a wonderful little news brief called "Don't leave research to chance", taken from some Michigan State publication in the early 1990s. The article is about some work by Robert Root-Bernstein, an MSU science professor who in the 1980s spent time as a MacArthur Prize fellow studying creativity in the sciences. In particular, it lists ten ways to increase one's chances of serendipitously encountering valuable new ideas. Many of these are strictly matters of technique, such as removing background "noise" that everyone else accepts or varying experimental conditions or control groups more widely than usual. But others fit the art experiment mold, such as running a reaction backward, amplifying a side reaction, or doing something else "unthinkable" just to see what happens. The world of science isn't always as neat as it appears from the outside.

    And certainly we software developers explore and play in a way that an artist would recognize -- at least we do when we have the time and freedom to do so. When I am learning a new technique or language or framework, I frequently invoke the Three Bears Pattern that I first learned from Kent Beck via one of the earliest pedagogical patterns workshops. One of the instantiations of this pattern is to use the new idea everywhere, as often and as much as you can. By ignoring boundaries, conventional wisdom, and pat textbook descriptions of when the technique is useful, the developer really learns the technique's strengths and weaknesses.

    I have a directory called software/playground/ that I visit when I just want to try something out. This folder is a living museum of some of the experiments I've tried. Some are as mundane as learning some hidden feature of Java interfaces, while others are more ambitious attempts to see just how far I can take the Model-View-Controller pattern before the resulting pain exceeds the benefits. Just opportunities to try an idea, to see how a new technique works out.

    My own experience is filled with many other examples. A grad student and I learned pair programming by giving it a whirl for a while to see how it felt. And just a couple of weeks ago, on the plane to Portland for the OOPSLA 2006 fall planning meeting, I whipped up a native Ook! interpreter in Scheme -- just because. (There is still a bug in it somewhere... )
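    For the curious: Ook! is just Brainfuck in disguise, with each of the eight commands spelled as a pair of "Ook" tokens. This is not the Scheme interpreter mentioned above, only a minimal Python sketch of the idea (input is left unimplemented):

```python
# A minimal Ook! interpreter sketch. Each pair of Ook tokens maps to
# one Brainfuck command, which we then execute on a tape of cells.

OPS = {
    ("Ook.", "Ook?"): ">",  # move pointer right
    ("Ook?", "Ook."): "<",  # move pointer left
    ("Ook.", "Ook."): "+",  # increment current cell
    ("Ook!", "Ook!"): "-",  # decrement current cell
    ("Ook!", "Ook."): ".",  # output current cell as a character
    ("Ook.", "Ook!"): ",",  # read a character (omitted in this sketch)
    ("Ook!", "Ook?"): "[",  # jump past matching ] if cell is zero
    ("Ook?", "Ook!"): "]",  # jump back to matching [ if cell is nonzero
}

def run_ook(source):
    tokens = source.split()
    code = [OPS[(tokens[i], tokens[i + 1])] for i in range(0, len(tokens), 2)]
    tape, ptr, pc, out = [0] * 30000, 0, 0, []
    while pc < len(code):
        op = code[pc]
        if op == ">":   ptr += 1
        elif op == "<": ptr -= 1
        elif op == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif op == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif op == ".": out.append(chr(tape[ptr]))
        elif op == "[" and tape[ptr] == 0:
            depth = 1
            while depth:            # scan forward to the matching ]
                pc += 1
                depth += {"[": 1, "]": -1}.get(code[pc], 0)
        elif op == "]" and tape[ptr] != 0:
            depth = 1
            while depth:            # scan back to the matching [
                pc -= 1
                depth += {"]": 1, "[": -1}.get(code[pc], 0)
        pc += 1
    return "".join(out)

# 72 increments followed by one output prints "H"
print(run_ook(" ".join(["Ook. Ook."] * 72 + ["Ook! Ook."])))
```

Writing one of these in an afternoon is exactly the kind of playground experiment the entry describes: no hypothesis, just seeing how it works out.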

    Finally, I suspect that web designers experiment in much the way that artists do when they have ideas about layout, design, and usability. The best way to evaluate the idea is often to implement it and see what real users think! This even fits Electron Blue's ultimate test of her experiments: How do people react to the work? Do they like it enough to buy it? Software developers know all about this, or should.

    One of the things I love most about programming is that I have the power to write the code -- to make my ideas come alive, to watch them in animated bits on the screen, to watch them interacting with other people's data and ideas.

    As different as artists and scientists and software developers are, we all have some things in common, and playful experimentation is one.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

    December 03, 2005 10:33 PM

    A Milestone for Our Student Population

    I teach at a "comprehensive university", one of those primarily undergraduate institutions that falls outside of the Research I classification whose schools dominate the mind share of the computer science world. After graduation, most of our students become software practitioners at companies in Iowa and the midwestern U.S.

    Last spring, I was excited when one of my M.S. students became the first student at our university to receive a job offer from Google. I may not have been more excited than he was, but then again maybe I was... His thesis presented some novel work on algorithms for automatic route planning of snow removal operations, an important topic for road departments in my part of the world, and Google found him a promising developer for Google Maps. As an advisor and faculty member, I was filled with pride -- perhaps a false pride. Look at this validation of my work!

    Imagine my disappointment when, for a variety of personal and pragmatic reasons, my student turned Google down. I sympathized with his difficult choice, but where's the cachet for me in "I was the advisor of a student who almost worked for Google"? What about my needs?

    Today my excitement was renewed when I found that a former undergraduate student of mine has accepted an offer from ThoughtWorks. In the software world, ThoughtWorks is known as one of the cooler consulting firms out there. Like Google, it seems to hire up lots of the interesting folks, especially in the OO and agile circles I frequent.

    Chris approached ThoughtWorks through its ThoughtWorks University program, aimed at attracting promising recent graduates. He is just the sort of student that programs like this seek: a guy who has demonstrated potential in classwork and research, even though he doesn't come from a Big-Name Institution. His undergraduate research project on the construction of zoomable user interfaces won the award for undergraduate scientific research at our university, an award that usually goes to a student in the hard sciences.

    Universities like ours are a relatively untapped resource for advanced technology companies. Our best students are as strong as the best students anywhere. The only problem is that many of them don't have a big enough vision of what they can accomplish in the world. Turning their vision outward, toward entrepreneurial opportunities whether in start-ups or established firms, is the key. It's one of my major goals for our department over the next three years.

    I can take some pride in knowing that my courses in object-oriented programming and agile software development probably helped this student attract some attention from the folks at ThoughtWorks, but I know that it's these students themselves who make opportunities for themselves. As an educator, my job is to help them to see just how big the world of ideas and opportunities really is.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    December 01, 2005 7:39 PM

    Cardinality -- or Absolute Value?

    My friend Kris Anderson sent e-mail in response to my entry on I = k|P|. Here's part of what he said:

    So then, I got to thinking about the variable "|P|" in your equation. Absolute value? As I thought about it, Aerosmith's "Dream On" started playing in my head... "Live and learn from fools and from sages..." 'Fools' represent negative values and 'Sages' represent positive values. And since the lesson one can learn from either is of equal value, that's why 'P' must be '|P|'. Very cool.

    As I told him, "Now that is cool." When I wrote my message, by |P| I meant the cardinality of the set P. Kris took P to mean not a set but the value of some interaction, either positive or negative. The ambiguous |P| can then indicate that we learn something of value from both positive influences and negative influences, like positive and negative examples in induction. I think that I'll stick with my original intention and leave credit for this re-interpretation to Kris.
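    The pun is easy to see in code, where the two readings of |P| become two different operations. A throwaway sketch (the names are mine, invented for illustration, not from the original post):

```python
# Two readings of |P| in I = k|P|, side by side.
P_as_set = {"colleague", "student", "old friend"}
cardinality = len(P_as_set)      # my reading: |P| is the size of the set of people

P_as_value = -2                  # Kris's reading: an interaction's value,
magnitude = abs(P_as_value)      # negative for fools, positive for sages

print(cardinality, magnitude)
```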

    And who knew that anyone would read my entry and make a connection to Aerosmith? Different people, different familiar ideas, different connections. That reminds me of something Ward Cunningham said at OOPSLA 2005.

    I learn so much from other folks reading what I write -- yet another example of the point of the article.


    Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Software Development, Teaching and Learning

    November 28, 2005 7:22 PM

    A Formula for Intelligence

    It occurred to me today that my intelligence on any given day is roughly proportional to the number of people I talk to that day:

    I = k|P|

    This is true when I do department head stuff. The more people I get information from, the more people I share ideas with and get feedback from... the more I know, and the better I can do my job.

    It is true when I teach. When I talk to other instructors, I learn from them, both by hearing their ideas and by expressing my ideas verbally to them. When I talk to students about our classes, whether they are in my class or not, I learn a little bit about what works, what doesn't, and what makes students tick.

    It is true when I program. The agile software methods institutionalized this in the form of a high degree of interaction among developers. XP raises it to the level of Standard Practice in the form of pair programming. Programmers who refuse to try pairing rarely understand what they are missing.

    The value of k depends on a lot of factors, some of which are within my daily control and some of which are in my control only over longer time horizons. On a daily basis, I can seek out the best folks possible on campus and in my circle of professional colleagues available only by e-mail. Over longer time periods, I can choose the conferences I should attend, the academic communities I can participate in, and even where I want to be employed.

    We all know the adages about hiring the smartest employees one can, about being the least accomplished person on the team, and so on. This is why: it increases the value of your k!


    Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Software Development, Teaching and Learning

    November 23, 2005 1:46 PM

    This and That, from the Home Front

    The daily grind of the department office has descended upon me the last couple of weeks, which, with the exception of two enjoyable talks (described here and here), has left me with little time to think in the way that academics are sometimes privileged to do. Now comes a short break full of time at home with family.

    Here are a few things that have crossed my path of late:

    • Belated "Happy Birthday" to GIMP, which turned 10 on Monday, November 21, 2005. There is a lot of great open-source software out there, much of which is older than GIMP, but there's something special to me about this open-source program for image manipulation. Most of the pros use Photoshop, but GIMP is an outstanding program for a non-trivial task that shows how far an open-source community can take us. Check out the original GIMP announcement over at Google Groups.

    • Then there is this recently renamed oldie but goodie on grounded proofs. My daughters are at the ages where they can appreciate the beauty of math, but their grade-school courses can do only so much. Teaching them bits and pieces of math and science at home, on top of their regular work, is fun but challenging.

      The great thing about explaining something to a non-expert is that you have to actually understand the topic.

      Content and method both matter. Don't let either the education college folks or the "cover all the material" lecturers from the disciplines tell you otherwise.

    • Very cool: an on-line version of John von Neumann's Theory of Self-Reproducing Automata.

    • Finally, something my students can appreciate as well as I:

      If schedule is more important than accuracy, then I can always be on time.

      Courtesy of Uncle Bob, though I disagree with his assumption that double-entry bookkeeping is an essential practice of modern accounting. (I do not disagree with the point he makes about test-driven development!) Then again, most accountants hold double-entry bookkeeping in nearly religious esteem, and I've had to disagree with them, too. But one of my closest advisors as a graduate student, Bill McCarthy, is an accountant with whom I can agree on this issue!


    Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Software Development, Teaching and Learning

    November 23, 2005 1:30 PM

    More on "We're Doomed"

    Ernie's 3D Pancakes picked up on the We're Doomed theme from our OOPSLA panel. There are many good blogs written by theoretical computer scientists; I especially enjoy 3D Pancakes, Computational Complexity, and The Geomblog. Recently, the CS theory community has been discussing how to communicate the value and importance of theory and algorithms to the broader CS community, the research funding agencies, and the general public, and they've struck on some of the same ideas some of my colleagues and I have been batting around concerning CS more generally. Check those blogs out for some thought-provoking discussions.

    Anyway, I found the comments on Ernie's entry that quotes me quoting Owen to be worth reading. It's good to be reminded occasionally how diverse the set of CS programs is. Michael Stiber's comment (sixth in the list) points out that CS departments have themselves to blame for many of these problems. One of my department colleagues was just at my door talking about missed opportunities to serve the university community with computing courses that matter to them. Pretty soon, we could see courses like these filling a very mainstream corner of the market, and people in other departments hungering for courses in the newly-developed markets that Owen points out.

    "How many of us really need to rewrite BLAS, LAPACK, etc., routines?"

    None. But how many students are taught to write them anyway?

    And this quote speaks to the much simpler issue of how to revise our curriculum for majors. How much tougher it is for us to re-imagine what we should be doing for non-computer scientists and then figure out how to do it.

    I just realized that by "simpler" I mean only that we computer scientists at least have some control over our own domain. In many ways, the task of reforming the major curriculum is tougher due to the tight cultural constraints of our community. I imagine that CS is no different from any other discipline in this regard. But we are a young discipline, used to the rapid change of technology -- perhaps we can find a way to become more nimble. Certainly, having the conversation is a first step.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    November 18, 2005 9:37 PM

    Teaching as Subversive Inactivity

    Two enjoyable talks in one week -- a treat!

    On Tuesday, I went to the 2005 CHFA Faculty Excellence Award lecture. (CHFA is UNI's College of Humanities and Fine Arts.) The winner of the award was Roy Behrens, whose name long-time readers of this blog may recognize from past entries on non-software patterns of design and 13 Books. Roy is a graphic arts professor at my university, and his talk reflected both his expertise in design and the style that contributed to his winning an award for faculty excellence. He didn't use PowerPoint in that stultifying bullet-point way that has afflicted science and technology talks for the last decade or more... His slides used high-resolution images of creative works and audio to create a show that amplified his words. They also demonstrated a wonderful sense of "visual wit".

    The title of the talk was teaching as a SUBVERSIVE INACTIVITY: a miscellany, in homage to Neil Postman's famous book. When he was asked to give this talk, he wondered what he should talk about -- how to teach for 34 years without burning out? He decided to share how his teaching style has evolved away from being the center of attention in the classroom toward giving students the chance to learn.

    The talk opened with pivotal selections from works that contributed to his view on teaching. My favorite from the bunch came from "The Cult of the Fact" by Liam Hudson, a British psychologist: The goal of the teacher is to

    ... transmit an intellectual tradition of gusto, and instill loyalty to it, ...

    Behrens characterized his approach to teaching in terms of Csikszentmihalyi's model of Flow: creativity and productivity happen when the students' skills are within just the right range of the challenges given to them.

    (After seeing this talk, I'm almost afraid to use my homely line art in a discussion of it. Roy's images were so much better!)

    He called this his "Goldilocks Model", the attempt to create an environment for students that maximizes their chance to get into flow.

    What followed was a collage of images and ideas. I enjoyed them all. Here are three key points about teaching and learning from the talk.

    Aesthetic and Anesthetic

    What Csikszentmihalyi calls flow is roughly comparable to what we have historically called "aesthetic". And in its etymological roots, the antonym of 'aesthetic' is anesthetic. What an interesting juxtaposition in our modern language!

    In what ways can the atmosphere of our classrooms be anesthetic?

    extreme similarity ... HUMDRUM ... monotony
    extreme difference ... HODGEPODGE ... mayhem

    We often think of boredom as a teaching anesthetic, but it's useful to trace this back to the possibility that the boredom results from a lack of challenge. Even more important is to remember that too much challenge, too much activity -- what amounts to too much distraction -- also serves as an anesthetic. People tend to tune out when they are overstimulated, as a coping mechanism. I am guessing that when I bore students the most, it's more likely to be from a mayhem of ideas than a monotony. ("Let's sneak in one more idea...")

    Behrens is a scholar of patterns, and he has found it useful to teach students patterns -- linguistic and visual -- suitable to their level of development, and then turn them loose in the world. Knowing the patterns changes our vision; we see the world in a new way, as an interplay of patterns.

    Through patterns, students see style and begin to develop their own. 'Style' is often maligned these days as superficial, but the idea of style is essential to understanding designs and thinking about creating. That said, style doesn't determine quality. One can find quality in every genre of music, of painting. There is something deeper than style. Teaching our principles of programming languages course this semester as I am, I hope that my students are coming to understand this. We can appreciate beautiful object-oriented programs, beautiful functional programs, beautiful logic programs, and beautiful procedural programs.

    Creativity as Postmodern

    Behrens didn't use "postmodern", but that's my shorthand description of his idea, in reference to ideas like the scrapheap challenge.

    During the talk, Behrens several times quoted Arthur Koestler's The Act of Creation. Here's one:

    The creative process is an "unlikely marriage of cabbages and kings -- of previously unrelated frames of reference or universes of discourse -- whose union will solve the previously insoluble problem." -- Koestler

    Koestler's "cabbages and kings" is an allusion to "The Walrus and the Carpenter", a nonsense poem in Lewis Carroll's Through the Looking-Glass. (Remember what Alan Perlis said about "Alice": "The best book on programming for the layman ...; but that's because it's the best book on anything for the layman.") Koestler uses the phrase because Carroll's nonsense poem is just the sort of collage of mismatched ideas that can, in his view, give rise to creativity.

    Humans don't create anything new. They assemble ideas from different contexts to make something different. Creativity is a bisociation, a "sort-crossing", as opposed to analytic intelligence, which is an association, a "sort-matching".

    We have to give students the raw material they need to mix and match, to explore new combinations. That is why computer science students should learn lots of different programming languages -- the more different, the better! They should study lots of different ideas, even in courses that are not their primary interest: databases, operating systems, compilers, theory, AI, ... That's how we create the rich sea of possibilities from which new ideas are born.

    Problems, Not Solutions

    If we train them to respond to problems, what happens when the problem giver goes away? Students need to learn to find and create problems!

    In his early years, Behrens feared giving students examples of what he wanted, at the risk of "limiting" their creativity to what they had seen. But examples are critical, because they, too, give students the raw material they need to create.

    His approach now is to give students interesting and open problems, themes on which to work. Critique their products and ideas, frequently and openly. But don't sit by their sides while they do things. Let them explore. Sometimes we in CS tend to hold students' hands too much, and the result is often to turn what is fun and creative into tedious drudgery.

    I'm beginning to think that one of the insidious ingredients in students' flagging interest in CS and programming is that we have taken the intellectual challenge out of learning to program and replaced it with lots of explanation, lots of text talking about technical details. Maybe our reasons for doing so seemed on the mark at the time -- I mean, C++ and Java are pretty complex -- but the unintended side effects have been disastrous.

    ----

    I greatly enjoyed this talk. One other good thing came out of the evening: after 13 years working on the same campus, I finally met Roy, and we had a nice chat about some ideas at the intersection of our interests. This won't be the last time we cross paths this year; I hope to present a paper at his conference, Camouflage: Art, Science and Popular Culture, on the topic of steganography.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

    November 15, 2005 8:51 PM

    Popularizing Science through Writing and Teaching

    I have an interest in writing, both in general as a means for communication and in particular as it relates to the process of programming. So I headed over to the Earth Science department yesterday for a talk on popular science writing called "Words, Maps, Rocks: One Geologist's Path". The speaker was Marcia Bjornerud of Lawrence University, who recently published the popular geology book Reading the Rocks: The Autobiography of the Earth. The Earth Science faculty is using Reading the Rocks as a reader in one of their courses, and they asked Dr. Bjornerud to speak on how she came to be a geologist and a popularizer of science.

    Bjornerud took a roundabout way into science. As a child, she had no desire to be a scientist. Her first loves were words and maps. She loved the history of words, tracing the etymology of cool words back to their origin in European languages, then Latin or Greek, and ultimately back to the source of their roots. The history of a word was like a map through time, and the word itself was this rich structure of now and then. She also loved real maps and delighted in the political, geographical, and temporal markings that populated them. Bjornerud told an engaging story about a day in grade school when snow created a vacation day. She remembers studying the time zones on the map and learning that at least two places had no official time zone: Antarctica and Svalbard, Norway.

    These reminiscences probably strike a chord in many scientists. I know that I have spent many hours poring over maps, just looking at cities and open spaces and geopolitical divisions, populations and latitudes and relative sizes. I remember passing time in an undergraduate marketing class by studying a large wall map of the US and first realizing just how much bigger Iowa (a state I had never visited but would one day call home) was than my home state of Indiana (the smallest state west of the Appalachian Mountains!). I especially love looking at maps of the same place over time, say, a map of the US in 1500, 1650, 1750, 1800, and so on. Cities grow and die; population moves inexorably into the available space, occasionally slowing at natural impediments but eventually triumphing. And words -- well, words were why I was at this talk in the first place.

    Bjornerud loved math in high school and took physics at the suggestion of friends who pointed out that the calculus had been invented in large part in order to create modern physics. She loved the math but hated the physics course; it was taught by someone with no training in the area who acknowledged his own inability to teach the course well.

    It wasn't until she took an introductory college geology course that science clicked for her. At first she was drawn to the words: esker, alluvium, pahoehoe, ... But soon she felt drawn to what the words name. Those concepts were interesting in their own right, and told their own story of the earth. She was hooked.

    We scientists can often relate to this story. It may apply to us; some of us were drawn to scientific ideas young. But we certainly see it in our friends and family members and people we meet. They are interested in nature, in how the world works, but they "don't like science". Why? Where do our schools go wrong? Where do we as scientists go wrong? The schools are a big issue, but I will claim that we as scientists contribute to the problem by not doing a good job of communicating to the public why we are in science. We don't share the thrill of doing science.

    A few years ago, Bjornerud decided to devote some of her professional energy to systematic public outreach, from teaching Elderhostel classes to working with grade schoolers, from writing short essays for consumption by the lay public to her book, which tells the story of the earth through its geological record.

    To write for the public, scientists usually have to choose a plot device to make technical ideas accessible to non-scientists. (We agile software developers might think of this as the much-maligned metaphor from XP.)

    Bjornerud used two themes to organize her book. The central theme is "rocks as text", reading rocks like manuscripts to reveal the hidden history of the earth. More specifically, she treats a rock as a palimpsest, a parchment on which a text was written and then scraped off, to be written on again. What a wonderful literary metaphor! It can captivate readers in a day when the intrigue of forensic science permeates popular culture.

    Her second theme, polarities, aims more at the micro-structure of her presentation. She had an ulterior motive: to challenge the modern tendency to see dichotomy everywhere. The world is really a tangled mix of competing concepts in tension. Among the polarities Bjornerud explores are innovation versus conservation (sound familiar?) and strength versus weakness.

    Related to this motive is a desire -- a need -- to instill in the media and the public at large an appetite for subtlety. People need to know that they can and sometimes must hold two competing ideas in their minds simultaneously. Science is a halting journey toward always-tentative conclusions.

    These themes transfer well to the world of software. The tension between competing forces is a central theme driving the literature of software patterns. Focusing on a dichotomy usually leads to a sub-optimal program; a pattern that resolves the dichotomy can improve it. And the notion of "program as text" is a recurring idea. I've written occasionally about the value in having students read programs as they learn to write them, and I'm certainly not the first person to suggest this. For example, Owen Astrachan once wrote quite a bit on apprenticeship learning through reading master code (see, for example, this SIGCSE paper). Recently, Grady Booch blogged On Writing, in which he suggested "a technical course in selected readings of software source code".

    Bjornerud talked a bit about the process of writing, revising, finding a publisher, and marketing a book. Only one idea stood out for me here... Her publisher proposed a book cover that used a photo of the Grand Canyon. But Bjornerud didn't want Grand Canyon on her cover; the Grand Canyon is a visual cliche, particularly in the world of rocks. And a visual cliche detracts from the wonder of doing geology; readers tune out when they see yet another picture of the Canyon. We are all taught to avoid linguistic cliches like the plague, but how many of us think about cliches in our other media? This seemed like an important insight.

    Which programs are the cliches of software education? "Hello, World", certainly, but it is so cliche that it has crossed over into the realm of essential kitsch. Even folks pitching über-modern Ruby show us puts "Hello, World." Bank account. Sigh, but it's so convenient; I used it today in a lecture on closures in Scheme. In the intro Java world, Ball World is the new cliche. These trite examples provide a comfortable way to share a new idea, but they also risk losing readers whose minds switch off when they see yet another boring example they've seen before.
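    Cliche or not, the bank account survives because it demonstrates closures over private state so directly. The lecture mentioned above used Scheme; here is a minimal sketch of the same idea in Python, with hypothetical names, just to show what the example teaches:

    ```python
    def make_account(balance):
        """Return a closure that manages a private balance,
        in the spirit of the classic bank-account example."""
        def account(op, amount=0):
            nonlocal balance  # capture and mutate the enclosing balance
            if op == "deposit":
                balance += amount
            elif op == "withdraw":
                if amount > balance:
                    raise ValueError("insufficient funds")
                balance -= amount
            return balance
        return account

    acct = make_account(100)
    print(acct("withdraw", 30))  # each account's balance is private state
    print(acct("deposit", 10))
    ```

    Each call to make_account creates an independent balance captured by the closure, which is exactly the point the Scheme version makes with lambda and set!.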

    In the question-and-answer session that followed the talk, Bjornerud offered some partial explanations for where we go wrong teaching science in school. Many of us start with the premise that science is inherently interesting, so what's the problem?

    • Many science teachers don't like or even know science. They have never really done science and felt its beauty in their bones.

      This is one reason that, all other things being equal, an active scholar in a discipline will make a better teacher than someone else. It's also one of the reasons I favor schools of education that require majors in the content area to be taught (Michigan State) or that at least teach the science education program out of the content discipline's college (math and science education at UNI).

    • We tend to explain the magic away in a pedantic way. We should let students discover ideas! If we tell students "this is all there is to it", we hide the beauty we ourselves see.

    • Bjornerud stressed the need for us to help students make a personal connection between science and their lives. She even admitted that we might help our students make a spiritual connection to science.

    • Finally, she suggested that we consider the "aesthetic" of our classrooms. A science room should be a good place to be, a fun place to engage ideas. I think we can take this one step further, to the aesthetic of our instructional materials -- our code, our lecture notes, our handouts and slides.

    The thought I had as I left the lecture is that too often we don't teach science; we teach about science. At that point, science becomes a list of facts and names, not the ideas that underlie them. (We can probably say the same thing about history and literature in school, too.)

    Finally, we talked a bit about learning. Can children learn about science? Certainly! Children learn by repetition, by seeing ideas over and over again at increasing degrees of subtlety as their cognitive maturity and knowledge level grow. Alan Kay has often said the same thing about children and language. He uses this idea as a motivation for a programming language like Smalltalk, which enables the learner to work in the same language as masters and grow in understanding while unfolding more of the language as she goes. His group's work on eToys seeks to extend the analogy to even younger children.

    Most college students and professionals learn in this way, too. See the Spiral pedagogical pattern for an example of this idea. Bjornerud tentatively offered that any topic -- even string theory! -- can be learned at almost any level. There may be some limits to what we can teach young children, and even college students, based on their level of cognitive development, their ability to handle abstractions. But for most topics most of the time -- and certainly for the basic ideas of science and math -- we can introduce even children to the topic in a way they can appreciate. We just have to find the right way to pitch the idea.

    This reminds me, too, of Owen Astrachan and his work on apprenticeship mentioned above. Owen has since backed off a bit from his claim that students should read master code, but not from the idea of reading code itself. When he tried his apprenticeship through reading master code, he found that students generally didn't "get it". The problem was that they didn't yet have the tools to appreciate the code's structures, its conventions and its exceptions, its patterns. They need to read code that is closer to their own level of programming. Students need to grow into an appreciation of master code.

    Talks like this end up touching on many disparate issues. But a common thread runs through Bjornerud's message. Science is exciting, and we scientists have a responsibility to share this with the world. We must do so in how we teach our students, and in how we teach the teachers of our children. We must do so by writing for the public, engaging current issues and helping the citizenry to understand how science and technology are changing the world in which we live, and by helping others who write for the public to appreciate the subtleties of science and to share the same through their writing.

    I concur. But it's a tall order for a busy scientist and academic. We have to choose to make time to meet this responsibility, or we won't. For me, one of my primary distractions is my own curiosity -- that which makes us scientists in the first place drives us to push farther and deeper, to devote our energies to the science and not to popularizing it. Perhaps we are doomed to G. H. Hardy's conclusion in his wonderful yet sad A Mathematician's Apology: only after a great mind has outlived its ability to contribute to the state of our collective knowledge can -- should? will? -- it turn to explaining. (If you haven't read this book, do so soon! It's a quick read, small and compact, and it really is both wonderful and sad.)

    But I do not think we are so doomed. Good scientists can do both. It's a matter of priorities and choice.

    And, as in all things, writing matters. Writing well can succeed where other writing fails.


    Posted by Eugene Wallingford | Permalink | Categories: General, Patterns, Software Development, Teaching and Learning

    November 09, 2005 6:54 PM

    More Visibility from the Blog

    Back in March, I was contacted by my local paper for an article on local bloggers. That was, I think, the first time that someone outside my expected audience had contacted me about my blog.

    Last week, I was contacted by a gentleman named Alex Gofman, who is CTO for Moskowitz Jacobs Inc. and is writing a book on the marketing techniques of his company's founder, Howard Moskowitz. If you have been reading this blog for long, you may remember a nearly year-old article I wrote entitled What Does the iPod have in Common with Prego Spaghetti Sauce?, in which I discussed some ideas on design, style, and creativity. My thoughts there were launched by articles I'd read from Paul Graham and Malcolm Gladwell. The Gladwell piece had quoted Moskowitz, and I quoted Gladwell quoting Moskowitz.

    Mr. Gofman apparently had googled on Moskowitz's name and come across my blog as a result. He was intrigued by the connections I made between the technique used to revive Prego and the design ideas of Steve Jobs, Paul Graham, agile software methods, and Art and Fear. He contacted me by e-mail to see if I was willing to chat with him at greater depth on these ideas, and we had a nice 45-minute conversation this morning.

    It was an interesting experience talking about an essay I wrote a year ago. First of all, I had to go back and read the piece myself. The ideas I wrote about then have been internalized, but I couldn't remember anything particular I'd said then. Then, during the interview, Mr. Gofman asked me about an earlier blog entry I'd written on the rooster story from Art and Fear, and I had to scroll down to remember that piece!

    Our conversation explored the edges of my thoughts, where one can find seeming inconsistencies. For example, the artist in the rooster story did many iterations but showed only his final product. That differs from what Graham and XP suggest; is it an improvement or a step backward? Can a great designer like Jobs create a new and masterful design out of whole cloth, or does he need to go through a phase of generating prototypes to develop the idea?

    In the years since the Prego experience reported by Gladwell, Moskowitz has apparently gone away from using trained testers and toward many iterations with real folks. He still believes strongly in generating many ideas -- 50, not 5 -- as a means to explore the search space of possible products. Mr. Gofman referred to their technique as "adaptive experimentation". In spirit, it still sounds a lot like what XP and other agile methods encourage.

    I am reluctant to say that something can't happen. I can imagine a visionary in the mold of Jobs whose sense of style, taste, and the market enable him to see new ideas for products that help people to feel desires they didn't know they had. (And not in the superficial impulse sense that folks associate with modern marketing.) But I wouldn't want to stake my future or my company on me or most anyone I know being able to do that.

    The advantage of the agile methods, of the techniques promoted in Art and Fear, is that they give mere mortals such as me a chance to create good products. Small steps, continuous feedback from the user, and constant refactoring make it possible for me to try working software out and learn from my customers what they really want. I may not be able to conceive the iPod, but I can try 45 kinds of sauce to see which one strikes the subconscious fancy of a spaghetti eater.

    This approach to creating has at least two other benefits. First, it allows me to get better at what I do. Through practice, I hone my skills and learn my tools. Through sheer dint of repetition and coming into contact with many, many creations, I develop a sense of what is good, good enough, and bad. Second, just by volume I increase my chances of creating a masterpiece every now and then. No one may have seen all of my scratch work, but you can be sure that I will show off my occasional masterpiece. (I'm still waiting to create one...)

    We should keep in mind that even visionary designers like Jobs fail, too -- whether by creating a product ahead of its time, market-wise or technology-wise, or by simply being wrong. The key to a guy like Jobs is that he keeps coming back, having learned from his experience and trying again.

    I see this mentality as essential to my work as a programmer, as a teacher, and now as an administrator. My best bet is to try many things, trust my "customer" (whether user, student, or faculty colleague) enough to let them see my work, and try to get better as I go on.

    In part as a result of our conversation this morning, Mr. Gofman -- who is a software developer trained as a computer engineer -- decided to propose adding a chapter to his book dealing with software development as a domain for adaptive experimentation. I learned that he is an XP aficionado who understands it well enough to know that it has limits. This chapter could be an interesting short work on agile methods from a different angle. I look forward to seeing what may result.

    As Mr. Gofman and I chatted this morning, I kept thinking about how fear and creativity had come up a few times at OOPSLA this year, for example, here and here. But I didn't have a good enough reason to tell him, "You should read every article on my blog." :-) In any case, I wish him luck. If you happen to read the book, be on the lookout for a quote from yours truly.


    Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

    November 08, 2005 3:00 PM

    An Index to the OOPSLA Diaries

    I have now published the last of my entries intended to describe the goings-on at OOPSLA 2005. As you can see from both the number and the length of entries I wrote, the conference provided a lot of worthwhile events and stimulated a fair amount of thinking. Given the number of entries I wrote, and the fact that I wrote about single days over several different entries and perhaps several weeks, I thought that some readers might appreciate a better-organized index into my notes. Here it is.

    Of course, many other folks have blogged on the OOPSLA'05 experience, and my own notes are necessarily limited by my ability to be in only one place at a time and my own limited insight. I suggest that you read far and wide to get a more complete picture. First stop is the OOPSLA 2005 wiki. Follow the link to "Blogs following OOPSLA" and the conference broadsheet, the Post-Obvious Double Dispatch. In particular, be sure to check out Brian Foote's excellent color commentary, especially his insightful take on the software devolution in evidence at this year's conference.

    Now, for the index:

    Day 1

    Day 2

    Day 3

    Day 4

    Day 5

    This and That

    I hope that this helps folks navigate my various meanderings on what was a very satisfying OOPSLA.

    Finally, thanks to all of you who have sent me notes to comment on this postings. I appreciate the details you provide and the questions you ask...

    Now, get ready for OOPSLA 2006.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Patterns, Software Development, Teaching and Learning

    November 07, 2005 7:30 PM

    OOPSLA Day 2: A Panel on the Direction of CS Education

    The final item on the Educators Symposium program this year was a panel discussion on the future of computer science education. It was called Are We Doomed? Reframing the Discussion, in partial jest following last SIGCSE's panel debate Resolved: "Objects Early" Has Failed. After that session, Owen Astrachan commented, "We're doomed." What did he mean by this? Despite our own declarations that this is The Age of Information and that computer science is a fundamental discipline for helping the world to navigate and use the massive amount of information now being collected, we still teach CS courses in essentially the same way we always have. We still use toy examples in toy domains that don't really matter to anyone, least of all to students. We still teach our introductory courses as 100% old-style programming, with barely a nod to users, let alone to the fact that sophisticated consumers of computing grow increasingly independent of us and our inward focus.

    Owen Astrachan

    This summer, Owen said it this way:

    We have the human genome project, we have Google, we have social networks, we have contributions to many disciplines, and our own discipline, but we argue amongst ourselves about whether emacs is better than vi, Eclipse better than IDEA, C++ better than Java or Scheme, or what objects-first really means.

    I'm sorry, but if we don't change what we talk about amongst ourselves, we are doomed to a niche market while the biologists, the economists, the political scientists, etc., start teaching their own computational/modeling/programming/informatics courses. Nearly everyone turns to math for calculus, nearly everyone turns to math/stats for statistics. These are nearly universally acknowledged as basic and foundational for *lots* of disciplines. In the age of information nearly no discipline at a large scale requires computer science, certainly not programming as we teach it.

    Owen isn't as pessimistic as the provocative "We're doomed" sounds; he simply wants to cause us to think about this sea change and begin to make a change ourselves.

    I decided that this would make a great closing for my second Educators Symposium. Last year, my first symposium opened with Alan Kay challenging us all to set a higher bar for ourselves -- in computer science, and in computer science education. This year, my second symposium would close with a challenge to reinvent what we do as a discipline.

    As in so many things, the panel did not turn out quite the way I had planned. First of all, Owen wasn't able to be at OOPSLA after all, so we were without our intellectual driving force. Then, when the panel went live, discussion on the panel went in a different direction than I had planned. But it had its good points nonetheless. The panel consisted of Robert Biddle, Alistair Cockburn, Brian Marick, and Alan O'Callaghan. I owe special thanks to Alistair and Alan, who joined us on relatively short notice.

    As moderator, I had hoped to pen a wonderfully humorous introduction for each of the panelists, to loosen things up before we dropped the gloves and got serious about changing the face of computer science. Perhaps I should have commissioned a master to ghostwrite, for in my own merely mortal hands my dream went unfulfilled. I did have a couple of good lines to use. I planned to introduce Robert as the post-modern conscience of the Educators Symposium, maybe with a visual bow to one of his previous Onward! presentations. For Brian, my tag line was to be "the panelist most likely to quote Heidegger -- and make you love him anyway". But I came up short for Alistair and Alan. Alistair's paper on software development as cooperative game playing was one possible source of inspiration. For Alan, all I could think was, "This guy has a higher depth/words ratio than most everyone I know". In the end, I played it straight and we got down to business rather quickly.

    I won't try to report the whole panel discussion, as I got sucked into it and didn't take elaborate notes. In general, rather than focusing on how CS is being reinvented and how CS education ought to be reinvented, it took a turn toward metaphors for CS education. I will describe what was for me the highlight of the session and then add a couple of key points I remember.

    Robert Biddle

    The highlight for me was Robert's presentation, titled "Deprogramming Programming". It drew heavily on the themes that he and James Noble have been pitching at recent Onward! performances, in particular that much of what we take as accepted wisdom in software development these days is created by us and is, all too often, just wrong.

    He started with a quote from Rem Koolhaas and Bruce Mau's "S, M, L, XL":

    Sous le pavé, la plage.
    (Under the paving stone, the beach.)

    There is something beneath what we have built as a discipline. We do not program only our computers... We've programmed ourselves, in many wrong ways, and it's time to undo the program.

    Narcissus

    The idea that there is a software crisis is a fraud. There isn't one now, and there wasn't one when the term 'software engineering' was coined and became a seemingly unavoidable part of our collective psyche. Robert pointed out that in Greek mythology Narcissus fell in love not with himself but with his reflection. He believes that the field of software engineering has done the same, fallen in love with an image of itself that it has created. We in CS education are often guilty of the same offense.

    Robert then boldly asserted that he loves his job as a teacher of computing and software development. If we look under the pavement, we will see that we have developed a lot of useful, effective techniques for teaching students to build software: study groups, role play, and especially case studies and studios. I have written before about my own strong belief in the value of case studies and software studios, so at this point I nearly applauded.

    Finally:

    The ultimate goal of computer science is the program.

    This quote is in some ways antithetical to the idea Owen and I were basing the panel on (which is one reason I wanted Robert to be on the panel!), but it also captures what many folks believe about computing. I am one of them.

    That certainly doesn't do justice to the stark imagery and text that constituted Robert's slides, nor to the distinctive verbal oratory that Robert delivers. But it does capture some of the ideas that stuck with me.

    The rest of the panel presentations were good, and the discussion afterward ranged far and wide, with a recurring theme of how we might adopt a different model for teaching software development. Here are a few points that stood out:

    • Brian offered two ideas of interest: demonstration, à la the famed on-line demo of using Ruby on Rails to build a blog engine in fifteen minutes, and education patterned on what his wife received and dishes out as a doctor of veterinary medicine.

    • Alan said that we in CS tend to teach the anatomy of a language, not how to use a language. He and I have discussed this idea before, and we both believe that patterns -- elementary or otherwise -- are a key to changing this tendency.

    • Dave West chimed in from the audience with a new statement of computer science's effect on the world reminiscent of his morning presentation: "We are redefining the world in which all of us work and live."

    • Two ideas that should be a bigger part of how we teach software development are a focus on useful things and study of existing work. Various people have been using these ideas in various forms for a while now, and we have uncovered some of the problems hidden behind their promise. For example, students aren't often ready to read programs that are *too* good very quickly; they simply don't appreciate their goodness until they have developed a better sense of taste. But if we framed more of our teaching efforts around these ideas and worked to compensate for their shortcomings, we would probably be better off than doing The Same Old Thing.

    All in all, the panel did not go where I had intended for it to go. Of course, Big Design Up Front can be that way. Sometimes you have to take into account what your stakeholders want. In my case, the stakeholders were the panelists and the audience, with the audience playing the role of pseudo-customer. Judging from the symposium evaluations, many folks enjoyed the panel, so maybe it worked out all right after all.

    Of course, what I had hoped for the panel was to challenge folks in the audience to feel uneasy about the direction of the discipline, to dare to think Big Thoughts about our discipline. I don't think we accomplished that. There will be more opportunities in the future.


    Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

    November 04, 2005 8:34 AM

    OOPSLA Day 2: Ward Cunningham on Seeking and Exploiting Simplicity

    Ward Cunningham

    The keynote address for this year's Educators' Symposium was given by Ward Cunningham, one of the software folks I admire most. Of course, he's admired by many folks, including not surprisingly by the folks who organized the Wiki Symposium launched by and collocated with OOPSLA this year. As a result, Ward's keynote to our group had to be moved from its traditional first slot of the day to the slot immediately after lunch. This gave the keynote a different feel, because we all had a morning's worth of activities in which to hear Ward's words. (You can read a summary of Ward's Wiki Symposium keynote, titled "The Crucible of Creativity".)

    I introduced Ward by telling the audience how I first encountered his work. At AAAI 1995 in Montreal, I was discussing some of my ideas on teaching elementary patterns with a colleague from Wales. He said, "I have a book for you to review..." You see, he was one of the editors of the journal Expert Systems, and he had received a book for review that he didn't know what to do with. It was about software patterns, and he figured that I'd be an okay reviewer. He was probably encouraged to think this by the fact that none of his other AI friends seemed to be interested.

    The book was Pattern Languages of Program Design, and it changed my life. I had never been exposed to the nascent software patterns community, and this book introduced me to a lot of ideas and, ultimately, people who have played an important role in my academic work since.

    One of the papers in PLoPD-1 affected me immediately. It was about the CHECKS pattern language, and it was written by Ward Cunningham. These patterns were so simple, so obvious, yet as I read them I learned something about maintaining the integrity of the data in my programs. As I looked for more of Ward's work, I soon learned that what attracted me to CHECKS was a hallmark of Ward's: the ability to recognize simple ideas that offered unexpected depth and to explore that depth, patiently and humbly. So many of Ward's contributions to standard practice have been the result of a similar process: CRC cards, patterns, wiki, extreme programming, test-first design, and FIT among them.

    I asked Ward to keynote the Educators Symposium not only because I admired his work but also because I hoped that he could teach us educators -- especially me -- a little bit of his secret gift. Maybe I could nurture the gift in myself, and maybe even pass on something to my students.

    Ward opened his talk with a reminiscence, too. As an electrical engineering major in college, he learned about Maxwell's equations from a Professor Simpson. The professor taught the equations with passion, and he expected his students to appreciate their beauty. Much of their beauty lay in their simplicity, in how that brought so many important facets together into such a small number of straightforward equations. Professor Simpson also loved the work of Richard Feynman, who himself appreciated simplicity and wrote with humanity about what he learned.

    One of Ward's first slides was The Big Slide, the take-home point of his talk. He characterized his way of working as:

    Ward's big slide: familiar + simplicity -> something new

    Immerse yourself in a pool of ideas. Come to know them. Make them your own. Then look for some humble idea lying around, something that we all understand or might. Play with this idea to see whether it gives rise to something unexpected, some new ability that changes how we see or work with the pool of familiar ideas you are swimming in.

    I have experienced the dangers inherent in looking for a breakthrough idea in the usual ways. When we look for breakthroughs, too often we look only for Big Ideas. But they are hard to find. Maybe we can't recognize them from our current vantage point; maybe they are out of scale with the ideas we have right now.

    Ward pointed out another danger: Too often, we look for complete ideas when a simple but incomplete idea will be useful enough. (Sounds like You Aren't Gonna Need It!)

    Sometimes, we reach a useful destination by ignoring things we aren't supposed to ignore. Don't worry about all the things you are supposed to do, like taking your idea through to its logical conclusion right away, or polishing it up so that everyone can see how smart you are. Keep the ideas simple, and develop just what you need to move forward.

    Ward pointed out that one thing we educators do is the antithesis of his approach: the project due date. It forces students to "get done", to polish up the "final version", and to miss opportunities to explore. This is one of the good things behind longer-term projects and undergraduate research -- they allow students more time to explore before packaging everything up in a neat little box.

    How can we honor the simple in what we do? How can we develop patience? How can we develop the ability to recognize the simple idea that offers more?

    Ward mentioned Kary Mullis's Dancing Naked in the Mind Field as a nice description of the mindset that he tries to cultivate. (You may recall my discussion of Mullis's book last year.) Mullis was passionate, and he had an immediate drive to solve a problem that no one thought mattered much. When he showed his idea to his colleagues, they all said, "Yeah, that'd work, but so what?". So Mullis gave in to the prevailing view and put his idea on the shelf for a few months. But he couldn't shake the thought that his simple little not-much of an idea could lead to big things, and eventually he returned to the idea and tinkered a little more... and won a Nobel Prize.

    Extreme Programming grew out of the humble practices of programmers who were trying to learn how to work in the brave new image of Smalltalk. Ward is happy that XP creates a work environment that is safe for the kind of exploration he advocates. You explore. When you learn something, you refactor. XP says to do the simplest thing that could possibly work, because you aren't gonna need it.

    Many people have misunderstood this advice to mean do something simplistic, something stupid. But it really means that you don't have to wait until you understand everything before you do anything. You can do useful work by taking small steps. These principles encourage programmers to seriously consider just what is the simplest thing that could possibly work. If you don't understand everything about your domain and your task, at least you can do this simplest thing now, to move forward. When you learn more later, you won't have over-invested in ideas you have to undo. But you will have been making progress in the meantime.

    (I don't think that Ward actually said all of these words. They are my reconstruction of what he taught me during his talk. If I have misrepresented his ideas in any way, the fault is mine.)

    Ward recalled first getting Smalltalk, which he viewed as a "sketchpad for the thinking person to write spike solutions". Have an idea? Try to program it! Everything you need or could want to change is there before you, written in the same language you are using. He and Kent realized that they now had a powerful machine and that they should "program in a powerful way". Whenever in the midst of programming they slowed down, he would ask Kent, "What is the simplest thing that could possibly work?", and they would do that. It got them over the hump, let them regain their power and keep on learning.

    This practice specializes his Big Slide from above to the task of programming:

    Ward's big slide: familiar + small step -> learn something

    Remember: You can't do it all at once. Ride small steps forward.

    This approach to programming did not always follow the most efficient path to a solution, but it always made progress -- and that's more than they could say by trying to stay on the perfect path. The surprise was that the simple thing usually turned out to be all they needed.

    Ward then outlined a few more tips for nurturing simple ideas and practices:

    Practice that which is hard.

    ... rather than avoiding it. Do it every day, not just once. An example from software development is schema evolution. It's too hard to design perfect schema up front, so design them all the time.

    Hang around after the fact.

    After other people have explored an area heavily and the conventional wisdom is that all of the interesting stuff has been done, hang around. Tread in well-worn tracks, looking for opportunities to make easier what is hard. He felt that his work on objects was a good example.

    Connect with lower level abstractions.

    That's the beauty of Smalltalk -- so many levels of abstraction, all written in the same language, all connected in a common way.

    Seek a compact expression.

    In the design of programming languages, we often resist math's greatest strength -- the ability to create compact expressions -- in favor of uniformity. Smalltalk is two to four times more compact than Java, and Ward likes that -- yet he knows that it could be more compact. What would Smalltalk with fewer tokens feel like?

    Reverse prevailing wisdom.

    Think the opposite of what everyone says is the right way, or the only way. You may not completely reverse how you do what you do, but you will be able to think different thoughts. (Deconstruction reaches the technical world!)

    From the audience, Joe Bergin pointed out his personal favorite variant of this wisdom (which, I think, ironically turns Ward's advice back on itself): Take a good idea and do it to the hilt. This is, of course, how Kent Beck initially described XP -- "Turn the knobs up to 10."

    Simplicity is not universal but personal, because it builds upon a person's own pool of familiar ideas. For example, if you don't know mathematics, then you won't be able to avail yourself of the simplicities it offers. (Or want, or know to want to.)

    At this point, Ward ended his formal talk and spent almost an hour demonstrating some of these ideas in the testbed of his current hobby project, the building of simple computers to drive an equally simple little CRT. I can't do justice to all he showed us during this phase of his keynote, but it was remarkable... He created his own assembly language and then built a little simulator for it in Java. He created macros. He played with simple patterns on the screen, and simple patterns in the wiring of his little computer board. He hasn't done anything to completion -- his language and simulator enable him to do only what he has done so far. But that has been good enough to learn some neat ideas about the machines themselves and about the programs he's written.

    While I'm not sure anything concrete came out of this portion of his talk, I could see Ward's joy for exploration and his fondness for simple questions and just-good-enough solutions while playing.

    We probably should all have paid close attention to what Ward was doing because, if the recent past has taught us anything, it is that in five years we will all be doing what Ward is playing with today.


    Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

    October 31, 2005 7:19 PM

    "Mechanistic"

    In my recent post on Gerry Sussman's talk at OOPSLA, I quoted Sussman quoting concert pianist James Boyk, and then commented:

    A work of art is a machine with an aesthetic purpose.

    (I am uncomfortable with the impression these quotes give, that artistic expression is mechanistic, though I believe that artistic work depends deeply on craft skills and unromantic practice.)

    Thanks to the wonders of the web, James came across my post and responded to my parenthetical:

    You may be amused to learn that fear of such comments is the reason I never said this to anyone except my wife, until I said it to Gerry! Nevertheless, my remark is true. It's just that word "machine" that rings dissonant bells for many people.

    I was amused... I mean, I am a computer scientist and an old AI researcher. The idea of a program, a machine, being beautiful or even creating beauty has been one of the key ideas running through my entire professional life. Yet even for me the word "machine" conjured up a sense that devalued art. This was only my initial reaction to Sussman's sentiment, though. I also felt an almost immediate need to temper my discomfort with a disclaimer about the less romantic side of creation, in craft and repetition. I must be conflicted.

    James then explained the intention underlying his use of the mechanistic reference in way that struck close to home for me:

    I find the "machine" idea useful because it leads the musician to look for, and expect to find, understandable structures and processes in works of music. This is productive in itself, and at the same time, it highlights the existence and importance of those elements of the music that are beyond this kind of understanding.

    This is an excellent point, and it sheds light on other domains of creation, including software development. Knowing and applying programming patterns helps programmers both to seek and recognize understandable structures in large programs and to recognize the presence and importance of the code that lies outside of the patterns. This is true even -- especially!? -- for novice programmers, who are just beginning to understand programs and their structure, and the process of reading and writing them. This is much of the motivation for work on elementary patterns in instruction, as we try to help students learn to comprehend masses of code that at first glance may seem a jumble but which in fact bear a lot of structure within them. Recognizing code that is and isn't part of recurring structure, and understanding the role both play, is an essential skill for the novice programmer to learn.

    Folks like Gerry Sussman and Dick Gabriel do us all a service by helping us to overcome our discomfort when thinking of machines and beauty. We can learn something about science and about art.

    Thanks to James for following up on my post with his underlying insight!


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

    October 24, 2005 7:36 PM

    OOPSLA Day 3: Sussman on Expressing Poorly-Understood Ideas in Programs

    Gerald Sussman is renowned as one of the great teachers of computing. He co-authored the seminal text Structure and Interpretation of Computer Programs, which many folks -- me included -- think is the best book ever written about computer science. Along with Guy Steele, he wrote an amazing series of papers, collectively called the "Lambda papers", that taught me as much as any other source about programming and machines. It also documented the process that created Scheme, one of my favorite languages.

    Richard Gabriel introduced Sussman before his talk with an unambiguous statement of respect for the presenter, saying that when Sussman speaks, "... record it at 78 and play it back at 33." In inimitable Gabriel fashion, he summarized Sussman's career as, "He makes things. He thinks things up. He teaches things."

    Sussman opened by asserting that programming is, at its foundation, a linguistic phenomenon. It is a way in which we express ourselves. As a result, computer programs can be both prose and poetry.

    In programs, we can express different kinds of "information":

    • knowledge of the world as we know it
    • models of possible worlds
    • structures of beauty
    • emotional content

    The practical value that we express in programs sometimes leads to the construction of intellectual ideas, which ultimately makes us all smarter.

    Sussman didn't say anything particular about why we should seek to express beauty and emotional content in programs, but I can offer a couple of suggestions. We are more likely to work harder and deeper when we work on ideas that compel us emotionally. This is an essential piece of advice for graduate students in search of thesis topics, and even undergrads beginning research. More importantly, I think that great truths possess a deep beauty. When we work on ideas that we think are beautiful, we are working on ideas that may ultimately pay off with deeper intellectual content.

    Sussman then showed a series of small programs, working his way up the continuum from the prosaic to the beautiful. His first example was a program he called "useful only", written in the "ugliest language I have ever used, C". He claimed that C is ugly because it is not expressive enough (there are ideas we want to express that we cannot express easily or cleanly) and because "any error that can be made can be made in C".

    His next example was in a "slightly nicer language", Fortran. Why is Fortran less prosaic than C? It doesn't pretend to be anything more than it is. (Sussman's repeated characterization of C as inelegant and inexpressive got a rise out of at least one audience member, who pointed out afterwards that many intelligent folks like C and find it both expressive and elegant. Sussman agreed that many intelligent folks do, and acknowledged that there is room for taste in such matters. But I suspect that he believes that folks who believe such things are misguided or in need of enlightenment. :-)

    Finally Sussman showed a program in a language we all knew was coming, Scheme. This Scheme program was beautiful because it allows us to express the truth of the domain, that states and differential states are computed by functions that can be abstracted away, that states are added and subtracted just like numbers. So, the operators + and - must be generic across whatever value set we wish to compute over at the time.

    In Scheme, there is nothing special about + or -. We can define them to mean what they mean in the domain where we work. Some people don't like this, because they fear that in redefining fundamental operations we will make errors. And they are right! But addition can be a fundamental operation in many domains with many different meanings; why limit ourselves? Remember what John Gribble and Robert Hass told us: you have to take risks to create something beautiful.
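    Sussman's point about generic operators is easy to sketch even outside Scheme. The following toy (in Python, with class and function names of my own invention -- this is not code from the talk) makes + meaningful for a simple physical state, so that an Euler integration step reads almost like the underlying math:

```python
# A toy illustration of making arithmetic generic over a domain's
# values: a State of position and velocity supports +, so an Euler
# integration step is written with the same + we use for numbers.

class State:
    def __init__(self, x, v):
        self.x, self.v = x, v

    def __add__(self, other):
        # states are added componentwise, "just like numbers"
        return State(self.x + other.x, self.v + other.v)

    def scale(self, dt):
        return State(self.x * dt, self.v * dt)

def euler_step(state, derivative, dt):
    # new state = state + dt * d(state)/dt
    return state + derivative(state).scale(dt)

# free fall: dx/dt = v, dv/dt = -9.8
falling = lambda s: State(s.v, -9.8)
s = euler_step(State(0.0, 0.0), falling, 0.1)
```

    The + in euler_step denotes addition in the domain of states, not of machine numbers -- exactly the kind of redefinition Sussman was defending.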

    This expressed what seemed to be the fulcrum of Sussman's argument: Mathematics is a language, not a set of tools. It is useful to us to the extent that we can express the ideas that matter to us.

    Then Sussman showed what many folks consider to be among the most beautiful pieces of code ever written, if not the most beautiful: Lisp's eval procedure written in Lisp. This may be as close as computer science comes to Maxwell's equations.
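    For readers who have not seen that program, its flavor can be suggested in a few lines. This is only a rough sketch in Python rather than Lisp, covering a tiny sliver of the language, but it shows the heart of the idea: eval is a small recursive dispatch over the shape of an expression, written in the very language it explains.

```python
# A miniature eval: expressions are nested lists, and evaluation
# dispatches on their shape -- variables, literals, special forms,
# and function application.

def lisp_eval(expr, env):
    if isinstance(expr, str):            # variable reference
        return env[expr]
    if not isinstance(expr, list):       # self-evaluating literal
        return expr
    op, *args = expr
    if op == 'if':
        test, then, alt = args
        return lisp_eval(then if lisp_eval(test, env) else alt, env)
    if op == 'lambda':
        params, body = args
        return lambda *vals: lisp_eval(body, {**env, **dict(zip(params, vals))})
    fn = lisp_eval(op, env)              # function application
    return fn(*[lisp_eval(a, env) for a in args])

env = {'+': lambda a, b: a + b, '<': lambda a, b: a < b}
# ((lambda (x) (+ x 1)) 41)
result = lisp_eval([['lambda', ['x'], ['+', 'x', 1]], 41], env)
```

    Even this toy exhibits the self-describing quality that makes the real thing so striking: the evaluator's cases mirror the grammar of the language being evaluated.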

    This is where Sussman got to the key insight of his talk, the insight that has underlain much of his intellectual contribution to our world:

    There are some things we could not express until we invented programming.

    Here Sussman distinguished two kinds of knowledge about the world, declarative knowledge and imperative knowledge. Imperative knowledge is difficult to express clearly in an ambiguous language, which all natural languages are. A programming language lets us express such knowledge in a fundamentally new way. In particular, computer programs improve our ability to teach students about procedural knowledge. Most every computer science student has had the experience of getting some tough idea only after successfully programming it.
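    The canonical illustration of this distinction, which SICP itself uses, is the square root. "The square root of x is the y such that y*y = x and y >= 0" is declarative knowledge; it says nothing about how to find y. The short program below (a sketch in Python rather than Scheme) expresses the imperative knowledge: Newton's method of repeatedly improving a guess.

```python
# Declarative: sqrt(x) is the y with y*y == x.
# Imperative: start with a guess and improve it until it is good
# enough, by averaging the guess with x/guess (Newton's method).

def sqrt(x, tolerance=1e-10):
    guess = 1.0
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2
    return guess
```

    A student who writes this program has been forced to answer questions the declarative definition never raises: where do we start, how do we improve, and when do we stop.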

    Sussman went further to state baldly, "Research that doesn't connect with students is a waste." To the extent that we seek new knowledge to improve the world around us, we must teach it to others, so I suppose that Sussman is correct.

    Then Sussman clarified his key insight, distinguishing computer programs from traditional mathematics. "Programming forces one to be precise and formal, without being excessively rigorous." I was glad that he then said more specifically what he means here by 'formal' and 'rigorous'. Formality refers to lack of ambiguity, while rigor refers to what a particular expression entails. When we write a program, we must be unambiguous, but we do not yet have to understand the full implication of what we have written.

    When we teach students a programming language, we are able to have a conversation with them of the sort we couldn't have before -- about any topic in which procedural knowledge plays a central role. Instead of trying to teach students to abstract general principles from the behavior of the teacher, a form of induction, we can now give them a discursive text that expresses the knowledge directly.

    In order to participate in such a conversation, we need only know a few computational ideas. One is the lambda calculus. All that matters is that you have a uniform system for naming things. "As anyone who has studied spirituality knows, if you give a name to a spirit, you have power over it." So perhaps the most powerful tool we can offer in computing is the ability to construct languages quickly. (Use that to evaluate your favorite programming language...)

    Sussman liked the Hass lecture, too. "Mr. Hass thinks very clearly. One thing I've learned is that all people, if they are good at what they do, whatever their area -- they all think alike." I suspect that this accounts for why many of the OOPSLA crowd enjoyed the Hass lecture, even if they do not think of themselves as literary or poetic; Hass was speaking truths about creativity and beauty that computer scientists know and live.

    Sussman quoted two artists whose comments echoed his own sentiment. First, Edgar Allan Poe from his 1846 The Philosophy of Composition:

    ... it will not be regarded as a breach of decorum on my part to show the modus operandi by which some one of my own works was put together. I select "The Raven" as most generally known. It is my design to render it manifest that no one point in its composition is referable either to accident or intuition -- that the work proceeded step by step, to its completion with the precision and rigid consequence of a mathematical problem.

    And then concert pianist James Boyk:

    A work of art is a machine with an aesthetic purpose.

    (I am uncomfortable with the impression these quotes give, that artistic expression is mechanistic, though I believe that artistic work depends deeply on craft skills and unromantic practice.)

    Sussman considers himself an engineer, not a scientist. Science believes in a "talent theory" of knowledge, in part because the sciences grew out of the upper classes, which passed on a hereditary view of the world. On the other hand, engineering favors a "skill theory" of knowledge; knowledge and skill can be taught. Engineering derived from craftsmen, who had to teach their apprentices in order to construct major artifacts like cathedrals; if the product won't be done in your lifetime, you need to pass on the skills needed for others to complete the job!

    The talk went on for a while thereafter, with Sussman giving more examples of using programs as linguistic expressions in electricity and mechanics and mathematics, showing how a programming language enables us -- forces us -- to express a truth more formally and more precisely than our old mathematical and natural languages do.

    Just as most programmers have experienced the a-ha! moment of understanding after having written a program in an area we were struggling to learn, nearly every teacher has had an experience with a student who has unwittingly bumped into the wall at which computer programming forces us to express an idea more precisely than our muddled brain allows. Just today, one of my harder-working students wrote me in an e-mail message, "I'm pretty sure I understand what I want to do, but I can't quite translate it into a program." I empathize with his struggles, but the answer is: You probably don't understand, or you would be able to write the program. In this case, examination of the student's code revealed the lack of understanding that manifests itself in a program far more complex than the idea itself.

    This was a good talk, one which went a long way toward helping folks see just how important computer programming is as an intellectual discipline, not just as a technology. I think that one of the people who made a comment after the talk said it well. Though the title of this talk was "Why Programming is a Good Medium for Expressing Poorly Understood and Sloppily Formulated Ideas", the point of this talk is that, in expressing poorly-understood and sloppily-formulated ideas in a computer program, we come to understand them better. In expressing them, we must eliminate our sloppiness and really understand what we are doing. The heart of computing lies in the computer program, and it embodies a new form of epistemology.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    October 20, 2005 6:46 PM

    OOPSLA This and That, Part 2

    As always, OOPSLA has been a constant font of ideas. But this year's OOPSLA seems to have triggered even more than its usual share. I think that is a direct result of the leadership of Dick Gabriel and Ralph Johnson, who have been working for eighteen months to create a coherent and focused program. As much as I have already written this week -- and I know that some of my articles have been quite long; sorry for getting carried away... -- I have plenty of raw material to keep me busy for a while, including the rest of the Educators Symposium, Gerald Sussman's invited talk, my favorite neologism of the week, and the event starting as I type this: Grady Booch's conference-closing talk on his ambitious project to build a handbook of software architecture.

    For now, I'd like to share just a few ideas from the two panels I attended in the middle part of this fine last day of OOPSLA.

    The Echoes panel was aimed at exploring the echoes of the structured design movement of the late 1970s. It wasn't as entertaining or as earth-shaking as it might have been given its line-up of panelists, but I took away two key points:

    • Kent Beck said that he recently re-read Structured Design and was amazed how much of the stuff we talk about today is explained in that book. I remember reading that book for the first time back in 1985, after reading Structured Analysis and System Specification in my software engineering senior sequence. They shaped how I thought about software construction.

      I plan to re-read both books in the next year.

    • Grady Booch said that no one reads code, not like folks in other disciplines read the literature of their disciplines. And code is in many ways the real literature that we are creating. I agree with Grady and have long thought about how the CS courses we teach could encourage students to read real programs -- say, in operating systems, where students can read Linux line by line if they want. Certainly, I do this in my compilers course, but not with a non-trivial program. (Well, my programming languages students usually read a non-trivial interpreter, a Scheme interpreter written in Scheme modeled on McCarthy's original Lisp interpreter. That program is small but not trivial. It is the Maxwell's equations of computing.)

      I am going to think about how to work this idea more concretely into the courses I teach in the next year.

    Like Echoes, the Yoshimi Battles the Pink Robots panel -- on the culture war between programmers and users -- didn't knock my socks off, but Brian Foote was in classic form. I don't think that he was cast in the role of Yoshimi, but he defended the role of the programmer qua programmer.

    • His position statement quoted Howard Roark, the architect in Ayn Rand's The Fountainhead: "I don't build in order to have clients. I have clients in order to build."

      I immediately thought of a couple of plays on the theme of this quote:

      I don't teach to have students. I have students to teach.
      I don't blog to have readers. I have readers to blog.

    • Brian played on words, too, but not Howard Roark's. He read, in his best upper-crust British voice, the lyrics of "I Write the Code", with no apologies at all to Barry Manilow.
      I am hacker, and I write the code.

    • And this one was Just Plain Brian:
      You know the thing that is most annoying about users is that they have no appreciation for the glory, the grandeur, or the majesty of living inside the code. It is my cathedral.

    Oh: on his way into the lecture to give his big talk, Grady Booch walked by, glanced at my iBook, and said, "Hey, another Apple guy. Cool."

    It's been a good day.


    Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

    October 19, 2005 8:17 PM

    More on Safety and Freedom in the Extreme

    giving up liberty for safety

    In my entry on Robert Hass's keynote address, I discussed the juxtaposition of 'conservative' and 'creative', the tension between the desire to be safe and the desire to be free, between memory and change. Hass warned us against the danger inherent in seeking safety, in preserving memory, to an extreme: blindness to current reality. But he never addressed the danger inherent in seeking freedom and change to the exclusion of all else. I wrote:

    There is a danger in safety, as it can blind us to the value of change, can make us fear change. This was one of the moments in which Hass surrendered to a cheap political point, but I began to think about the dangers inherent in the other side of the equation, freedom. What sort of blindness does freedom lead us to?

    giving up safety for liberty

    During a conversation about the talk with Ryan Dixon, it hit me. The danger inherent in seeking freedom and change to an extreme is untethered idealism. Instead of "Ah, the good old days!", we have, "The world would be great if only...". When we don't show proper respect to memory and safety, we become blind in a different way -- to the fact that the world can't be the way it is in our dreams, that reality somehow precludes our vision.

    That doesn't sound so bad, but people sometimes forget not to include other people in their ideal view. We sometimes become so convinced by our own idealism that we feel a need to project it onto others, regardless of their own desires. This sort of blindness begins to look in practice an awful lot like the blindness of overemphasizing safety and memory.

    Of course, when discussing creative habits, we need to be careful not to censor ourselves prematurely. As we discussed at Extravagaria, most people tend toward one extreme. They need encouragement to overcome their fears of failure and inadequacy. But that doesn't mean that we can divorce ourselves from reality, from human nature, from the limits of the world. Creativity, as Hass himself told us, thrives when it bumps into boundaries.

    Being creative means balancing our desire for safety and freedom. Straying too far in either way may work in the short term, but after too long in either land we lose something essential to the creative process.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

    October 19, 2005 6:14 PM

    OOPSLA Day 4: Mary Beth Rosson on the End of Users

    As you've read here in the past, I am one of a growing number of CS folks who believe that we must expand the purview of computer science education far beyond the education of computer scientists and software developers. Indeed, our most important task may well lie in the education of the rest of the educated world -- the biologists and sociologists, the economists and physicists, the artists and chemists and political scientists whose mode of work has been fundamentally altered by the aggregation of huge stores of data and the creation of tools for exploring data and building models. The future of greatest interest belongs not to software development shops but to the folks doing real work in real domains.

    Mary Beth Rosson

    So you won't be surprised to know how excited I was to come to Mary Beth Rosson's Onward! keynote address called "The End of Users". Mary Beth has been an influential researcher across a broad spectrum of problems in OOP, HCI, software design, and end-user programming, all of which have had prominent places at OOPSLA over the years. The common theme to her work is how people relate to technology, and her methodology has always had a strong empirical flavor -- watching "users" of various stripes and learning from their practice how to better support them.

    In today's talk, Mary Beth argued that the relationship between software developers and software users is changing. In the old days, we talked about "end-user programming", those programming-like activities done by those without formal training in programming. In this paradigm, end users identify requirements on programs and then developers produce software to meet the need. This cycle occurs at a relatively large granularity, over a relatively long time line.

    But the world is changing. We now find users operating in increasingly complex contexts. In the course of doing their work, they frequently run into ad hoc problems for their software to solve. They want to integrate pieces of solution across multiple tools, customize their applications for specific scenarios, and appropriate data and techniques from other tools. In this new world, developers must produce components that can be used in an ad hoc fashion, integrated across apps. Software developers must create knowledge bases and construction kits that support an interconnected body of problems. (Maybe the developers even strive to respond on demand...)

    These are not users in the traditional sense. We might call them "power users", but that phrase is already shop-worn. Mary Beth is trying out a new label: use developers. She isn't sure whether this label is the long-term solution, but at least this name recognizes that users are playing an increasingly sophisticated role that looks more and more like programming.

    What sorts of non-trivial tasks do use developers do?

    An example scenario: a Word user redefining the 'normal' document style. This is a powerful tool with big potential costs if done wrong.

    Another scenario: an Excel user creates a large spreadsheet that embodies -- hides! -- a massive amount of computation. (Mary Beth's specific example was a grades spreadsheet. She is a CS professor after all!)

    Yet another: an SPSS user defines new variables and toggles between textual programming and graphical programming.

    And yet another: a FrontPage user does visual programming of a web page, with full access to an Access database -- and designs a database!

    Mary Beth summarized the characteristics of use developers as:

    • comfortable with a diverse array of software apps and data sources
    • work with multiple apps in parallel and so want to pick and choose among functionality at any time, hooking components up and configuring custom solutions on demand
    • work collaboratively, with group behaviors emerging
    • see the computer as a tool, not an end; it should not get in their way

    Creating use developers has potential economic benefits (more and quicker cycles getting work done) and personal benefits (more power, more versatility, higher degree of satisfaction).

    But is the idea of a use developer good?

    Mary Beth quoted an IEEE Software editor who was quite dismissive of end users. He warned that they do not systematically test their work, that they don't know to think about data security and maintainability, and -- when they do know to think about these issues -- they don't know *how* to think about them. Mary Beth thinks these concerns are representative of what folks in the software world think, and that we need to be attentive to them.

    (Personally, I think that, while we certainly should be concerned about the quality of the software produced by end users, we also must keep in mind that software engineers have a vested interest in protecting the notion that only Software Engineers, properly trained and using Methods Anointed From On High, are capable of delivering software of value. We all know of complaints from the traditional software engineering community about agile software development methods, even when the folks implementing and using agile methods are trained in computing and are, presumably, qualified to make important decisions about the environment in which we make software.)

    Mary Beth gave an example to illustrate the potential cost inherent in the lack of dependability -- a Fannie Mae spreadsheet that contained a $1.2B error.

    As the base of potential use developers grows, so do the potential problems. Consider just the spreadsheet and database markets... By 2012, the US Department of Labor estimates that there will be 55M end users. 90% of all spreadsheets contain errors. (Yes, but is that worse or better than in programs written by professional software developers?) The potential costs are not just monetary; they can be related to the quality of life we all experience. Such problems can be annoying and ubiquitous: web input forms with browser incompatibilities; overactive spam filters that lose our mail; Word styles that break the other formatting in user documents; and policy decisions based on research findings that themselves are based on faulty analysis due to errors in spreadsheets and small databases.

    Who is responsible for addressing these issues? Both! Certainly, end users must take on the responsibility of developing new habits and learning the skills they need to use their tools effectively and safely. But we in the software world need to recognize our responsibilities:

    • to build better tools -- the scaffolding users need to be effective and safe. The tools we build should offer the right amount of help to users in the moment of doing their jobs.
    • to promote a "quality assurance" culture among users. We need to develop and implement new standards for computing literacy courses.

    How do we build better tools?

    Mary Beth called them smarter tools and pointed to a couple of the challenges we must address. First, much of the computation being done in tools is invisible, that is, hidden by the user interface. Second, people do not want to be interrupted while doing their work! (We programmers don't want that; why should our users have to put up with it?)

    Two approaches that offer promise are interactive visualization of data and minimalism. By minimalism, she means not expanding the set of issues that the user has to concern herself with by, say, integrating testing and debugging into the standard usage model.

    The NSF is supporting a five-school consortium called EUSES, End Users Shaping Effective Software, which is trying these ideas out in tools and experiments. Some examples of their work:

    • CLICKS is a drag-and-drop, design-oriented web development environment.

    • Whyline is a help system integrated directly into Alice's user environment. The help system monitors the state of the user's program and maintains a dynamic menu of problems they may run into.

    • WYSIWYT is a JUnit-style interface for debugging spreadsheets, in which the system keeps an eye on what cells have and have not been verified with tests.
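    To make the WYSIWYT idea concrete, here is a toy of my own devising (entirely hypothetical -- the class names and design are invented for illustration, not taken from the actual tool): each cell carries a formula plus an optional expected value, and the sheet reports which cells have and have not been verified by a passing test.

```python
# A toy "testable spreadsheet": formulas are functions of the sheet,
# and cells may carry an expected value that serves as a unit test.

class Cell:
    def __init__(self, formula, test=None):
        self.formula = formula   # function of the sheet
        self.test = test         # expected value, or None (unverified)

class Sheet:
    def __init__(self):
        self.cells = {}

    def set(self, name, formula, test=None):
        self.cells[name] = Cell(formula, test)

    def __getitem__(self, name):
        return self.cells[name].formula(self)

    def verification_report(self):
        # map each cell name to "verified", "failed", or "untested"
        report = {}
        for name, cell in self.cells.items():
            if cell.test is None:
                report[name] = "untested"
            else:
                report[name] = "verified" if self[name] == cell.test else "failed"
        return report

sheet = Sheet()
sheet.set('a', lambda s: 2, test=2)
sheet.set('b', lambda s: 3)                           # no test attached
sheet.set('total', lambda s: s['a'] + s['b'], test=5)
```

    The payoff, as in WYSIWYT itself, is that untested computation becomes visible: a glance at the report tells the user which parts of the sheet's hidden computation have never been checked.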

    How can we promote a culture of quality assurance? What is the cost-benefit trade-off involved for the users? For society?

    Mary Beth indicated three broad themes we can build on:

    • K-12 education: making quality a part of schoolchildren's culture of computer use
    • universal access: creating tools aimed at specific populations of users
    • communities of practice: evolving reflective practices within the social networks of users

    Some specific examples:

    • Youngsters who learn by debugging in Alice. This is ongoing work by Mary Beth's group. Children play in 3D worlds that are broken, and as they play they are invited to fix the system. You may recognize this as the Fixer Upper pedagogical pattern, but in a "non-programming" programming context.

    • Debugging tools that appeal to women. Research shows that women take debugging seriously, but they tend to use strategies in their heads more than the tools available in the typical spreadsheet and word processing systems. How do we invite women with lower self-confidence to avail themselves of system tools? One experimental tool does this by letting users indicate "not sure" when evaluating correctness of a spreadsheet cell formula.

    • Pair programming community simulations. One group has built a Sim City-like world in which a senior citizen "pair programs" with a child. Leaving the users unconstrained led to degeneration, but casting the elders as object designers and the children as builders led to coherent creations.

    • Sharing and reuse in a teacher community. The Teacher Bridge project has created a collaborative software construction tool to support an existing teacher community. The tool has been used by several groups, including the one that created PandapasPond.org. This tool combines a wiki model for its dynamic "web editor" and more traditional model for its static design tool (the "folder editor"). Underneath the service, the system can track user activity in a variety of ways, which allows us to explore the social connections that develop within the user community over time.

    The talk closed with a reminder that we are just beginning the transition from thinking of "end users" to thinking of "use developers", and one of our explicit goals should be to try to maximize the upside, and minimize the downside, of the world that will result.

    For the first time in a long time, I got up to ask a question after one of the big talks. Getting up to stand in line at an aisle mic in a large lecture hall, to ask a question in front of several hundred folks, seems a rather presumptuous act. But my interest in this issue is growing rapidly, and Mary Beth has struck on several issues close to my current thinking.

    My question was this: What should university educators be thinking about with regard to this transition? Mary Beth's answer went in a way I didn't anticipate: We should be thinking about how to help users develop the metacognitive skills that software developers learn within our culture of practice. We should extend cultural literacy curricula to focus on the sort of reflective habits and skills that users need to have when building models. "Do I know what's going on? What could be going wrong? What kinds of errors should I be watching for? How can I squeeze errors out of my program?"

    After the talk, I spent a few minutes discussing curricula issues more specifically. I told her about our interest in reaching out to new populations of students, with the particular example of a testing certificate that folks in my department are beginning to work on. This certificate will target non-CS students, the idea being that many non-CS students end up working as testers in software development for their domain, yet they don't understand software or testing or much of anything about computing very deeply. This certificate is still aimed at traditional software development houses, though I think it will bear the seeds of teaching non-programmers to think about testing and software quality. If these folks ever end up making a spreadsheet or customizing Word, the skills they learn here will transfer directly.

    Ultimately, I see some CS departments expanding their computer literacy courses, and general education courses, to aim at use developers. Our courses should treat them with the same care and respect as we treat Programmers and Computer Scientists. The tasks users do are important, and these folks deserve tools of comparable quality.

    Three major talks, three home runs. OOPSLA 2005 is hot.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    October 19, 2005 10:11 AM

    OOPSLA Day 1: Writing Exercises at Extravagaria

    I am nobody:
    A red sinking autumn sun
    Took my name away.

    -- Richard Wright

    As I noted before, I nearly blew off Sunday, after a long and tiring two days before. As you might have gathered from that same entry, I am happy that I did not. The reasons should be obvious enough: cool ideas happen for me only when I am engaged with ideas, and the people and interactions at Extravagaria were a source of inspiration that has remained alive with me throughout the rest of the conference.

    In the afternoon of the workshop, we did two group exercises to explore issues in creativity -- one in the realm of writing poetry, and one in the realm of software design.

    Gang-Writing Haiku

    Haiku is a simple poetic form that most of us learn as schoolchildren. It is generally more involved than we learn in school, with specific expectations on the content of the poems, but at its simplest it is a form of three lines, consisting of 5, 7, and 5 syllables, respectively.

    If I understood correctly, a tanka is a poem constructed by following a haiku with a couplet in which each line is 7 syllables. We can go a step further yet, by connecting a sequence of tankas into a renga. John called the renga the "stretch limo" of haiku. Apparently, the Japanese have a traditional drinking game that requires each person to write a verse of a growing renga in turn, taking a drink with each verse. The poems may degenerate, but the evening is probably not a complete loss...

    Our first exercise after lunch was a variation of this drinking game, only without the drinking. We replaced the adult beverages with two features intended to encourage and test our creativity. First, we were given one minute or less to write each verse. Second, when we passed the growing poem on to the next writer, we folded it over so that the person could see only the verse we had just written.

    Rather than start from scratch, John seeded our efforts with haiku written by the accomplished American novelist Richard Wright. In the last eighteen months of his life, Wright became obsessed with haiku, writing dozens a day. Many of these works were published in a collection after his death. John gave each person a haiku from this collection. One of them, my favorite, appears at the top of this essay.

    Then we were off, gang-writing poetry. My group consisted of Brian Foote, Joe Yoder, Danny Dig, Guy Steele, and me. Each of us started with a Wright haiku, wrote a couplet in response to it, folded Wright's stanza under, and passed the extended poem on to continue the cycle. After a few minutes, we had five renga. (And yet we were sober, though the quality of the poetry may not have reflected that. :-)

    The second step of the exercise was to select our favorite, the one we thought had the highest quality. My group opted for a two-pass process. Each of us cast a vote for our two favorites, and the group then deliberated over the top two vote-getters. We then had the opportunity to revise our selection before sharing it with the larger group. (We didn't.) Then each of the groups read its best product to the whole group.

    My group selected the near-renga we called Syzygy Matters (link to follow) as our best. This was not one of my two favorites, but it was certainly in line with my choices. One poem I voted for received only my vote, but I won't concede that it wasn't one of our best. I call it Seasons Cease.

    Afterwards, we discussed the process and the role creativity played.

    • Most of us tried to build on the preceding stanza, rather than undo it.

    • This exercise resembles a common technique in improvisational theater. There, the group goes through rounds of one sentence per person, building on the preceding sentences. Sometimes, the participants cycle through these conjunctions in order: "Yes, and...", "No, and...", "Yes, but...", and "No, but...".

    • Time pressure matters.

    • Personally, I noticed that by moving so fast that I had no chance to clear my mind completely, a theme developed in my mind that carried over from renga to renga. So my stanzas were shaped both by the stanza I was handed and by the stanza I wrote in the previous round.

    • Guy was optimistic about the process but pessimistic about the products. The experience lowered his expectations for the prospects for groups writing software by global emergence from local rules.

    • We all had a reluctance to revise our selected poems. The group censored itself, perhaps out of fear of offending whoever had written the lines. (So much for Common Code Ownership.) Someone suggested that we might try some similar ideas for the revision process. Pass all the poems we generated to another group, which would choose the best of the litter. Then we pass the poem on to a third group, which is charged with revising the poem to make it better. This would eliminate the group censorship effect mentioned above, and it would also eliminate the possibility that our selection process was biased by personal triggers and fondness.

    • Someone joked that we should cut the first stanza, the one written by Wright!, because it didn't fit the style of the rest of the stanzas. Joke aside, this is often a good idea. Often, we need to let go of the triggers that initially caused us to write. That can be true in our code, as well. Sometimes a class that appears early in a program ultimately outlives its utility, its responsibilities distributed across other more vital objects. We shouldn't be afraid of cutting the class, but sometimes we hold an inordinate attachment to the idea of the class.

    • To some, this exercise felt more like a white-board design session than a coding exercise. We placed a high threshold on revisions, as we often do for group brainstorm designs.

    • Someone else compared this to design by committee, and to the strategy of separating the coding team from the QA team.

    Later, we discussed how, in technical writing and other non-fiction, our goal is to make the words we use match the truth as much as possible, but sometimes an exaggeration can convey truth even better. Is such an exaggeration "more true" than the reality, by conveying better the feel of a situation than pure facts would have? Dick used the re-entry scene from Apollo 13 as an example.

    (Aside: This led to a side discussion of how watching a movie without listening to its soundtrack is usually a very different experience. Indeed, most directors these days use the music as an essential story-telling device. What if life were like that? Dick offered that perhaps we are embarking on a new era in which the personal MP3 player does just that, adding a soundtrack to our lives for our own personal consumption.)

    A good story tells the truth better than the truth itself. This is true in mathematical proofs, where the proof tells a story quite different from the actual process by which a new finding is reached. It is true of papers on software system designs, of software patterns. This is yet another way in which software and computer science are like poetry and Mark Twain.

    A Team Experiment with Software Design

    The second exercise of the afternoon asked four "teams" -- three of size four, and the fourth being Guy Steele alone -- to design a program that could generate interesting Sudoku puzzles. Halfway through our hour, two teams cross-pollinated in a Gabriel-driven episode of crossover.

    I don't have quite as much to say about this exercise. It was fun thinking about Sudoku, a puzzle I've started playing a bit in the last few weeks. It was fun watching Sudoku naifs wrap their minds around the possibilities of the game. It was especially fun to watch a truly keen mind describe how he attacked and solved a tough problem. (I saved Guy's handwritten draft of his algorithm. I may try to implement it later. I feel like a rock star groupie...)

    The debrief of this exercise focused on whether this process felt creative in the sense that writing haiku did, or was it more like the algorithm design exercise one might solve on a grad school exam, taken from Knuth. Guy pointed out that these are not disjoint propositions.

    What feels creative is solving something we don't yet understand -- creativity lies in exploring what we do not understand, yet. For example, writing a Sudoku solver would have involved little or no creativity for most of us, because it would be so similar to backtracking programs we have written before, say, to solve the 8-queens puzzle.
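    The backtracking shape alluded to here -- place a piece, check constraints, recurse, undo -- is the same for 8-queens and for a Sudoku solver. A minimal illustrative sketch in Python (my own example, not from the workshop):

```python
# Backtracking search for the n-queens puzzle. Queens are placed one
# row at a time; `cols` records the column chosen for each earlier row.
def queens(n, cols=()):
    """Yield every placement of n queens, one per row, as a column tuple."""
    row = len(cols)
    if row == n:
        yield cols          # all rows filled: a complete solution
        return
    for col in range(n):
        # safe if no earlier queen shares this column or either diagonal
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(cols)):
            yield from queens(n, cols + (col,))   # recurse; undo is implicit

# The classic result: 92 solutions on the 8x8 board.
```

Replacing "queen in a column" with "digit in a cell" and the diagonal check with row/column/box constraints turns this same skeleton into a Sudoku solver, which is exactly why the exercise felt routine.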

    These exercises aren't representative of literary creativity in several significant ways. Most writers work solo, rather than in groups. Creators may work under pressure, but not often in 1-minute compressions. But sprints of this sort can help jolt creativity, and they can expose us to models of work, models we can adopt and adapt.

    One thing seems certain: Change begets creativity. Robert Hass spoke of the constant and the variable, and how -- while both are essential to creativity -- it is change and difficulty that are usually the proximate causes of the creative act. That's why cross-pollination of teams (e.g., pair programmers) works, and why we should switch tools and environments every so often, to jog the mind to open itself to creating something new.


    Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

    October 18, 2005 4:04 PM

    OOPSLA Day 3: Robert Hass on Creativity

    Robert Hass, former poet laureate of the US

    With Dick Gabriel and Ralph Johnson leading OOPSLA this year, none of us were surprised that the themes of the conference were creativity and discovery. This theme presented itself immediately in the conference's opening keynote speaker, former poet laureate Robert Hass. He gave a marvelous talk on creativity.

    Hass began his presentation by reading a poem (whose name I missed) from Dick's new chapbook, Drive On. Bob was one of Dick's early teachers, and he clearly reveled in the lyricism, the rhythm of the poem. Teachers often form close bonds with their students, however long or short the teaching relationship. I know the feeling from both sides of the phenomenon.

    He then described his initial panic at the thought of introducing the topic of creativity to a thousand people who develop software -- who create, but in a domain so far from his expertise. But a scholar can find ways to understand and transmit ideas of value wherever they live, and Hass is not only a poet but a first-rate scholar.

    Charles Dickens burst on the scene with the publication of The Pickwick Papers. With this novel, Dickens essentially invented the genre of the magazine-serialized novel. When asked how he created a new genre of literature, he said simply, "I thought of Pickwick."

    I was immediately reminded of something John Gribble said in his talk at Extravagaria on Sunday: Inspiration comes to those already involved in the work.

    Creativity seems to happen almost without cause. Hass consulted with friends who have created interesting results. One solved a math problem thought unsolvable by reading the literature and "seeing" the answer. Another claimed to have resolved the two toughest puzzles in his professional career by going to sleep and waking up with the answer.

    So Hass offered his first suggestion for how to be creative: Go to sleep.

    Human beings were the first animals to trade instinct for learning. The first major product of our learning was our tools. We made tools that reflected what we learned about solving immediate problems we faced in life. These tools embodied the patterns we observed in our universe.

    We then moved on to broader forms of patterns: story, song, and dance. These were, according to Hass, the original forms of information storage and retrieval, the first memory technologies. Eventually, though, we created a new tool, the printing press, that made these fundamentally less essential -- less important!? And now the folks in this room contribute to the ultimate tool, the computer, that in many ways obsoletes human memory technology. As a result, advances in human memory tech have slowed, nearly ceased.

    The bulk of Hass's presentation explored the interplay between the conservative in us (the desire to preserve in memory) and the creative in us (the desire to create anew). This juxtaposition of 'conservative' and 'creative' begets a temptation for cheap political shots, to which even Hass himself surrendered at least twice. But the juxtaposition is essential, and Hass's presentation repeatedly showed the value and human imperative for both.

    Evolutionary creativity depends on the presence of a constant part and a variable part, for example, the mix of same and different in an animal's body, in the environment. The simultaneous presence of constant and variable is the basis of man's physical life. It is also the basis of our psychic life. We all want security and freedom, in an unending cycle. Indeed, I believe that most of us want both all the time, at the same time. Conservative and creative, constant and variable -- we want and need both.

    Humans have a fundamental desire for individuation, even while still feeling a oneness with our mothers, our mentors, the sources of our lives. Inspiration, in a way, is how a person comes to be herself -- is in part a process of separation.

    "Once upon a time" is a linguistic symbol, the first step of our separation from the immediate action of reading into a created world.

    At the same time, we want to be safe and close, free and far away. Union and individuation. Remembering and changing.

    Most of us think that most everyone else is more creative than we are. This is a form of the fear John Gribble spoke about on Sunday, one of the blocks we must learn to eliminate from our minds -- or at least fool ourselves into ignoring. (I am reminded of John Nash choosing to ignore the people his mind fabricates around him in A Beautiful Mind.)

    Hass then told a story about the siren song from The Odyssey. It turns out that most of the stories in Homer's epics are based in "bear stories" much older than Homer. Anyway, Odysseus's encounter with the sirens is part of a story of innovation and return, freedom on the journey followed by a return to restore safety at home. Odysseus exhibits the creativity of an epic hero: he ties himself to the mast so that he can hear the sirens' song without having to take his ship so close to the rocks.

    According to Hass, in some versions of the siren story, the sirens couldn't sing -- the song was only a sailors' legend. But the sailors desired to hear the beautiful song, if it existed. Odysseus took a path that allowed him both safety and freedom, without giving up his desire.

    In preparing for this talk, Hass asked himself, "Why should I talk to you about creativity? Why think about it all?" He identified at least four very good reasons, the desire to answer these questions:

    • How can we cultivate creativity in ourselves?
    • How can we cultivate creativity in our children?
    • How can we identify creative people?
    • How can we create environments that foster creativity?

    So he went off to study what we know about creativity. A scholar does research.

    Creativity research in the US began when academic psychologists began trying to measure mental characteristics. Much of this work was done at the request of the military. As time went by, the number of characteristics studied grew, perhaps in correlation with the research grants awarded by the government. Creativity is, perhaps, correlated with salesmanship. :-) Eventually, we had found several important characteristics, including that there is little or no correlation between IQ and creativity. Creativity is not a province of the intellectually gifted.

    Hass cited the research of Howard Gardner and Mihaly Csikszentmihalyi (remember him?), both of whom worked to identify key features of the moment of a creative change, say, when Dickens thought to publish a novel in serial form. The key seems to be immersion in a domain, a fascination with the domain and its problems and possibilities. The creative person learns the language of the domain and sees something new. Creative people are not problem solvers but problem finders.

    I am not surprised to find language at the center of creativity! I am also not surprised to know that creative people find problems. I think we can say something even stronger, that creative people often create their own problems to solve. This is one of the characteristics that biases me away from creativity: I am a solver more than a finder. But thinking explicitly about this may enable me to seek ways to find and create problems.

    That is, as Hass pointed out earlier, one of the reasons for thinking about creativity: ways to make ourselves more creative. But we can use the same ideas to help our children learn the creative habit, and to help create institutions that foster the creative act. He mentioned OOPSLA as a social construct in the domain of software that excels at fostering creativity. It's why we all keep coming back. How can we repeat the process?

    Hass spoke more about important features of domains. For instance, it seems to matter how clear the rules of the domain are at the point that a person enters it. Darwin is a great example. He embarked on his studies at a time when the rules of his domain had just become fuzzy again. Geology had recently expanded European science's understanding of the timeline of the earth; Linnaeus had recently invented his taxonomy of organisms. So, some of the knowledge Darwin needed was in place, but other parts of the domain were wide open.

    The technology of memory is a technology of safety. What are the technologies of freedom?

    Hass read us a funny poem on story telling. The story teller was relating a myth of his people. When his listener questioned an inconsistency in his story, the story teller says, "You know, when I was a child, I used to wonder that..." Later, the listener asked the same question again, and again, and each time the story teller says, "You know, when I was a child, I used to wonder that..." When he was a child, he questioned the stories, but as he grew older -- and presumably wiser -- he came to accept the stories as they were, to retell them without question.

    We continue to tell our stories for their comfort. They make us feel safe.

    There is a danger in safety, as it can blind us to the value of change, can make us fear change. This was one of the moments in which Hass surrendered to a cheap political point, but I began to think about the dangers inherent in the other side of the equation, freedom. What sort of blindness does freedom lead us to?

    Software people and poets have something in common, in the realm of creativity: We both fall in love with patterns, with the interplay between the constant and the variable, with infinite permutation. In computing, we have the variable and the value, the function and the parameter, the framework and the plug-in. We extend and refactor, exposing the constant and the variable in our problem domains.

    Hass repeated an old joke, "Spit straight up and learn something." We laugh, a mockery of people stuck in the same old patterns. This hit me right where I live. Yesterday at the closing panel of the Educators' Symposium, Joe Bergin said something that I wrote about a while back: CS educators are an extremely conservative lot. I have something to say about that panel, soon...

    First safety, then freedom -- and with it the power to innovate.

    Of course, extreme danger, pressure, insecurity can also be the necessity that leads to the creative act. As is often the case, opposites turn out to be true. As Thomas Mann said,

    A great truth is a truth whose opposite is also a great truth.

    Hass reminds us that there is agony in creativity -- a pain at stuckness, found in engagement with the world. Pain is unlike pleasure, which is homeostatic ("a beer and a ballgame"). Agony is dynamic, a ceasing to cling to a safe position. There is always an element of anxiety, consciousness heightened at the moment of insight, gestalt in the face of an incomplete pattern.

    The audience asked a couple of questions:

    • Did he consult only men in his study of creativity? Yes, all but his wife, who is also a poet. She said, "Tell them to have their own strangest thoughts." What a great line.

    • Is creativity unlimited? Limitation is essential to creativity. If our work never hits an obstacle, then we don't know when it's over. (Sounds like test-driven development.) Creativity is always bouncing up against a limit.

    I'll close my report with how Hass closed the main part of his talk. He reached "the whole point of his talk" -- a sonnet by Michelangelo -- and he didn't have it in his notes!! So Hass told us the story in paraphrase:

    The pain is unbearable, paint dripping in my face, I climb down to look at it, and it's horrible, I hate it, I am no painter...

    It was the ceiling of the Sistine Chapel.

    ~~~~~

    UPDATE (10/20/05): Thanks again to Google, I have tracked down the sonnet that Hass wanted to read. I especially love the ending:

    Defend my labor's cause,
    good Giovanni, from all strictures:
    I live in hell and paint its pictures.

    -- Michelangelo Buonarroti

    I have felt this way about a program before. Many times.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

    October 17, 2005 9:54 PM

    OOPSLA Day 2: Morning at The Educators' Symposium

    This was my second consecutive year to chair the OOPSLA Educators' Symposium, and my goal was something more down to earth yet similar in flavor: encouraging educators to consider Big Change. Most of our discussions in CS education are about how to do the Same Old Thing better, but I think that we have run the course with incremental improvements to our traditional approaches.

    We opened the day with a demonstration called Agile Apprenticeship in Academia, wherein two professors and several students used a theatrical performance to illustrate a week in a curriculum built almost entirely on software apprenticeship. Dave West and Pam Rostal wanted to have a program for developing software developers, and they didn't think that the traditional CS curriculum could do the job. So they made a Big Change: they tossed the old curriculum and created a four-year studio program in which students, mentors, and faculty work together to create software and, in the process, students learn how to create software.

    West and Rostal defined a set of 360 competencies that students could satisfy at five different levels. Students qualify to graduate from the program by satisfying each competency at at least the third level (the ability to apply the concept in a novel situation) and some number at higher levels. Students also have to complete the standard general education curriculum of the university.

    Thinking back to yesterday's morning session at Extravagaria, we talked about the role of fear and pressure in creativity. West and Rostal put any fear behind them and acted on their dream. Whatever difficulties they face in making this idea work over the long run in a broader setting -- and I believe that the approach faces serious challenges -- at least they have taken a big step forward that could make something work. Those of us who don't take any big steps forward are doomed to remain close to where we are.

    I don't have much to say about the paper sessions of the day except that I noticed a recurring theme: New ideas are hard on instructors. I agree, but I do not think that they are hard in the NP-hard sense but rather in the "we've never done it that way before" sense. Unfamiliarity makes things seem hard at first. For example, I think that the biggest adjustment most professors need to make in order to move to the sort of studio approach advocated by West and Rostal is from highly-scripted lectures and controlled instructional episodes to extemporaneous lecturing in response to student needs in real-time. The real hardness in this is that faculty must have a deep, deep understanding of the material they teach -- which requires a level of experience that many faculty don't yet have.

    This idea of professors as practitioners, as professionals practiced in the art and science we teach, will return in later entries from this conference...

    Like yesterday's entry, I'll have more to say about today's Educators' Symposium in upcoming entries. I need some time to collect my thoughts and to write. In particular, I'd like to tell you about Ward Cunningham's keynote address and our closing panel on the future of CS education. The panel was especially energizing but troubling at the same time, and I hope to share a sense of both my optimism and my misgivings.

    But with the symposium over, I can now take the rest of the evening to relax, then sleep, have a nice longer run, and return to the first day of OOPSLA proper free to engage ideas with no outside encumbrances.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    October 16, 2005 9:52 PM

    OOPSLA Day 1: The Morning of Extravagaria

    OOPSLA 2005 logo

    OOPSLA has arrived, or perhaps I have arrived at OOPSLA. I almost blew today off, for rest and a run and work in my room. Some wouldn't have blamed me after yesterday, which began at 4:42 AM with a call from Northwest Airlines that my 7:05 AM flight had been cancelled, included my airline pilot missing the runway on his first pass at the San Diego airport, and ended standing in line for two hours to register at my hotel. But I dragged myself out of my room -- in part out of a sense of obligation to having been invited to participate, and in part out of a schoolboy sense of propriety that I really ought to go to the events at my conferences and make good use of my travels.

    My event for the day was an all-day workshop called Extravagaria III: Hunting Creativity. As its title reveals, this workshop was the third in a series of workshops initiated by Richard Gabriel a few years ago. Richard is motivated by the belief that computer science is in the doldrums, that what we are doing now is mostly routine and boring, and that we need a jolt of creativity to take the next Big Step. We need to learn how to write "very large-scale programs", but the way we train computer scientists, especially Ph.D. students and faculty, enforces a remarkable conservatism in problem selection and approach. The Extravagaria workshops aim to explore creativity in the arts and sciences, in an effort to understand better what we mean by creativity and perhaps better "do it" in computer science.

    The workshop started with introductions, as so many do, but I liked the twist that Richard tossed in: each of us was to tell what was the first program we ever wrote out of passion. This would reveal something about each of us to one another, and also perhaps recall the same passion within each storyteller.

    My first thought was of a program I wrote as a high school junior, in a BASIC programming course that was my first exposure to computers and programs. We wrote all the standard introductory programs of the day, but I was enraptured with the idea of writing a program to compute ratings for chessplayers following the Elo system. This was much more complex than the toy problems I solved in class, requiring input in the form of player ratings and a crosstable showing results of games among the players and output in the form of updated ratings for each player. It also introduced new sorts of issues, such as using text files to save state between runs and -- even more interesting to me -- the generation of an initial set of ratings through a mechanism of successive approximations, a process that may never quite converge unless we specified an epsilon larger than 0. I ultimately wrote a program of several hundred lines, a couple of orders of magnitude larger than anything I had written before. And I cared deeply about my program, the problem it solved, and its usefulness to real people.
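    The successive-approximation idea can be sketched in a few lines. This is a hypothetical reconstruction in Python, not the original BASIC; the function name, the 1500 starting rating, and the 400-point performance bonus are my own assumptions, chosen to echo common Elo-style performance-rating formulas:

```python
# Hypothetical sketch: seed ratings for an unrated pool by successive
# approximation. Each player's rating is re-estimated as the average of
# their opponents' current ratings plus a per-game performance bonus,
# until no rating moves by more than epsilon.
def seed_ratings(results, start=1500.0, epsilon=0.5, max_rounds=1000):
    """results: {player: [(opponent, score), ...]} with score 1, 0.5, or 0."""
    ratings = {p: start for p in results}
    for _ in range(max_rounds):
        new = {}
        for p, games in results.items():
            avg_opp = sum(ratings[o] for o, _ in games) / len(games)
            score = sum(s for _, s in games)
            # 400 * (wins - losses) / games, since wins - losses = 2*score - games
            new[p] = avg_opp + 400.0 * (2.0 * score - len(games)) / len(games)
        if max(abs(new[p] - ratings[p]) for p in ratings) < epsilon:
            return new
        ratings = new
    return ratings  # with epsilon == 0, we might never get here via convergence
```

Because each estimate feeds the next round's averages, the loop can oscillate forever for some result tables, which is exactly why an epsilon greater than 0 was needed.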

    I enjoyed everyone else's stories, too. They reminded us all about the varied sources of passion, and how solving a problem can open our eyes to a new world to explore. I was pleased by the diversity of our lot, which included workshop co-organizer John Gribble, a poet friend of Richard's who has never written a program; Rebecca Rikner, the graphic artist who designed the wonderful motif for Richard's book Writers' Workshops and the Work of Making Things; and Guy Steele, one of the best computer scientists around. The rest of us were computer science and software types, including one of my favorite bloggers, Nat Pryce. Richard's first passionate program was perhaps a program to generate "made-up words" from some simple rules, to use in naming his rock-and-roll band. Guy offered three representative, if not first, programs: a Lisp interpreter written in assembly language, a free verse generator written in APL, and a flow chart generator written in RPG. This wasn't the last mention of APL today, which is often the sign of a good day.

    Our morning was built around an essay written by John Gribble for the occasion, called "Permission, Pressure, and the Creative Process". John read his essay, while occasionally allowing us in the audience to comment on his remarks or previous comments. John offered as axioms two beliefs that I share with him:

    • that all people are creative, that is, possess the potential to act creatively, and
    • that there is no difference of kind between creativity in the arts and creativity in the sciences.

    What the arts perhaps offer scientists is the history and culture of examining the creative process. We scientists and other analytical folks tend to focus on the product, often to the detriment of understanding the process by which we create it.

    John quoted Stephen King from his book On Writing, that the creator's job is not to find good ideas but to recognize them when they come along. For me, this idea foreshadows Ward Cunningham's keynote address at tomorrow's Educators' Symposium. Ward will speak on "nurturing the feeble simplicity", on recognizing the seeds of great ideas despite their humility and nurturing them into greatness. As Brian Foote pointed out later in the morning, this sort of connection is what makes conferences like OOPSLA so valuable and fun -- plunk yourself down into an idea-rich environment, soak in good ideas from good minds, and your own mind has the raw material it needs to make connections. That's a big part of creativity!

    John went on to assert that creativity isn't rare, but rather so common that we are oblivious to it. What is rare is for people to act on their inspirations. Why do we not act? We have so low an opinion of ourselves that we figure the inspiration isn't good enough, or that we can't do it justice in our execution. Another reason: we fear to fail, or to look bad in front of our friends and colleagues. We are self-conscious, and the self gets in the way of the creative act.

    Most people, John believes, need permission to act creatively. Most of us need external permission and approval to act, from friends or colleagues, peers or mentors. This struck an immediate chord with me in three different relationships: student and teacher, child and parent, and spouse and spouse. The discussion in our workshop focused on the need to receive permission, but my immediate thought was of my role as potential giver of permission. My students are creative, but most of them need me to give them permission to create. They are afraid of bad grades and of disappointing me as their instructor; they are self-conscious, as going through adolescence and our school systems tend to make them. My young daughters began life unself-conscious, but so much of their lives are about bumping into boundaries and being told "Don't do that." I suspect that children grow up most creative in an environment where they have permission to create. (Note that this is orthogonal to the issue of discipline or structure; more on that later.) Finally, just as I find myself needing my wife's permission to do and act -- not in the henpecked husband caricature, but in the sense of really caring about what she thinks -- she almost certainly feels the need for *my* permission. I don't know why this sense that I need to be a better giver of permission grew up so strong so quickly today, but it seemed like a revelation. Perhaps I can change my own behavior to help those around me feel like they can create what they want and need to create. I suspect that, in loosening the restrictions I project onto others, I will probably free myself to create, too.

    When author Donald Ritchie is asked how to start writing, he says, "First, pick up your pencil..." He's not being facetious. If you wait for inspiration to begin, then you'll never begin. Inspiration comes to those already involved in the work.

    Creativity can be shaped by constraints. I wrote about this idea six months or so ago in an entry named Patterns as a Source of Freedom. Rebecca suggested that for her, at least, constraints are essential to creativity, and that this is why she opted to be a graphic designer instead of a "fine artist". The framework we operate in can change, across projects or even within a project, but the framework can free us to create. Brian recalled a song by the '80s punk-pop band Devo called Freedom Of Choice:

    freedom of choice is what you got
    then if you got it you don't want it
    seems to be the rule of thumb
    don't be tricked by what you see
    you got two ways to go
    freedom from choice is what you want

    Richard then gave a couple of examples of how some artists don't exercise their choice at the level of creating a product but rather at the level of selecting from lots of products generated less self-consciously. In one, a photographer for National Geographic put together a pictorial article containing 22 pictures selected from the 40,000 photos he snapped. In another, Francis Ford Coppola shot 250 hours of film in order to create the 2-1/2 hour film Apocalypse Now.

    John then told a wonderful little story about an etymological expedition he took along the trail of ideas from the word "chutzpah", which he adores, to "effrontery", "presumptuous", and finally "presumption" -- to act as if something were true. This is a great way to free oneself to create -- to presume that one can, that one will, that one should. Chutzpah.

    Author William Stafford had a particular angle he took on this idea, what he termed the "path of stealth". He refused to believe in writer's block. He simply lowered his standards. This freed him to write something and, besides, there's always tomorrow to write something better. But as I noted earlier, inspiration comes to those already involved in the work, so writing anything is better than writing nothing.

    As editor John Gould once told Stephen King, "Write with the door closed. Revise with the door open." Write for yourself, with no one looking over your shoulder. Revise for readers, with their understanding in mind.

    Just as permission is crucial to creativity, so is time. We have to "make time", to "find time". But sometimes the work is on its own time, and will come when and at the rate it wants. Creativity demands that we allow enough time for that to happen! (That's true even for the perhaps relatively uncreative act of writing programs for a CS course... You need time, for understanding to happen and take form in code.)

    Just as permission and time are crucial to creativity, John said, so is pressure. I think we all have experienced times when a deadline hanging over our heads seemed to give us the power to create something we would otherwise have procrastinated away. Maybe we need pressure to provide the energy to drive the creative act. This pressure can be external, in the form of a client, boss, or teacher, or internal.

    This is one of the reasons I do not accept late work for a grade in my courses; I believe that most students benefit from that external impetus to act, to stop "thinking about it" and commit to code. Some students wait too long and reach a crossover point: the pressure grows quite high, but time is too short. Life is a series of balancing acts. The play between pressure and time is, I think, fundamental. We need pressure to produce, but we need time to revise. The first draft of a paper, talk, or lecture is rarely as good as it can be. Either I need to give myself time to create more and better drafts, or -- which works better for me -- I need to find many opportunities to deliver the work, to create multiple opportunities to create in the small through revision, variation, and natural selection. This is, I think, one of the deep and beautiful truths embedded in extreme programming's cycle "write a test, write code, and refactor".

    Ultimately, a professional learns to rely more on internal pressure, pressure applied by the self for the self, to create. I'm not talking about the censoriousness of self-consciousness, discussed earlier, which tells us that what we produce isn't good enough -- that we should not act, at least in the current product. I'm talking about internal demands that we act, in a particular way or time. Accepting the constraints of a form -- say, the line and syllable restrictions of haiku, or the "no side effects" convention of functional programming style -- puts pressure on us to act in a particular way, whether it's good or bad. John gave us two other kinds of internal pressure, ones he applies to himself: the need to produce work to share at his two weekly writers' workshops, and the self-discipline of submitting work for publication every month. These pressures involve outside agents, but they are self-imposed, and they require us to do something we might otherwise not.

    John closed with a short inspiration. Pay attention to your wishes and dreams. They are your mind's way of telling you to do something.

    We spent the rest of the morning chatting as a group on whatever we were thinking after John's talk. Several folks related an experience well-known to any teacher: someone comes to us asking for help with a problem and, in the act of explaining the problem to us, they discover the answer for themselves. Students do this with me often. Is the listener essential to this experience, or could we get the same benefit just by acting as if we were speaking to someone? I suspect that another person is essential for this to work for the learner, both because having a real person to talk to makes us explain things (pressure!) and because the listener can force us to explain the problem more clearly ("I don't understand this yet...").

    A recurring theme of the morning was the essential interactivity of creativity, even when the creator works fundamentally alone. Poets need readers. Programmers need other folks to bounce ideas off of. Learners need someone to talk to, if only to figure things out for themselves. People can be sources of ideas. They can also be reflectors, bouncing our own ideas back at us, perhaps in a different form or perhaps the same, but with permission to act on them. Creativity usually comes in the novel combination of old ideas, not truly novel ideas.

    This morning session was quite rewarding. My notes on the whole workshop are, fittingly, about half over now, but this article has already gotten quite long. So I think I'll save the afternoon sessions for entries to come. These sessions were quite different from the morning, as we created things together and then examined our processes and experiences. They will make fine stand-alone articles that I can write later -- after I break away for a bite at the IBM Eclipse Technology Exchange reception and for some time to create a bit on my own for what should take over my mind for a few hours: tomorrow's Educators' Symposium, which is only twelve hours and one eight- to ten-mile San Diego run away!


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

    October 13, 2005 6:36 PM

    A Good Day

    Some days, I walk out of the classroom and feel, "Man, I am good." Today was such a day. It wasn't a perfect class, though I've felt that way some days, too. But today everything seemed to go just right: The problem we solved was challenging, and the ideas we needed to solve it flowed naturally. Students asked questions at the right times, which let me address important issues just in time. Unsolicited, students also made comments that added lightness to our work, and comments that indicated they were seeing the beauty in the approach and the code.

    I leave the classroom enough days feeling much less, so this sensation stands out. It's a nice way to end the week before I head off to OOPSLA.

    Now, I have no idea whether the students felt the same way leaving class as I did, other than the well-placed questions and comments. For all I know, they left saying "Man, that guy is a @#$%^?." I hope not but, you know, some days it just doesn't matter.


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    October 11, 2005 6:54 PM

    Something New Every Day

    I'm not a Way Old-Timer, but I have been teaching at my current university for thirteen years. In that time, I have seen a lot of odd student programs: wacky bugs, quirky algorithms, outlandish hacks, and just plain funny code. But today I saw a new one...

    One of my colleagues is teaching our intro majors course. In the lab today, he asked students to write some code, including a three-argument Java method to return the minimum of the three arguments. The goal of this exercise was presumably to test the students' ability to consider multiple decisions, to write compound boolean expressions, and most probably to write nested if statements.

    Here is my rendition of one student's extraordinary solution:

        public int min( int x, int y, int z )
        {
            for (int i = Integer.MIN_VALUE; i <= Integer.MAX_VALUE; i++)
                if ( i == x )
                    return x;
                else if ( i == y )
                    return y;
                else if ( i == z )
                    return z;

            // Unreachable at run time -- the loop always finds x --
            // but the compiler cannot prove that, so it demands a
            // final return path here.
            throw new IllegalStateException( "unreachable" );
        }
    

    Sadly, the student didn't do this quite right, as "right" would be for this approach. The program did not use Integer.MIN_VALUE and Integer.MAX_VALUE to control its loop; it used hardcoded negative and positive 2,000,000,000. As a result, it had to throw in a back-up return statement after the for-loop to handle cases where the absolute values of all three numbers were greater than 2,000,000,000. So the solution loses points for correctness, and a bit of luster on style.

    But still -- wow. No matter what instructors think their students will think, the students can come up with something out of left field.

    I have to admit... In a certain way, I admire this solution. It demonstrates creativity, and even applies a pattern that works well in other contexts. If the student had been asked to write a method to find the smallest value in a three-element unsorted array, then brute force would not only have seemed like a reasonable thing to do; it would have been the only option. Why not try it for ints? (For us XP mavens: Is this algorithm the simplest thing that will work?)

    One answer to the "Why not?" question comes at run time. This method uses 12 seconds or so on average to find its answer. That's almost 12 seconds more than an if-based solution uses. :-)
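    For comparison, here is the sort of nested-if solution the exercise was presumably after -- my own sketch, not the instructor's answer key:

    ```java
    // The intended approach: nested if statements find the minimum
    // of three ints in at most two comparisons, with no looping.
    public class Min3 {
        public static int min( int x, int y, int z ) {
            if (x <= y) {
                if (x <= z)
                    return x;
                else
                    return z;
            } else {
                if (y <= z)
                    return y;
                else
                    return z;
            }
        }
    }
    ```

    Two comparisons versus up to four billion loop iterations: the run-time difference the timing above reports is no mystery.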

    Students can bring a smile to an instructor's face most every day.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    October 07, 2005 4:34 PM

    Teaching and Administration as Running

    Over the life of this blog, I have used running as a metaphor for software development, for example, in this entry about pace and expectations. But I recently came across running as a metaphor for another part of my professional life: teaching versus administration. This writer compares teaching to sprinting, and administration to marathoning. On first glance, the analogy is attractive.

    A teacher spends hours upon hours preparing for a scant few hours in front of the class, and those hours are high intensity and quite draining. I've rarely in other situations been as tired as I am at the end of a day in which I teach three 75-minute courses.

    An administrator has to save up energy for use throughout a week. A meeting here, a phone call from a parent there, encounters with deans and faculty and university staff and students... Administrators have to pace themselves for a longer haul, as they have to be up and ready to go more frequently over most or all of their time on duty.

    The real test of an analogy's value is in the questions it helps us ask about what we do. So I'll have to think more about this "sprinting versus marathoning" analogy before I know whether it is a really good one.

    I do know one thing, though. If my administrative duties ever make me feel like this, I will return to my full-time faculty gig faster than my dean can say, "Are you sure?"


    Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Teaching and Learning

    October 06, 2005 6:48 PM

    More Mathematics, More Coincidence

    When I started writing my recent piece on math requirements for CS degrees, I had intended to go in a different direction than the piece ultimately went. My original plan was to discuss how it is that students who do take courses like algebra in high school still seem so uncomfortable or ill-prepared to do basic arithmetic in their CS courses. But a related thread on the SIGCSE mailing list took my mind, and so the piece, elsewhere. You know what they say... You never know what you will write until you write.

    So I figured I would try to write an article on my original topic next. Again before I could write it, another math coincidence occurred, only this time nearer to my intended topic. I read Tall, Dark, and Mysterious's detailed exploration of the same problem. She starts by citing a survey that found 40% of university professors believe that *most* of their students lack "the basic skills for university-level work", explores several possible causes of the problem, and then discusses in some detail what she believes the problem to be: an emphasis in education these days on content over skill. I think that this accounts for at least part of the problem.

    Whether or not we overemphasize content at the expense of skill, I think there is another problem at play. Even when we emphasize skill, we don't always require that students master the skills they learn.

    For many years, I had a short article hanging on my office wall that asked the question: What is the difference between a grade of C and a grade of A in a course? Does it mean that a C student has learned less content than the A student? The same content, but not as deeply? Something else?

    Several popular jokes play off this open question. Do you want your medical doctor to have been a C student? Your lawyer? The general leading your army into battle?

    In my experience as a student and educator, the difference between a C and an A indicates different things depending on the teacher involved and, to a lesser extent, the school involved. But it's not clear to me that even for these teachers and schools the distinction is an intentional one, or that the assigned grades always reflect what is intended.

    Learning theory gives us some idea of how we might assign grades that reflect meaningful distinctions between different levels of student performance. For example, Bloom's taxonomy of educational objectives includes six levels of learning: knowledge, comprehension, application, analysis, synthesis, and evaluation. These levels give us a way to talk about increasingly more masterful understanding and ability. Folks in education have written a lot in this sphere of discussion which, sadly, I don't know as well as I probably should. Fortunately, some CS educators have begun to write articles applying the idea to computer science curricula. We are certainly better off if we are thinking explicitly about what satisfactory and unsatisfactory performance means in our courses, and in our curricula overall.

    I heard about an interesting approach to this issue this morning at a meeting of my college's department heads. We were discussing the shortcomings of a course-by-course approach to assessing the learning outcomes of our university's liberal arts core, which purports by cumulative effect to create well-rounded, well-educated thinkers. One head described an assessment method she had read about in which the burden was shifted to the student. In this method, each student was asked to offer evidence that they had achieved the goal of being a well-rounded thinker. In effect, the student was required to "prove" that they were, in fact, educated. If we think in terms of the Bloom taxonomy discussed above, each student would have to offer evidence that they had reached each of the six cognitive levels of maturity. Demonstrating knowledge might be straightforward, but what of comprehension, application, analysis, synthesis, and evaluation? Students could assemble a portfolio of projects, the development of which required them to comprehend, apply, analyze, synthesize, and evaluate.

    This reminded me very much of how my architecture friends had to demonstrate their readiness to proceed to the next level of the program, and ultimately to graduate: through a series of juried competitions. These projects and their juried evaluation fell outside the confines of any particular course. I think that this would be a marvelous way for computer science students, at least the ones focused on software development as a career path, to demonstrate their proficiency. I have been able to implement the idea only in individual courses, senior-level project courses required of all our majors. The result has been some spectacular projects in the intelligent systems area, including one I've written about before. I've also seen evidence that some of our seniors manage to graduate without having achieved a level of proficiency I consider appropriate. As one of their instructors, I'm at least partly responsible for that.

    This explains why I am so excited about one of the sessions to be offered as a part of the upcoming Educators Symposium at OOPSLA 2005. The session is called Apprenticeship Agility in Academia. Dave West and Pam Rostal, formerly of New Mexico Highlands University, will demonstrate "a typical development iteration as practiced by the apprentices of the NMHU Software Development Apprenticeship Program". This apprenticeship program was an attempt to build a whole software development curriculum on the idea that students advance through successive levels of knowledge and skill mastery. Students were able to join projects based on competencies already achieved and the desire to achieve further competencies offered by the projects. Dave and his folks enumerated all of the competencies that students must achieve prior to graduation, and students were then advanced in the program as they achieved them at successive levels of mastery. It solves the whole "C grade versus A grade" problem by ignoring it, focusing instead on what they really wanted students to achieve.

    Unfortunately, I have to use the past tense when describing this program, because NMHU canceled the program -- apparently for reasons extraneous to the quality or performance of the program. But I am excited that someone had the gumption and an opportunity to try this approach in practice. I'd love to see more schools and more CS educators try to integrate such ideas into their programs and courses. (That's one of the advantages of chairing an event like the OOPSLA Educators Symposium... I have some ability to shape or direct the discussion that educators have at the conference.)

    For you students out there: To what extent do you strive for mastery of the skills you learn in your courses? When do you settle for less, and why? What can you do differently to help yourselves become effective thinkers and practitioners?

    For you faculty out there: To what extent do you focus your teaching and grading efforts on student mastery of skills? What techniques work best, and why? When do you settle for less, and why? I myself know that I settle for less all too often, sometimes due to the impediments placed in my way by our curriculum and university structure, but sometimes due to my own lack of understanding or even laziness. Staying in an intellectual arena in which I can learn from others such as West and Rostal is one way I encourage myself to think Big Thoughts and try to do better.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    September 29, 2005 1:49 PM

    Mathematics Coincidence

    An interesting coincidence... Soon after I post my spiel on preparing to study computer science, especially the role played by mathematics courses, the folks on the SIGCSE mailing list have started a busy thread on the place of required math courses in the CS curriculum. The thread began with a discussion of differential equations, which some schools apparently still require for a CS degree. The folks defending such math requirements have relied on two kinds of argument.

    One is to assert that math courses teach discipline and problem-solving skills, which all CS students need. I discussed this idea in my previous article. I don't think there is much evidence that taking math courses teaches students problem-solving skills or discipline, at least not as most courses are taught. They do tend to select for problem-solving skills and discipline, though, which makes them handy as a filter -- if that's what you want. But they are not always helpful as learning experiences for students.

    The other is to argue that students may find themselves working on scientific or engineering projects that require solving differential equations, so the course is valuable for its content. My favorite rebuttal to this argument came from a poster who listed a dozen or so projects that he had worked on in industry over the years. Each required specific skills from a domain outside computing. Should we then require one or more courses from each of those domains, on the chance that our students work on projects in them? Could we?

    Of course we couldn't. Computing is a universal tool, so it can and usually will be applied everywhere. It is something of a chameleon, quickly adaptable to the information-processing needs of a new discipline. We cannot anticipate all the possible applications of computing that our students might encounter any more than we can anticipate all the possible applications of mathematics they might encounter.

    The key is to return to the idea that underlies the first defense of math courses, that they teach skills for solving problems. Our students do need to develop such skills. But even if students could develop such skills in math courses, why shouldn't we teach them in computing courses? Our discipline requires a particular blend of analysis and synthesis and offers a particular medium for expressing and experimenting with ideas. Computer science is all about describing what can be systematically described and how to do so in the face of competing forces. The whole point of an education in computing should be to help people learn how to use the medium effectively.

    Finally, Lynn Andrea Stein pointed out an important consideration in deciding what courses to require. Most of my discussion and the discussion on the SIGCSE mailing list has focused on the benefits of requiring, say, a differential equations course. But we need also to consider the cost of such a requirement. We have already encountered one: an opportunity cost in the form of people. Certain courses filter out students who are unable to succeed in that course, and we need to be sure that we are not missing out on students who would make good computer science students. For example, I do not think that a student's inability to succeed in differential equations means that the student cannot succeed in computer science. A second opportunity cost comes in the form of instructional time. Our programs can require only so many courses, so many hours of instruction. Could we better spend a course's worth of time in computing on a topic other than differential equations? I think so.

    I remember learning about opportunity cost, in an economics course I took as an undergrad. Taking a broad set of courses outside of computing really can be useful.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    September 27, 2005 7:10 PM

    Preparing to Study Computer Science

    Yesterday, our department hosted a "preview day" for high school seniors who are considering majoring in computer science here at UNI. During the question-and-answer portion of one session, a student asked, "What courses should we take in our senior year to best prepare to study CS?" That's a good question, and one that resulted in a discussion among the CS faculty present.

    For most computer science faculty, the almost reflexive answer to this question is math and science. Mathematics courses encourage abstract thinking, attention to detail, and precision. Science courses help students think like empiricists: formulating hypotheses, designing experiments, making and recording observations, and drawing inferences. A computer science student will use all of these skills throughout her career.

    I began my answer with math and science, but the other faculty in the room reacted in a way that let me know they had something to say, too. So I let them take the reins. All three downplayed the popular notion that math, at least advanced math, is an essential element of the CS student's background.

    One faculty member pointed out that students with backgrounds in music often do very well in CS. This follows closely with the commonly-held view that music helps children to develop skill at spatial and symbolic reasoning tasks. Much of computing deals not with arithmetic reasoning but with symbolic reasoning. As an old AI guy, I know this all too well. In much the same way that music might help CS students, studying language may help students to develop facility manipulating ideas and symbolic representations, skills that are invaluable to the software developer and the computing researcher alike.

    We ended up closing our answer to the group by saying that studying whatever interests you deeply -- and really learning that discipline -- will help you prepare to study computer science more than following any blanket prescription to study a particular discipline.

    (In retrospect, I wish I had thought to tack on one suggestion to our conclusion: There can be great value in choosing to study something that challenges you, that doesn't interest you as much as everything else, precisely because it forces you to grow. And besides, you may find that you come to understand the something well enough to appreciate it, maybe even like it!)

    I certainly can't quibble with the direction our answer went. I have long enjoyed learning from writers, and I believe that my study of language and literature, however narrow, has made me a better computer scientist. I have had many CS students with strong backgrounds in art and music, including one I wrote about last year. Studying disciplines other than math and science can lay a suitable foundation for studying computer science.

    I was surprised by the strength of the other faculty's reaction to the notion that studying math is among the best ways to prepare for CS. One of these folks was once a high school math teacher, and he has always expressed dissatisfaction with mathematics pedagogy in the US at both the K-12 and university levels. To him, math teaching is mostly memorize-and-drill, with little or no explicit effort put into developing higher-order thinking skills for doing math. Students develop these skills implicitly, if at all, through sheer dint of repetition. In his mind, the best that math courses can do for CS is to filter out folks who have not yet developed higher-order thinking skills; it won't help students develop them.

    That may well be true, though I know that many math teachers and math education researchers are trying to do more. But, while students may not need advanced math courses to succeed in CS -- at least not in many areas of software development -- they do need to master some basic arithmetical skills. I keep thinking back to a relatively straightforward programming assignment I've given in my CS II course, to implement Nick Parlante's NameSurfer nifty assignment in Java. A typical NameSurfer display looks like this image, from Nick's web page:

    As the user resizes the window, the program should grow or shrink its graph accordingly. To draw this image, the student must do some basic arithmetic to lay out the decade lines and to place the points on the lines and the names in the background. To scale the image, the student must do this arithmetic relative to window size, not with fixed values.
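    Nothing in the assignment demands more than proportional reasoning. Here is a sketch of the kind of relative-layout arithmetic involved, in Python for brevity; the function names and the margin value are illustrative, not part of Nick's spec:

```python
# Illustrative layout arithmetic for a NameSurfer-style graph.
# All positions are computed relative to the current window size,
# so resizing the window just means recomputing with new dimensions.

def decade_x(decade_index, num_decades, window_width):
    """x-coordinate of a decade's vertical line, scaled to window width."""
    return decade_index * (window_width / num_decades)

def rank_y(rank, max_rank, window_height, margin=20):
    """y-coordinate for a popularity rank: rank 1 at the top margin,
    max_rank at the bottom margin, everything else in proportion."""
    usable = window_height - 2 * margin
    return margin + (rank - 1) / (max_rank - 1) * usable
```

    The whole trick students must see is that no coordinate is a fixed number; each is a fraction of the window's current width or height.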

    Easy, right? When I assigned this program, many students reacted as if I had cut off one of their fingers. Others seemed incapable of constructing the equations needed to do scaling correctly. (And you should have seen the reaction students had when once, many years ago, I asked students to write a graphical Java version of Mike Clancy's delicious Cat And Mouse nifty assignment. Horror of horrors -- polar coordinates!)

    This isn't advanced math. This is algebra. All students in our program were required to pass second-year algebra before being admitted to our university. But passing a course does not require mastery, and students find themselves with a course on their transcript but not the skills that the course entails.

    Clearly, mastery of basic arithmetic skills is essential to most of computer science, even if more advanced math, even calculus, is not. Especially when I think of algebraic reasoning more abstractly, I have a hard time imagining how students can go very far in CS without mastering algebraic reasoning. Whatever its other strengths or weaknesses, the How to Design Programs approach to teaching programming does one thing well, and that is to make an explicit connection between algebraic reasoning and programs. The result is something in the spirit of Polya's How to Solve It.
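    That connection is easy to illustrate. Here is the classic area-of-ring exercise from How to Design Programs, transcribed into Python (HtDP itself uses Scheme): an algebraic definition becomes a function, and algebraic reasoning -- substitute and simplify -- predicts exactly what the program computes.

```python
# area_of_ring(r1, r2) = pi*r1^2 - pi*r2^2
# The program is the algebra, composed from a simpler definition.
# (3.14159 is the approximation HtDP uses for pi.)

def area_of_disk(radius):
    return 3.14159 * radius ** 2

def area_of_ring(outer, inner):
    return area_of_disk(outer) - area_of_disk(inner)
```

    A student who can simplify pi*5^2 - pi*3^2 by hand can trace this program by the same steps.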

    This brings us back to what is the weakest part of the "math and science" answer to our brave high school student's question. So much of computing is not theory or analysis but design -- the act of working out the form of a program, interface, or system. While we may talk about the "design" of a proof or scientific experiment, we mean something more complex when we talk about the design of software. As a result, math and science do relatively little to help students develop the design skills which will be so essential to succeeding in the software side of CS.

    Studying other disciplines can help, though. Art, music, and writing all involve the students in creating things, making them think about how to make. And courses in those disciplines are more likely to talk explicitly about structure, form, and design than are math and science.

    So, we have quite defensible reasons to tell students to study disciplines other than science and math. I would still temper my advice by suggesting that students study math and science as well as literature, music, art, and other creative disciplines. While this may not be what our student was looking for, perhaps the best answer is all of the above.

    Then again, maybe success in computing is controlled more by aptitude than by learning. If that is the case, then many of us, students and faculty alike, are wasting a lot of time. But I don't think that's true for most folks. Like Alan Kay, I think we just need to understand better this new medium that is computing so that we can find the right ways to empower as many people as possible to create new kinds of artifact.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    September 21, 2005 8:22 PM

    Two Snippets, Unrelated?

    First... A student related this to me today:

    But after your lecture on mutual recursion yesterday, [another student] commented to me, "Is it wrong to think code is beautiful? Because that's beautiful."

    It certainly isn't wrong to think code is beautiful. Code can be beautiful. Read McCarthy's original Lisp interpreter, written in Lisp itself. Study Knuth's TeX program, or Wirth's Pascal compiler. Live inside a Smalltalk image for a while.

    I love to discover beautiful code. It can be professional code or amateur, open source or closed. I've even seen many beautiful programs written by students, including my own. Sometimes a strong student delivers something beautiful as expected. Sometimes, a student surprises me by writing a beautiful program seemingly beyond his or her means.

    The best programmers strive to write beautiful code. Don't settle for less.

    (What is mutual recursion, you ask? It is a technique used to process mutually-inductive data types. See my paper Roundabout if you'd like to read more.)
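    A minimal illustration, sketched in Python rather than Scheme: an "expression" is either an atom or a list of expressions, and each form of the data gets its own function, with the two functions calling each other just as the two forms refer to each other.

```python
# Mutual recursion over mutually-inductive data:
# an expression is an atom OR a list of expressions.

def count_atoms(expr):
    """Count the atoms in a single expression."""
    if isinstance(expr, list):
        return count_atoms_in_list(expr)   # defer to the list-processor
    return 1                               # an atom counts as one

def count_atoms_in_list(exprs):
    """Count the atoms in a list of expressions."""
    if not exprs:
        return 0
    return count_atoms(exprs[0]) + count_atoms_in_list(exprs[1:])
```

    The shape of the code mirrors the shape of the data -- which is exactly the beauty the student saw.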

    The student who told me the quote above followed with:

    That says something about the kind of students I'm associating with.

    ... and about the kind of students I have in class. Working as an academic has its advantages.

    Second... While catching up on some blog reading this afternoon, I spent some time at Pragmatic Andy's blog. One of his essays was called What happens when t approaches 0?, where t is the time it takes to write a new application. Andy claims that this is the inevitable trend of our discipline and wonders how it will change the craft of writing software.

    I immediately thought of one answer, one of those unforgettable Kent Beck one-liners. On a panel at OOPSLA 1997 in Atlanta, Kent said:

    As speed of development approaches infinity, reusability becomes irrelevant.

    If you can create a new application in no time flat, you would never worry about reusing yesterday's code!

    ----

    Is there a connection between these two snippets? Because I am teaching a programming languages course this semester, and particularly a unit on functional programming right now, these snippets both call to mind the beauty in Scheme.

    You may not be able to write networking software or graphical user interfaces using standard Scheme "out of the box", but you can capture some elegant patterns in only a few lines of Scheme code. And, because you can express rather complex computations in only a few lines of code, the speed of development in Scheme or any similarly powerful language approaches infinity much faster than does development in Java or C or Ada.
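    For a taste of what "an elegant pattern in a few lines" means, here is a right fold -- a staple of functional programming -- sketched in Python for readers without a Scheme handy. One five-line definition captures a whole family of list computations:

```python
def fold_right(combine, base, items):
    """Collapse a list right-to-left:
    fold_right(f, b, [x, y]) computes f(x, f(y, b))."""
    if not items:
        return base
    return combine(items[0], fold_right(combine, base, items[1:]))

# Two classic list functions, each recovered in one line:
def total(numbers):
    return fold_right(lambda x, acc: x + acc, 0, numbers)

def length(items):
    return fold_right(lambda _, acc: 1 + acc, 0, items)
```

    In Scheme the same pattern is even terser, and discovering it is one of the joys of the course.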

    I do enjoy being able to surround myself with the possibility of beauty and infinity each day.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

    September 15, 2005 8:10 PM

    Technology and People in a Flat World

    Technology based on the digital computer and networking has radically changed the world. In fact, it has changed what is possible in such a way that how we do business and entertain ourselves in the future may bear little resemblance to what we do today.

    This will surely come as no surprise to those of you reading this blog. Blogging itself is one manifestation of this radical change, and many bloggers devote much of their blogging to discussing how blogging has changed the world (ad nauseam, it sometimes seems). But even without blogs, we all know that computing has redefined the parameters within which information is created and shared, and defined a new medium of expression that we and the computer-using world have only begun to understand.

    Thomas Friedman

    Last night, I had the opportunity to hear Thomas Friedman, Pulitzer Prize-winning international affairs columnist for the New York Times, speak on the material in his bestseller, The World Is Flat: A Brief History of the Twenty-First Century. Friedman's book tells a popular tale of how computers and networks have made physical distance increasingly irrelevant in today's world.

    Two caveats up front. The first is simple enough: I have not read the book The World Is Flat yet, so my comments here will refer only to the talk Friedman delivered here last night. I am excited by the ideas and would like to think and write about them while they are fresh in my mind.

    The second caveat is a bit touchier. I know that Friedman is a political writer and, as such, carries with him the baggage that comes from at least occasionally advocating a political position in his writing. I have friends who are big fans of his work, and I have friends who are not fans at all. To be honest, I don't know much about his political stance beyond my friends' gross characterizations of him. I do know that he has engendered strong feelings on both sides of the political spectrum. (At least one of his detractors has taken the time to create the Anti-Thomas Friedman Page -- more on that later.) I have steadfastly avoided discussing political issues in this blog, preferring to focus on technical issues, with occasional drift into the cultural effects of technology. This entry will not be an exception. Here, I will limit my comments to the story behind the writing of the book and to the technological arguments made by Friedman.

    On a personal note, I learned that, like me, Friedman is from the American Midwest. He was born in Minneapolis, married a girl from Marshalltown, Iowa, and wrote his first op-ed piece for the Des Moines Register.

    The idea to write The World Is Flat arose as a side effect of research Friedman was doing on another project, a documentary on off-shoring. He was interviewing Narayana Murthy, chairman of the board at Infosys, "the Microsoft of India", when Murthy said, "The global economic playing field is being leveled -- and you Americans are not ready for it." Friedman felt as if he had been sideswiped, because he considers himself well-studied in modern economics and politics, and he didn't know what Murthy meant by "the global economic playing field is being leveled" or how we Americans were so glaringly unprepared.

    As writers often do, Friedman set out to write a book on the topic in order to learn it. He studied Bangalore, the renowned center of the off-shored American computing industry. Then he studied Dalian, China, the Bangalore of Japan. Until last night, I didn't even know such a place existed. Dalian plays the familiar role. It is a city of over a million people, many of whom speak Japanese and whose children are now required to learn Japanese in school. They operate call centers, manage supply chains, and write software for Japanese companies -- all jobs that used to be done in Japan by Japanese.

    Clearly the phenomenon of off-shoring is not US-centric. Other economies are vulnerable. What is the dynamic at play?

    Friedman argues that we are in a third era of globalization. The first, which he kitschily calls Globalization 1.0, ran from roughly 1492, roughly when Europe began its imperial expansion across the globe, to the early 1800s. In this era, the agent of globalization was the country. Countries expanded their land holdings and grew their economies by reaching overseas. The second era ran from the early 1800s until roughly 2000. (Friedman chose this number as a literary device, I think... 1989 or 1995 would have made better symbolic endpoints.) In this era, the corporation was the primary agent of globalization. Companies such as the British East India Company reached around the globe to do commerce, carrying with them culture and politics and customs.

    We are now in Friedman's third era, Globalization 3.0. Now, the agent of change is the individual. Technology has empowered individual persons to reach across national and continental boundaries, to interact with people of other nationalities, cultures, and faiths, and to perform commercial, cultural, and creative transactions independent of their employers or nations.

    Blogging is, again, a great example of this phenomenon. My blog offers me a way to create a "brand identity" independent of any organization. (Hosting it at www.eugenewallingford.com would sharpen the distinction!) I am able to engage in intellectual and even commercial discourse with folks around the world in much the same way I do with my colleagues here at the university. In the last hour, my blog has been accessed by readers in Europe, Australia, South America, Canada, and in all corners of the United States. Writers have always had this opportunity, but at glacial rates of exchange. Now, anyone with a public library card can blog to the world.

    Technology -- specifically networking and the digital computer -- has made Globalization 3.0 possible. Friedman breaks our current era into a sequence of phases characterized by particular advances or realizations. The specifics of his history of technology are sometimes arbitrary, but at the coarsest level he is mostly on the mark:

    1. 11/09/89 - The Berlin Wall falls, symbolically opening the door for the East and West to communicate. Within a few months, Windows 3.0 shipped, and the new accessibility of the personal computer made it possible for all of us to be "digital authors".

    2. 08/09/95 - Netscape went public. The investment craze of its IPO presaged the dot-com boom, and the resultant investment in network technology companies supplied the capital that wired the world, connecting everyone to everyone else.

    3. mid 1990s - The technology world began to move toward standards for data interchange and software connectivity. This standards movement resulted in what Friedman calls a "collaboration platform", on which new ways of working together can be built.

    These three phases have been followed in rapid succession by a number of faster-moving realizations on top of the collaboration platform:

    1. outsourcing tasks from one company to another

    2. offshoring tasks from one country to another

    3. uploading of digital content by individuals

    4. supply chaining to maximize the value of offshoring and outsourcing by carefully managing the flow of goods and services at the unit level

    5. insourcing of other companies into the public interface of a company's commercial transactions

    6. informing oneself via search of global networks

    7. mobility (my term) of data and means of communication

    Uploading is the phase where blogs entered the picture. But there is so much more. Before blogs came open source software, in which individual programmers can change their software platform -- and share their advances with others by uploading code into a common repository. And before open source became popular we had the web itself. If Mark Rupert objects to what he considers Thomas Friedman's "repeated ridicule" of those opposed to globalization, then he can create a web page to make his case. Early on, academics had an edge in creating web content, but the advance of computing hardware and now software has made it possible for anyone to publish content. The blogging culture has even increased the opportunity to participate in wider debate more easily (though, as discussions of the "long tail" have shown, that effect may be dying off as the space of blogs grows beyond what is manageable by a single person).

    Friedman's description of insourcing sounded a lot like outsourcing to me, so I may need to read his book to fully get it. He used UPS and FedEx as examples of companies that do outsourced work for other corporations, but whose reach extends deeper into the core functions of the outsourcing company, intermingling in a way that sometimes makes the two companies' identities indistinguishable to the outside viewer.

    The quintessential example of informing is, of course, Google, which has made more information more immediately accessible to more people than any entity in history. It seems inevitable that, with time, more and more content will become available on-line. The interesting technical question is how to search effectively in databases that are so large and heterogeneous. Friedman explains well to his mostly non-technical audience that we are at just the beginning of our understanding of search. Google isn't the only player in this field, obviously, as Yahoo!, Microsoft, and a host of other research groups are exploring this problem space. I hold out belief that techniques from artificial intelligence will play an increasing role in this domain. If you are interested in Internet search, I suggest that you read Jeremy Zawodny's blog.

    Friedman did not have a good name for the most recent realization atop his collaboration platform, referring to it as all of the above "on steroids". To me, we are in the stage of realizing the mobility and pervasiveness of digital data and devices. Cell phones are everywhere, and usually in use by the holder. Do university students ever hang up? (What a quaint anachronism that is...) Add to this numerous other technologies such as wireless networks, voice over internet, bluetooth devices, ... and you have a time in which people are never without access to their data or their collaborators. Cyberspace isn't "out there" any more. It is wherever you are.

    These seven stages of collaboration have, in Friedman's view, engendered a global communication convergence, at the nexus of which commerce, economics, education, and governance have been revolutionized. This convergence is really an ongoing conversion of an old paradigm into a new one. Equally important are two other convergences in process. One he calls "horizontaling ourselves", in which individuals stop thinking in terms of what they create and start thinking in terms of who they collaborate with, of what ideas they connect to. The other is the one that ought to scare us Westerners who have grown comfortable in our economic hegemony: the opening of India, China, and the former Soviet Union, and 3 billion new players walking onto a level economic playing field.

    Even if we adapt to all of the changes wrought by our own technologies and become well-suited to compete in the new marketplace, the sheer numbers of our competitors will increase so significantly that the market will be a much starker place.

    Friedman told a little story later in the evening that illustrates this point quite nicely. I think he attributed the anecdote to Bill Gates. Thirty years ago, would you prefer to have been born a B student in Poughkeepsie, or a genius in Beijing or Bangalore? Easy: a B student in Poughkeepsie. Your opportunities were immensely wider and more promising. Today? Forget it. The soft B student from Poughkeepsie will be eaten alive by a bright and industrious Indian or Chinese entrepreneur.

    Or, in other words from Friedman, remember: In China, if you are "1 in a million", then there are 1300 people just like you.

    All of these changes will take time, as we build the physical and human infrastructure we need to capitalize fully on new opportunities. The same thing happened when we discovered electricity. The same thing happened when Gutenberg invented the printing press. But change will happen faster now, in large part due to the power of the very technology we are harnessing, computing.

    Gutenberg and the printing press. Compared to the computing revolution. Where have we heard this before? Alan Kay has been saying much the same thing, though mostly to a technical audience, for over 30 years! I was saddened to think that nearly everyone in the audience last night thinks that Friedman is the first person to tell this story, but gladdened that maybe now more people will understand the momentous weight of the change that the world is undergoing as we live. Intellectual middlemen such as Friedman still have a valuable role to play in this world.

    As Carly Fiorina (who was recently Alan's boss at Hewlett-Packard before both were let go in a mind-numbing purge) said, "The 'IT revolution' was only a warm-up act." Who was it that said, "The computer revolution hasn't happened yet."?

    The question-and-answer session that followed Friedman's talk produced a couple of good stories, most of which strayed into policy and politics. One dealt with a topic close to this blog's purpose, teaching and learning. As you might imagine, Friedman strongly suggests education as an essential part of preparing to compete in a flat world, in particular the ability to "learn how to learn". He told us of a recent speaking engagement at which an ambitious 9th grader asked him, "Okay, great. What class do I take to learn how to learn?" His answer may be incomplete, but it was very good advice indeed: Ask all your friends who the best teachers are, and then take their courses -- whatever they teach. It really doesn't matter the content of the course; what matters is to work with teachers who love their material, who love to teach, who themselves love to learn.

    As a teacher, I think one of the highest forms of praise I can get from a student is to be told that they want to take whatever course I am teaching the next semester. It may not be in their area of concentration, or in the hot topic du jour, but they want to learn with me. When a student tells me this -- however rare that may be -- I know that I have communicated something of my love for knowledge and learning and mastery to at least one student. And I know that the student will gain just as much in my course as they would have in Buzzword 401.

    We in science, engineering, and technology may benefit from Friedman's book reaching such a wide audience. He encourages a focus not merely on education but specifically on education in engineering and the sciences. Any American who has done a Ph.D. in computer science knows that CS graduate students in this country are largely from India and the Far East. These folks are bright, industrious, interesting people, many of whom are now choosing to return to their home countries upon completion of their degrees. They become part of the technical cadre that helps to develop competitors in the flat world.

    As I listened last night, Chad Fowler's new book My Job Went to India came to mind. This is another book I haven't read yet, but I've read a lot about it on the web. My impression is that Chad looks at off-shoring not as a reason to whine about bad fortune but as an opportunity to recognize our need to improve our skills for participating in today's marketplace. We need to sharpen our technical skills but also develop our communication skills, the soft skills that enable and facilitate collaboration at a level higher than uploading a patch to our favorite open source project. Friedman, too, looks at the other side of off-shoring, to the folks in Bangalore who are working hard to become valuable contributors in a world redefined by technology. It may be easy to blame American CEOs for greed, but that ignores the fact that the world is changing right before us. It also does nothing to solve the problem.

    All in all, I found Friedman to be an engaging speaker who gave a well-crafted talk full of entertaining stories but with substance throughout. I can't recommend his book yet, but I can recommend that you go to hear him speak if you have the opportunity.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    September 01, 2005 10:44 PM

    Back to Scheme in the Classroom

    In addition to my department head duties, I am also teaching one of my favorite courses this semester, Programming Languages and Paradigms. The first part of the course introduces students to functional programming in Scheme.

    I've noticed an interesting shift in student mentality about programming languages since I first taught this course eight or ten years ago. There are always students whose interest lies in learning only those languages that will be of immediate use to them in their professional careers. For them, Scheme seems at best a distraction and at worst a waste of time. I feel sorry for such folks, because they miss out on a lot of beauty with their eyes so narrowly focused on careers. Even worse, they miss out on a chance to learn ideas that may well show up in the languages they will find themselves using ... such as Python and Ruby.

    But increasingly I encounter students who are much more receptive to what Scheme can teach them. It seems that this shift has paralleled the rise of scripting languages. As students have come to use Perl, Python, and PHP for web pages and for hacking around with Linux, they have come to see the power of what they call "sloppy" languages -- languages that don't have a lot of overhead, especially as regards static typing, that let them make crazy mistakes, but that also empower them to say and do a lot in only a few lines of code. After their experiences in our introductory courses, where students learn Java and Ada, students feel as if freed when using Perl, Python, and PHP -- and, yes, sometimes even Scheme. Now, Scheme is hardly a "sloppy" language in the true sense of the word, and its strange parenthetical syntax is unforgiving. But it gives them power to say things that were inconvenient or darn near impossible in Java or Ada, in very little code. Students also come to appreciate Mumps, which they learn in our Information Storage and Retrieval course, for much the same reason.

    I'm looking forward to the rest of this course, both for its Scheme and for its programming languages content. What great material for a computer science student to learn. With any luck, they come to know that languages and compilers aren't magic, though sometimes they seem to do magic. But we computer scientists know the incantations that make them dance.

    Speaking of parentheses... Teaching Scheme again reminds me of just how compact Scheme's list notation can be. In class today, we discussed lists, quotation, and the common form of programs and data. Quoted lists of symbols carry all of the structural information we need to reason about many kinds of data. Contrast that with the verbosity of, oh, say, XML. (Do you remember this post from the past?) Just last month Brian Marick voiced a Lisp lover's lament on an agile testing mailing list. Someone had said:

    Note to Brian: Explain that XML is not the be-all and end-all of manual data formats. Then explain it's the easiest place to start, modulo your tool quality.

    To which Brian replied:

    I'll use XML instead of YAML as my concession to reality and penance for not talking about Perl. It will be hard not to launch into my old Lisp programmer's rant about how all the people who thought writing programs in tree structures with parentheses was unreadable now think writing data in tree structures with angle brackets and keyword arguments and quotes is somehow just the cat's pajamas.

    ( YAML is a lightweight, easy to read mark-up language for use with scripting languages. Brian might also bow to reality and not use LAML, a Lispish mark-up language for the web implemented in Scheme.)

    Ceci n'est pas une pipe

    Speaking of XML ... Here is my favorite bit of Angle Bracket Speak from the last month or so, courtesy of James Tauber:

    <pipe>Ceci n'est pas une pipe</pipe>

    So that's how I'm feeling about my course right now. Any more, when someone asks me how my class went today, I feel like the guy answering in this cartoon:

    Was it good for you? Read my blog.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    August 22, 2005 7:55 PM

    Negative Splits in Learning

    When I was in school, I was a pretty good student. The first class I ever remember not "getting it" immediately was Assembler 2, at the beginning of my sophomore year in college. The feeling was a bit scary.

    Our first assembler course was the "traditional" assembler course, in which we learned basic computer organization and the magic of MOV, JNE, and LD commands. But this was the early 1980s, and my school required a second course in this area. In the second quarter, we did more assembly language programming, but we also learned JCL and how to control an IBM 360/370 at a fine level of granularity from the moment execution began. Our card decks and key punches and assembly language programming became a bit more complicated.

    For whatever reason, the different levels of abstraction in the 360/370, when combined with the massive gulf between the problems we were programming and assembly language, left my head spinning. I didn't get it. Good student that I was, I kept plugging away, doing what we were taught in class, getting mostly correct answers and scoring well -- but not really understanding why. And when I did understand something, it did not feel natural. So I was worried, because I didn't get it. I didn't feel like I was in control.

    Then one morning, everything changed. There was no great epiphany, no flash of light. I was simply standing in the shower when I realized, ever so naturally, that it all made sense. I walked to class, and all was again right with the world.

    Since that time, I have had this sort of experience a few more times. The learning process moves slowly for a while, with the effort exerted exceeding the gains in understanding. Then, there seems to be an acceleration in understanding, like compound interest on the effort exerted earlier in the process. Once I got used to the idea that this would happen, it wasn't so threatening to me, and I was able to proceed with relative confidence deep into new ideas that I wasn't getting -- yet. But there is always a nagging thought at the back of my mind that this is it -- I've finally reached a point I won't be able to get.

    Runners have an idea that bears some resemblance to this upside-down learning process, called negative splits. The simplest form of negative splits is when the first half of a run is slower than the second half, but the idea generalizes to any number of splits. For the mathematically inclined among you, think of a monotonically decreasing sequence of split times.
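    For the programmers in the audience, the generalized condition is a one-line predicate. This sketch treats splits as a list of per-mile times, smaller meaning faster:

```python
def negative_splits(splits):
    """True if each split is faster (smaller) than the one before --
    i.e., the split times form a monotonically decreasing sequence."""
    return all(later < earlier for earlier, later in zip(splits, splits[1:]))
```

    The simple two-half case falls out as the special case of two splits.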

    In running, negative splits are generally considered a good thing. They are wise as a training strategy, as they ensure that you do not use up all of your energy too soon. Plus, they cause you to run faster at the end of the workout, which trains your body in the feeling of working hard when it is tired. They are often a wise racing strategy, too, for many of the same reasons. Many racing greats have set records running negative splits.

    As I have learned over time, there is a corresponding danger -- going too slow in the beginning. If I am trying to get better, I need to be careful not to create negative splits by virtue of not working hard enough early on. In a race, this can waste the opportunity to reach a goal. But for endurance training, it's hard to go too wrong with negative splits. The goal is to do the miles, and negative splits maximize the chance of completing the miles successfully.

    Recently I wrote about my happiness with some long distance workouts. You will now notice that both my 20-miler and my 22-miler were characterized by negative splits. Much of my happiness comes not so much from running fast as from running faster as those long runs progressed -- in some cases, with eight or more miles each faster than the one before.

    I've come to realize that negative splits in learning, of the sort I experienced in that second assembler course, are also often a good thing. Learning requires our minds to change, and that change often takes time. Changing habits, practices, and expectations is hard, because our minds are well-suited to remembering and employing patterns of thought as a strength. Some of us even have a harder time than others changing mental habits. I am one of those people.

    Runners use negative splits as an explicit strategy, but change often forces the learner to accept negative splits. They are simply cognitive and psychological reality. Coming to accept that, and treating negative splits as the way we sometimes must learn, can free us from a lot of fear and uncertainty. And surrendering that fear and uncertainty can help us learn even better.

    Students should keep this in mind. (And remember, we are all students.) Teachers should keep this in mind, too. We need to take negative splits into account for many or all of our students, especially with the most challenging or abstract material. This is one of the elements of teaching that calls for a little cheerleading, a little reassurance to students that they should hang in there with the tough stuff, with the promise of it all coming together sometime soon.

    Leaders of every sort -- even department heads -- need to keep this principle in mind, too. Introducing change to an organization has all the hallmarks of trying to learn new habits and practices. People feel fear and uncertainty. This is complicated by the fact that sometimes the change must come to personal interactions, which carry a lot of extra psychological baggage with them. Negative splits may be a wise strategy for the leader, introducing small, easier-to-master changes first and only accelerating when people gain confidence and realize their own strength.

    That old assembler course... I had one of my favorite CS professors ever in that course, one who taught me so much about systems, software, and professionalism. I still have the textbook on my bookshelf, too: System 360/370: Job Control Language and the Access Methods by Reino Hannula. Early in that semester, the name "Hannula" raised fear in students' hearts and was veritable poison to the ears. When we all got it, it became a beloved totem of our collective experience. And the greatest lesson we learned in the course probably wasn't IBM JCL -- as cool as it was -- but the idea of negative splits.


    Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Running, Teaching and Learning

    August 09, 2005 4:00 PM

    The Academic Future of Agile Methods

    After a week impersonating one of the Two Men and a Truck guys, I'm finally back to reading a bit. Brian Marick wrote several articles in the last week that caught my attention, and I'd like to comment on two now.

    The first talked about why Brian thinks the agile movement in software development is akin to the British cybernetics movement that began in the late 1940's. He points out three key similarities in the two:

    • a preference for performance (e.g., writing software) over representation (e.g., writing requirements documents)
    • constant adaptation to the environment as a means for achieving whole function
    • a fondness for surprises that arise when we build and play with complex artifacts

    I don't know much about British cybernetics, but I'm intrigued by the connections that Brian has drawn, especially when he says, "What I wanted to talk about is the fact that cybernetics fizzled. If we share its approaches, might we also share its fatal flaws?"

    My interest in this seemingly abstract connection would not surprise my Ph.D. advisor or any of the folks who knew me back in my grad school days -- especially my favorite philosophy professor. My research was in the area of knowledge-based systems, which naturally took me into the related areas of cognitive psychology and epistemology. My work with Dr. Hall led me to the American pragmatists -- primarily C. S. Peirce, William James, and John Dewey. I argued that the epistemology of the pragmatists, driven as it was by the instrumental value of knowledge for changing how we behave in particular contexts, was the most appropriate model for AI scientists to build upon, rather than the mathematical logic that dominates most of AI. My doctoral work on reasoning about legal arguments drew heavily on the pragmatic logic of Stephen Toulmin (whose book The Uses of Argument I strongly recommend, by the way).

    My interest in the connection between AI and pragmatic epistemology grew from a class paper into a proposed chapter in my dissertation. For a variety of reasons the chapter never made it into my dissertation, but my interest remains strong. While going through files as a part of my move last week, I came across my folder of drafts and notes on this. I would love to make time to write this up in a more complete form...

    Brian's second article gave up -- only temporarily, I hope -- on discussing how flaws in the agile movement threaten its advancement, but he did offer two suggestions for how agile folks might better ensure the long-term survival and effect of their work: produce a seminal undergraduate-level textbook and "take over a computer science department". Just how would these accomplish the goal?

    It's hard to overestimate the value of a great textbook, especially the one that reshapes how folks think about an area. I've written often about the ebbs and flows of the first course in CS and, while much of the history of CS1 can be told by tracing the changes in programming language used, perhaps more can be told by tracing the textbooks that changed CS1. I can think of several off-hand, most notably Dan McCracken's Fortran IV text and Nell Dale's Pascal text. The C++ hegemony in CS 1 didn't last long, and that may be due to the fact that no C++-based book ever caught fire with everyone. I think Rick Mercer's Computing Fundamentals with C++ made it possible for a lot of instructors and schools to teach a "soft" form of OOP in C++. Personally, I don't think we have seen the great Java-in-CS1 book yet, though I'm sure that the small army of authors who have written Java-in-CS1 books may think differently.

    Even for languages and approaches that will never dominate CS1, a great textbook can be a defining landmark. As far as OOP in CS1 goes, I think that Conner, Niguidula, and van Dam's Object-Oriented Programming in Pascal is still the benchmark. More recently, Felleisen et al.'s How to Design Programs stakes a major claim for how to teach introductory programming in a new way. Its approach is very different from traditional CS1 pedagogy, though, and it hasn't had a galvanizing effect on the world yet.

    An agile software engineering text could allow us agile folks to teach software engineering in a new and provocative way. Many of us are teaching such courses already when we can, often in the face of opposition from the "traditional" software engineers in our departments. (When I taught my course last fall, the software engineering faculty argued strongly that the course should not count as a software engineering course at all!) I know of only one agile software engineering text out there -- Steinberg and Palmer's Extreme Software Engineering -- but it is not positioned as the SE-complete text that Brian envisions.

    Closer to my own world, of course, is the need for a great patterns-oriented CS1 book of the sort some of us have been working on for a while. Such a text would almost certainly be more agile than the traditional CS1 text and so could provide a nice entry point for students to experience the benefits of an agile approach. We just haven't been able to put our money where our mouth is yet.

    On Brian's three notes:

    1. Using Refactoring and Test-Driven Development and various other readings can work well enough for an agile development course, but the need for a single text is still evident. First, having scattered materials is too much work for the more casual instructor charged with teaching "the agile course". Second, even together they do not provide the holistic view of software engineering required if this text is to convince CS faculty that it is sufficient for an introductory SE course.

    2. Yes and yes. Alternative forms of education such as apprenticeship may well change how we do some of our undergraduate curriculum, but no one should bet the survival of agile methods on the broad adoption of radically different teaching methods or curricula in the university. We are, as a whole, a conservative lot.

      That doesn't mean that some of us aren't trying. I'm chairing OOPSLA's Educators' Symposium again this year, and we are leading off our day with Dave West and Pam Rostal's Apprenticeship Agility in Academia, which promises a firestorm of thinking about how to teach CS -- and software development and agility and ... -- differently.

    3. I have used Bill Wake's Refactoring Workbook as a source of lab exercises for my students. It is a great resource, as is Bill's website. But it isn't a software engineering textbook.

    Why "take over a computer science department"? To create a critical mass of agile-leaning faculty who can support one another in restructuring curricula, developing courses, writing textbooks, experimenting with teaching methods, and thinking Big Thoughts. Being one among nine or 15 or 25 on a faculty means a lot of hard work selling a new idea and a lot of time isolated from the daily conversations that help new ideas to form and grow. OOPSLA and Agile 200x and SIGCSE only come once a year, after all. And Cedar Falls, Iowa, is far from everywhere when I need to have a conversation on agile software development right now. So is Raleigh, North Carolina, for that matter, when Laurie Williams could really use the sort of interaction that the MIT AI Lab has been offering AI scientists for 40 years.

    Accomplishing this takeover is an exercise left to the reader. It is a slow process, if even possible. But it can be done, when strong leaders of departments and colleges set their minds and resources to doing the job. It also requires a dose of luck.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    July 27, 2005 12:52 PM

    Getting Better

    Well, the Gringo did it again. Lance Armstrong finished his professional racing with a dominating yet relaxed performance in the 2005 Tour de France. (Our friend Santiago finished 51st. A disastrous Stage 14 did him in.)

    If you watched any of the interviews Lance gave over the last few days of the tour, you could see that he really enjoys racing, competing, and winning. He relished the challenge that the T-Mobile and CSC teams threw at him in the mountains. He would have enjoyed the raw challenge of scaling the Alps and Pyrenees had he ridden alone, but having teams of riders try to take his yellow jersey made the climbs all the sweeter. And his wonderful closing time trial to cement his edge placed a perfect exclamation on his tour career.

    Watching Lance these last few years has reminded me that being good is fun, as the folks at Creating Passionate Users recently wrote:

    My running coach told me a few years ago, "It's just more fun when you're faster." I wasn't sure what he meant; I was just trying to get back in shape and do a decent 10K. But once I started training with much better runners, and began pushing myself and keeping my splits and timing my speed work... it was more fun. And it wasn't like I had any illusion of being competitive. Being better is just more fun.

    You mean running repeats is fun? Yes.

    No, not every moment. A few weeks ago, I had one of those days that, in isolation, wasn't as much fun as I would have liked. Getting back to form in my 1200m interval training has been hard this season. The week after the not-so-fun day, I managed to do 6x1200m, but the last repeat was slower than I had wanted it to be. Last week, I just wasn't motivated to run fast, so I skipped the work-out in favor of a steadier 9.5-mile run.

    My fatigue last week was a surprise until I sat down to compute my paces for my runs the previous week. In addition to the 6x1200m work-out on Wednesday and a 10-mile tempo run on Friday, I had done an 18.5-miler on Sunday. No big deal... until I saw that I ran it in 2:25:37 -- an average pace of 8:01 min/mile. That is my marathon goal pace for October!

    Being better is more fun. I have no illusions that I am fast, or that I will challenge even for an age group prize in the Twin Cities. This is about challenging myself, and getting better. It feels good.

    This morning, I hit the track at 5:15 AM, earlier than usual so that I could get a bit more done at the office before a four-day weekend mini-vacation. It was cool, unusual after a month of 90-degree highs and 70-degree lows, but perfect for a work-out. As the sun rose, I knew that today was finally a good day of repeats. As I started my fifth, I knew that I would reach the day's goal. In the end, 7x1200m, all in target time or better, with the seventh repeat being the fastest of all.

    Being better is more fun. But without the local moments of less fun a few weeks ago, I wouldn't have been better today -- either in setting goals or achieving them. As I quoted Merlin Mann a few days ago in the context of knowing ourselves as well as we know our tools:

    Making improvements means change and often pain along the way. It's hard to get better, and good tools like these can definitely ease the journey. I guess I'm proposing you try to understand yourself at least as well as the widget you're hoping will turn things around.

    One interview moment with Lance Armstrong stuck with me last weekend. When asked what he would miss most about racing, he said that he would never again be in as good a shape as he is today -- and he would miss that most of all. Not the yellow jerseys or the wins or the accolades and adulation that come with them, but the simple state of being in the best possible shape. I know the feeling. Of course, I have an advantage over Lance -- I didn't start working hard at my running until a little more than two years ago, when I started training for my first marathon. That means my body, while old, is still relatively fresh. So is my mind. I hope to keep getting better for a few more years...


    Posted by Eugene Wallingford | Permalink | Categories: Running, Teaching and Learning

    July 26, 2005 10:38 AM

    Computer Science and Liberal Education

    University-level instructors of mathematics and science have become increasingly concerned about the level of preparation for their disciplines that students receive in elementary and high school. For example, Tall, Dark, and Mysterious and Learning Curves both describe life in the trenches teaching calculus and other entry-level math courses at the university. We in computer science see this problem manifest in two forms. We, too, encounter students who are academically unprepared for the rigors of computing courses. However, unlike general education math and science, CS courses are usually not a required part of the students' curriculum. As a result, we begin to lose students as they find computing too difficult, or at least difficult enough that it's not as much fun as they had hoped. I believe that this is one of the ingredients in the decline of enrollment in computing majors and minors being experienced by so many universities.

    Yesterday, via Uncertain Principles, I encountered Matthew Crawford's essay, Science Education and Liberal Education. Crawford writes beautifully on the essential role of science in the liberal education, the essential role of liberal education in democracy, and the vital importance of teaching science -- and presumably mathematics -- as worthy of study in their own right, independent of their utilitarian value to society in the form of technology. A significant portion of the essay catalogs the shortcomings of the typical high school physics textbook, with its pandering to application and cultural relevance. I shall not attempt to repeat the essay's content here. Go read it for yourself, and enjoy the prose of a person who both understands science and knows how to write with force.

    Notwithstanding Crawford's apparent assertion that the pure sciences are noble and that computing is somehow base technology, I think that what he says about physics is equally true of computing. Perhaps our problem is that we in computer science too often allow our discipline to be presented merely as technology and not as an intellectual discipline of depth and beauty. As I've written before, we need to share the thrill that computing brings us -- not the minutiae of programming or the features of the latest operating system or chip set. Surely, these things are important to us, but they are important to us because we already love computing for its depth and beauty.

    Just yesterday, I commented abjectly in e-mail to a colleague that only twelve incoming freshmen had declared majors in computing during this summer's orientation sessions at our university. He wrote back:

    I think we need to move away from presenting the CS major as a path to being a software drone in competition with India. We need to present it as leading edge, discovery based -- that is -- a science. I think too many students now see it as a software engineering nightmare -- a 40 year career of carefully punctuated cookbook code. Too boring for words.

    Perhaps it is time for us to shift our mode of thought in computer science education toward the model of liberal education à la the sciences. We might find that the number of students interested in the discipline will rise if we proudly teach computing as an intellectual discipline. In any case, I suspect that the level of interest and commitment of the students who do study computing will rise when we challenge them and address their need to do something intellectually worthwhile with their time and energy.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    July 18, 2005 11:32 AM

    Lessons from 13 Books

    I've recently run across recommendations for Leonard Koren's Wabi-Sabi: for Artists, Designers, Poets & Philosophers in several different places, so I thought I'd give it a read. My local libraries don't have it, so I'm still waiting. While looking, though, I saw another book by Koren, called 13 Books : (Notes on the Design, Construction & Marketing of my Last...). The title was intriguing, so I looked for it in the stacks. The back cover kept my attention, so I decided to read it this weekend. It contained the opening sentences of the book:

    Authorship unavoidably implies a certain degree of expertise about the subject you are writing on. This has always troubled me because, although I have written numerous books on various subjects, I've never really considered myself an expert about anything. Recently, however, I had an encouraging realization. Surely I must know more about the making of the 13 books

    ... that he has written than anyone else! So he wrote this book, which consists of a discussion of each of his previous works, including his source of inspiration for the work, the greatest difficulty he faced in producing it, and one enduring lesson he learned from the experience.

    (This book ties back nicely to two previous entries here. First, it adds to my league-leading total for being the first reader of a book in my university library. Second, it was a gift to the library by Roy Behrens, a design professor here whose Ballast Quarterly Review I mentioned a few months ago.)

    13 Books is clearly the product of a graphic designer who likes to explore the interplay between text and graphic elements and who likes to make atypical books. It's laid out in a style that may distract some readers. But, within the self-referential narrative, I found some of Koren's insights to be valuable beyond his experience, in terms of software, creativity, and writing.

    On software

    Projects ultimately develop their own identity, at which point the creator has a limited role in determining their shape. Koren learned this when he felt compelled to include a person in one of his books, despite the fact that he didn't like the guy, because it was essential to the integrity of the project. I feel something similar when writing programs in a test-code-refactor rhythm. Whether or not I like a particular class or construct, sometimes I'm compelled to create or retain it. The code is telling me something about its own identity.

    Just from its brief appearance in this book, I can see how the idea of wabi-sabi found an eager audience with software developers and especially XPers. Koren defines wabi-sabi as "a beauty of things imperfect, impermanent, and incomplete... a beauty of things modest and humble..." In the face of changing requirements and user preferences, we must recognize that our software is ever-changing. If our sense of beauty is bound up in its final state, then we are destined to design software in a way that aims at a perfect end -- only to see the code break down when the world around it changes. We need a more dynamic sense of beauty, one that recognizes beauty in the work-in-progress, in the system that needs a few more features to be truly useful, in the program whose refactoring is part of its essence.

    Later in the book, Koren laments that making paper books is "retrograde" to his tech friends. He then says, "And the concept of wabi-sabi, the stated antithesis of digital this and digital that, was, by extrapolation, of negligible cultural relevance." I see no reason that wabi-sabi stands in opposition to digital creations. I sense it in my programs.

    Finally, here is my favorite quote from the book that is unwittingly about software:

    The problem with bad craftsmanship is that it needlessly distracts from the purity of your communication; it draws away energy and attention; it raises questions in the reader's mind that shouldn't be there.

    Koren writes of font, layout, covers, and bindings. But he could just as easily be writing of variable names, the indentation of code, comments, ...

    On creativity and learning

    At least one of the thirteen books was inspired by Koren's rummaging through his old files, aimlessly looking at photos. We've seen this advice before, even more compellingly, in Twyla Tharp's "start with a box". (That reminds me: I've been meaning to write up a more complete essay on that book...)

    Taking on projects for reasons of perceived marketability or laziness may sometimes make sense, but not if your goal is to learn:

    The ideas for both books came too quickly and easily, and there was no subsequent concept of development. In my future books I would need to challenge myself more.

    In building software systems, in learning new languages, in adopting new techniques -- the challenge is where you grow.

    In retrospect, Koren categorized his sources of inspiration for his books. The split is instructive: 40% were the next obvious step in a process, 30% came from hard work, and 30% were the result of "epiphanies from out of the blue". This means that fully two-thirds of his books resulted from the work of being a creator, not from a lightning bolt. Relying on flashes of inspiration is a recipe for slow progress -- probably no progress at all, because I believe that those flashes ultimately flow from the mind made ready by work.

    On writing and publishing

    Koren is a graphic designer for whom books are the preferred medium. Throughout his career, he has often been dissatisfied with the power imbalance between creators and publishers. He is constantly on the look-out for a new way to publish. For many, the web has opened new avenues for publishing books, articles, and software with little or no interference from a publisher. The real-time connectedness of the web has even made possible new modes of publication such as the blog, with conversations as a medium for creating and sharing ideas. Blogs are often characterized as being ephemeral and light, but I think that we'll all be referring to Martin Fowler's essays and the Pragmatic Programmers' articles on their blogs for years to come.

    While Koren may remain a bookmaker, and despite his comments against digital technology as creative medium, I think his jangling, cross-linked, quick-hit style would play well in a web site. It might be interesting to see him produce an on-line work that marries the two. Heck, it's been done with PowerPoint.

    As someone who has been reflecting on a year of writing this blog, I certainly recognize the truth in this statement:

    A book need be grand neither in scale nor subject matter to find an audience.

    Waiting until I have something grand to say is a sure way to paralyze myself.

    Finally, Koren offered this as the enduring lesson he learned in producing his book Useful Ideas from Japan:

    Reducing topical information to abbreviated humorous tidbits is a road to popular cultural resonance.

    It seems that Koren has the spirit of a blogger after all.


    Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

    July 14, 2005 6:48 PM

    The Tipping Point for Agile Software Development

    While out running today, I had one of those flashes of inspiration -- a crystal-clear, wholly-formed thought -- for how I might introduce agile software development in an undergraduate course. In this image, we begin the first day of a course with a software project to implement. The first thing we do is work as a group to decompose the project into chunks that the students believe they can implement in one day. No Planning Game jargon; just a bunch of students working with me to design the course project and the programming assignments they will have to do. On another day, we could work on one of the day-long projects, breaking it down further and writing a test for each small piece before we write any code. No TDD or JUnit jargon; just a bunch of folks writing short test programs for code they think they understand how to write.

    I'm not sure why this flash happened today. I'm not slated to teach agile software development per se for a while. The last time I taught the course offers some reason that my mind would seek a new way to open it. Then, I *talked about* agile development first, and we began our work on agile practices with test-first development. At the end of the course, I felt as if too few of the students had grokked the idea, at least in part because they never felt motivated to give it a reasonable shot. I don't mean that the students necessarily started with a desire not to get it; rather, they never felt a strong internal desire to endure the pain of changing their habits for building software. And old habits die hard, if at all.

    This feeling brings to mind something I read a couple of weeks ago:

    People don't choose rationally to listen to your message and then have a feeling about it. They choose to listen to your message because they have a feeling about it.

    University instructors and industrial trainers should keep this thought close to their minds. The folks at Creating Passionate Users know that it is hard to spark passion in readers or product users when they have no particular feeling for the work. The same is true for many students in a course. I may be able to draw in a few students slowly over time, as things click in their minds, but for most I need to help them want to learn and know and do. This is especially true of helping people to change deeply ingrained habits, such as how they develop software.

    Then what should I read today but this quote from Malcolm Gladwell's The Tipping Point, over at Agile Advice, presented in just the context of my inspirational moment:

    ...the content of the message matters, too. And the specific quality that a message needs to be successful is the quality of "stickiness." Is the message - or the food, or the movie, or the product - memorable? Is it so memorable, in fact, that it can create change, that it can spur someone to action?

    I need to find and communicate better the stickiness of agile development. My running thought seems closer to agile development's stickiness than what I've done before.

    If my thoughts were controlled more by my rational side, I would be having flashes of inspiration for teaching my programming languages course this fall. What is the stickiness of functional programming, of Scheme? How can I shape a mindset in my students whereby they feel passion to learn these new ideas and change how they think about programs?

    Maybe I need to go for another run and cross my fingers.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    July 06, 2005 8:23 PM

    Too Many Variables

    Do you remember this old Billy Crystal/Christopher Guest skit from Saturday Night Live? The guys were janitors. When they ran into one another, they would take turns describing to one another accidents that had happened to them. The first incidents in the exchange were the sort that could happen to a working guy, such as "You know when you're working in the shop and you hit your thumb with a hammer?" But as the skit progresses, the accidents become incidents of strange, self-inflicted pain that could never happen accidentally, such as "You know when you stick your head inside your car door and just slam the door right on your head? That really hurts." The unforgettable catch phrase of the skit was this classic: "I *hate* when that happens."

    Interval training on a track can be like that. I imagine most non-runners listening to me tell tales of my track work-outs and thinking of me the way we all thought of Guest and Crystal at the end of their skits. "I *hate* when that happens." Well, duh. It's all quite funny, unless you're the guy with his head crushed by the car door.

    When I run intervals, or repeats, I am trying to work my body at the edge of its capabilities. As a result, there is little margin for error or unexpectedness. When things don't go as well as expected, the workout can feel something like slamming a car door on my head -- voluntarily, at 5:30 or 6:00 in the morning. I hate when that happens.

    Doing my 6x1200m workout this morning, I re-learned what all good experimental scientists know: too many free variables make it difficult to know why what happened happened when what happened isn't what you expected.

    What happened? I came up way short today. I was trying to run each repeat in 4:52 or less. The first was tough but right on mark. The second slowed down by 3 seconds and felt bad enough that I decided to jog lightly through the third. When I ran the fourth, I slowed down another 2 seconds and realized that I was not going to be able to meet my goal for the day. In place of the fifth and sixth repeats, I chose to alternate faster laps with slower ones, in hopes of not turning the day into just another slow jog.

    But why did this happen? Here are some possibilities:

    • I tried to do too many repeats.
    • I tried to run each repeat too fast.
    • I ran too short a recovery period in between repeats.
    • I wasn't ready to run my repeats outdoors on the 400m track just yet.
    • My repeats were too long because I was not running the inside lane of the track.
    • I was still feeling lingering effects of my recent half marathon, two hard workouts last week too soon after the half, and a moderately fast 14-miler on Sunday.

    Running outside itself wasn't likely the problem, though the nature of the feedback is different. Attempting six repeats wasn't likely the problem, either, because the problem wasn't with Repeat 6; it was Repeat 3, or even #2.

    I think the most likely explanation is the combination of three variables. First, my legs are still tired from last week. Second, I tried running 400m recoveries instead of the more ordinary 600m (50% of the repeat distance). I will try to remedy those next week.

    Finally, and perhaps most important, I now realize that I was running repeats longer than 1200m. Last week's 4:53 repeats were right at my target distance, because I was running to lane markers on the indoor track. This morning, I was running three laps in Lane 4 of the 400m outdoor track. Four laps in Lane 4 is actually about 1.05 miles, so my three laps work out as a little over 1266m. That extra 66m is enormous when it comes to running at my limits. To do my target 1200m pace, I should have allowed myself an extra 16 seconds on each repeat!
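    For what it's worth, the arithmetic is easy to check. Here is a quick sketch in Python. It assumes the standard 1.22m lane width used on most tracks (an assumption of mine; the post instead works from four laps in Lane 4 being about 1.05 miles), but it lands on roughly the same numbers:

    ```python
    import math

    LANE_WIDTH = 1.22  # meters per lane; standard width, assumed here

    def lap_distance(lane):
        """One lap in the given lane of a 400m track (lane 1 = 400m).

        Each lane farther out adds 2 * pi * LANE_WIDTH per lap, since
        the runner traces circles of a larger radius on the curves.
        """
        return 400 + 2 * math.pi * LANE_WIDTH * (lane - 1)

    # Three laps in Lane 4: a bit under 1270m, not 1200m.
    three_laps_lane4 = 3 * lap_distance(4)

    # Target pace: 1200m in 4:52, i.e., 292 seconds.
    pace = 292 / 1200                          # seconds per meter
    extra = (three_laps_lane4 - 1200) * pace   # time "owed" for the extra distance
    ```

    The extra distance works out to roughly 16-17 seconds at target pace, which matches the post's estimate.
    
    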

    The idea that my laps were longer than planned didn't occur to me at all until I was out on the track, slogging through laps, asking myself, "But why?" I *hate* when that happens.

    I should have taken the feedback from my body at face value and adapted my pace. Whatever the reason, I was not going to be able to do 1:37 laps, so I should have eased off to a pace that I could sustain. Instead, I despaired a bit and gave up on a couple of the repeats. Note to self: Feedback is good; use it to get better.

    Multiply these three factors together, and you get a workout that does not go as planned.

    Then again, in retrospect, maybe my times weren't so bad after all. After crunching the numbers, I think that I can safely conclude that I was simply trying to run too fast.

    Unfortunately, things don't usually turn out so tidily. Ordinarily, I wouldn't know for certain the reason that the workout did not go as planned, because I put too many variables into play. What I don't want to do is use my good fortune this week as rationalization for making the same mistake next week.

    My excuse, er, reason, for changing so many things at once is that training time is precious. From last Sunday, I had exactly 13 weeks until the Twin Cities Marathon. If I hope to meet my race goals, I need to make steady and rapid progress in my workouts.

    That is just the sort of reason that we software developers use to convince ourselves and our clients that we need to shove more and more features into the next release. It's the same excuse that teachers tell themselves and their students when they try to squeeze just one more topic into the already crowded syllabus of a course. The results are similar, too. The developers and instructors often fail to achieve their goals; software clients and students are left holding the bag. And then in the end, we are left asking ourselves why.

    Of course, this morning's experience also taught me another lesson: do my homework better when it comes to computing repeat distances on the track. "Do your homework" is, of course, also a fine piece of advice for software developers, software clients, teachers, and students alike. :-)


    Posted by Eugene Wallingford | Permalink | Categories: Running, Software Development, Teaching and Learning

    July 06, 2005 12:53 PM

    What Do I Know About Teaching Programming?

    A week or so ago, I ran across Adam Connor's blog entry What do we know about teaching programming skills?. I wanted to respond immediately, either in a comment there or in a more complete essay here. But then I realized: I don't have anything succinct to say. As much as I think about teaching programming, and discuss it with friends, and write about facets of it here, I don't have a broadside that I can offer to folks like Adam who seek a concise, straightforward introduction to what we know about teaching programming. This realization disappointed me.

    For now, I can offer only a few points that seem to be recurring themes in how I understand how to teach programming. Later I will write up something that Adam and people in his position can use right away. Whether that will be in time to help Adam much, I don't know...

    In no particular order:

    • Concrete examples matter.

    • Practice, practice, practice!

    • Reading code helps us to write better code.

    • Expert programmers work top-down and bottom-up, but novices seem to learn best when working bottom-up first.

    • Students learn patterns of programs, whether or not you think about them or teach them explicitly. You are better off designing your examples and instruction around the patterns you want them to learn, and then helping them to name the patterns. A side benefit: you and your students will share a growing design vocabulary!

    • Read what you can by folks like Eliot Soloway and Robert Rist to learn about the educational psychology of programming.

    That's a start. You know what they say: start small, then grow.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    July 04, 2005 8:03 PM

    The Lesson the Gringo Taught Me

    The beginning of the Tour de France on Saturday reminded me of this diary entry by Santiago Botero, a Colombian rider in cycling's Tour de France, the most grueling of athletic events.

    There I am all alone with my bike. I know of only two riders ahead of me as I near the end of the second climb on what most riders consider the third worst mountain stage in the Tour. I say 'most riders' because I do not fear mountains. After all, our country is nothing but mountains. I train year-round in the mountains. I am the national champion from a country that is nothing but mountains. I trail only my teammate, Fernando Escartin, and a Swiss rider. Pantani, one of my rival climbers, and the Gringo Armstrong are in the Peleton about five minutes behind me. I am climbing on such a steep portion of the mountain that if I were to stop pedaling, I will fall backward. Even for a world class climber, this is a painful and slow process. I am in my upright position pedaling at a steady pace, willing myself to finish this climb so I can conserve my energy for the final climb of the day. The Kelme team leader radios to me that the Gringo has left the Peleton by himself and that they can no longer see him.

    I recall thinking 'the Gringo cannot catch me by himself'. A short while later, I hear the gears on another bicycle. Within seconds, the Gringo is next to me - riding in the seated position, smiling at me. He was only next to me for a few seconds and he said nothing - he only smiled and then proceeded up the mountain as if he were pedaling downhill. For the next several minutes, I could only think of one thing - his smile. His smile told me everything. I kept thinking that surely he is in as much agony as me, perhaps he was standing and struggling up the mountain as I was and he only sat down to pass me and discourage me. He has to be playing games with me.

    Not possible. The truth is that his smile said everything that his lips did not. His smile said to me, 'I was training while you were sleeping, Santiago'. It also said, 'I won this tour four months ago, while you were deciding what bike frame to use in the Tour. I trained harder than you did, Santiago. I don't know if I am better than you, but I have outworked you and right now, you cannot do anything about it. Enjoy your ride, Santiago. See you in Paris.'

    Obviously, the Gringo did not state any of this. But his smile did dispel a bad rumor among the riders on the tour. The rumor that surfaced as we began the Prologue several days ago told us that the Gringo had gotten soft. His wife had given birth to his first child and he had won the most difficult race in the world - he had no desire to race, to win. I imagine that his smile turned to laughter once he was far enough not to embarrass me. The Gringo has class, but he heard the rumors - he will probably laugh all the way to Paris. He is a great champion and I must train harder. I am not content to be a great climber, I want to be the best.

    I learned much from the Gringo in the mountains. I will never forget the helpless feeling I had yesterday. If I ever become an international champion, I will always remember the lesson the Gringo taught me.

    Botero wrote that entry back in 2000, the year after Armstrong won his first Tour. Armstrong went on to win that Tour as well, on his way to a record six in a row. This year, Armstrong rides his last Tour de France, seeking to retire as a champion yet again. By all accounts, he still feels the fire to win. More important, he felt the fire to prepare to win all winter and spring.

    Botero finished sixth in the 2000 Tour. He is racing this year's Tour, too, and his recent win at Romandie signals that he may have what it takes to challenge again for the title in France.

    Botero learned what separates him from Armstrong from the master himself. The will to prepare to win comes from within, and sometimes it's hard to appreciate until you see its power first hand. Even then, achieving excellence exacts a heavy commitment. How much do you want it?


    Posted by Eugene Wallingford | Permalink | Categories: Running, Teaching and Learning

    June 29, 2005 8:40 AM

    Learning from the Masters

    This spring I was asked to participate on a panel at XP2005, which recently wrapped up in the UK. This panel was on agile practices in education, and as you may guess I would have enjoyed sharing some of my ideas and learning from the other panelists and from the audience. Besides, I've not yet been to any of the agile software development conferences, and this seemed like a great opportunity. Unfortunately, work and family duties kept me home for what is turning out to be a mostly at-home summer.

    In lieu of attending XP2005, I've enjoyed reading blog reports of the goings-on. One of the highlights seems to have been Laurent Bossavit's Coding Dojo workshop. I can't say that I'm surprised. I've been reading Laurent's blog, Incipient(thoughts), for a while and exchanging occasional e-mail messages with him about software development, teaching, and learning. He has some neat ideas about learning how to develop software through communal practice and reflection, and he is putting those ideas into practice with his dojo.

    The Coding Dojo workshop inspired Uncle Bob to write about the notion of repeated practice of simple exercises. Practice has long been a theme of my blog, going back to one of my earliest posts. In particular, I have written several times about relatively small exercises that Joe Bergin and I call etudes, after the compositions that musicians practice for their own sake, to develop technical skills. The same idea shows up in an even more obviously physical metaphor in Pragmatic Dave's programming katas.

    The kata metaphor reminds us of the importance of repetition. As Dave wrote in another essay, students of the martial arts repeat basic sequences of moves for hours on end. After mastering these artificial sequences, the students move on to "kumite", or organized sparring under the supervision of a master. Kumite gives the student an opportunity to assemble sequences of basic moves into sequences that are meaningful in combat.

    Repeating programming etudes can offer a similar experience to the student programmer. My re-reading of Dave's article has me thinking about the value of creating programming etudes at two levels, one that exercises "basic moves" and one that gives the student an opportunity to assemble sequences of basic moves in the context of a more open-ended problem.

    But the pearl in my post-XP2005 reading hasn't been so much the katas or etudes themselves, but one of the ideas embedded in their practice: the act of emulating a master. The martial arts student imitates a master in the kata sequences; the piano student imitates a master in playing Chopin's etudes. The practice of emulating a master as a means to developing technical proficiency is ubiquitous in the art world. Renaissance painters learned their skills by emulating the masters to whom they were apprenticed. Writers often advise novices to imitate the voice or style of a writer they admire as a way to ingrain how to have a voice or follow a style. Rather than creating a mindless copycat, this practice allows the student to develop her own voice, to find or develop a style that suits her unique talents. Emulating the master constrains the student, which frees her to focus on the elements of the craft without the burden of speaking her own voice or being labeled as "derivative".

    Uncle Bob writes of how this idea means just as much in the abstract world of software design:

    Michael Feathers has long pondered the concept of "Design Sense". Good designers have a "sense" for design. They can convert a set of requirements into a design with little or no effort. It's as though their minds were wired to translate requirements to design. They can "feel" when a design is good or bad. They somehow intrinsically know which design alternatives to take at which point.

    Perhaps the best way to acquire "Design Sense" is to find someone who has it, put your fingers on top of theirs, put your eyeballs right behind theirs, and follow along as they design something. Learning a kata may be one way of accomplishing this.

    Watching someone solve a kata in a workshop can give you this sense. Participating in a workshop with a master, perhaps as programming partner, perhaps as supervisor, can, too.

    The idea isn't limited to software design. Emulating a master is a great way to learn a new programming language. About a month ago, someone on the domain-driven design mailing list asked about learning a new language:

    So assuming someone did want to learn to think differently, what would you go with? Ruby, Python, Smalltalk?

    Ralph Johnson's answer echoed the importance of working with a master:

    I prefer Smalltalk. But it doesn't matter what I prefer. You should choose a language based on who is around you. Do you know somebody who is a fan of one of these languages? Could you talk regularly with this person? Better yet, could you do a project with this person?

    By far the best way to learn a language is to work with an expert in it. You should pick a language based on people who you know. One expert is all it takes, but you need one.

    The best situation is where you work regularly with the expert on a project using the language, even if it is only every Thursday night. It would be almost as good if you would work on the project on your own but bring code samples to the expert when you have lunch twice a week.

    It is possible to learn a language on your own, but it takes a long time to learn the spirit of a language unless you interact with experts.

    Smalltalk or Scheme may be the best in some objective (or entirely subjective!) sense, but unless you can work with an expert... it may not be the right language for you, at least right now.

    As a student programmer -- and aren't we all? -- find a person to whom you can "apprentice" yourself. Work on projects with your "master", and emulate his style. Imitate not only high-level design style but also those little habits that seem idiosyncratic and unimportant: name your files and variables in the same way; start your programming sessions with the same rituals. You don't have to retain all of these habits forever, and you almost certainly won't. But in emulating the master you will learn and internalize patterns of practice, patterns of thinking, and, yes, patterns of design and programming. You'll internalize them through repetition in the context of real problems and real programs, which give the patterns the richness and connectedness that make them valuable.

    After lots of practice, you can begin to reflect on what you've learned and to create your own style and habits. In emulating a master first, though, you will have a chance to see deeper into the mind and actions of someone who understands, and to use what you see to begin to understand yourself better, without the pressure of needing to have a style of your own yet.

    If you are a computer scientist rather than a programmer, you can do much the same thing. Grad students have been doing this as long as there have been grad students. But in these days of the open-source software revolution, any programmer with a desire to learn has ample opportunity to go beyond the Powerbook on his desk. Join an open-source project and interact with a community of experts and learners -- and their code.

    And we still have open to us a more traditional avenue, in even greater abundance: literature. Seek out a writer whose books and articles can serve in an expert's stead. Knuth, Floyd, Beck, Fowler... the list goes on and on. All can teach you through their prose and their code.

    Knowing and doing go hand in hand. Emulating the masters is an essential part of the path.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

    June 24, 2005 8:26 AM

    Situational Leadership ®

    Update (08/22/08): Situational Leadership ® is a registered trademark of the Center for Leadership Studies, Inc. This accounts for use of the ® symbol and the capitalization of the phrase throughout the entry.

    The more I think, read, and learn about management and leadership, the more I believe that managers will succeed best if they invert the traditional pyramid that has the president of a company at the top, managers in the middle, and employees on the bottom. I wrote about my credo of leader-as-servant in a recent essay. Now consider this quote:

    ... managers should work for their people, ... and not the reverse. ... If you think that your people are responsible and that your job is to be responsive, you really work hard to provide them with the resources and working conditions they need to accomplish the goals you've agreed to. You realize your job is not to do all the work yourself or sit back and wait to 'catch them doing something wrong', but to roll up your sleeves and help them win. If they win, you win.

    Leadership and the One-Minute Manager

    This is from Page 18 of "Leadership and the One-Minute Manager", by Kenneth Blanchard, Patricia Zigarmi, and Drea Zigarmi. I've not read Blanchard's "One-Minute Manager" yet, but I ran across this thin volume in the stacks while checking out some other books on leadership and figured it would be worth a read. I've read some of Blanchard's stuff, and I find his staged storytelling approach an enjoyable way to learn material that might otherwise be pretty dry.

    This book extends the principles of the one-minute manager to "situational leadership", the idea that managers should adapt how they work with employees according to the context, specifically the task to be done and the employee's level of competence and confidence with the task. This approach emphasizes communication between manager and the employee, which allows the two to agree on a leadership style most suitable for the situation.

    On this approach, leadership style is "how a manager behaves, over time, when trying to influence the performance of others". The authors are careful to remind me that my style is not what I say my style is, or how I mean to behave, but how others say I behave. This reminds me of a basic principle of object-oriented programming: a message is interpreted in the context of the receiver, not the sender. The same principle applies to human communication.

    In order to be a good situational leader, a manager must have three skills:

    • diagnosis, the ability to determine the needs of employees according to their development level with the task at hand
    • flexibility, the ability to apply one of four different styles, depending on the diagnosis, and
    • contracting, the ability to work with the employee to set the parameters of their interaction

    Diagnosis

    An employee's development level and performance are a product of two variables, which Blanchard calls 'competence' and 'commitment'. I tend to think of these variables in terms of skill and confidence. Whatever the terms, managers would do well to think about these variables when facing inadequate performance in the workplace. Performance problems are usually competence problems, commitment problems, or both.

    Blanchard asserts that most people follow a typical path through this two-dimensional space on the way from being a novice to being an expert:

    • Level 1, low competence and high commitment
    • Level 2, some competence and low commitment
    • Level 3, high competence and variable commitment
    • Level 4, high competence and high commitment

    The odd change in commitment level follows the psychology of learners. Novices begin with a high level of commitment, eager to learn and willing to be instructed. But, "as people's skills grow, their confidence and motivation drop". Why? It's easy to become discouraged when you realize just how much you have left to learn, how much work it will take to become skilled. After commitment bottoms out, it can recover as the learner attains a higher level of competence and begins to see the proverbial light at the end of the tunnel. Blanchard's whole approach to Situational Leadership ® is for the leader to adapt her style to the changes of the developing learner in a way that maximizes the chance that the learner's commitment level recovers and grows stronger over time. That is essential if the manager hopes for the learner to stick with the task long enough to achieve a high level of competence.

    I accept this model as useful because I have observed the same progression in my students, especially when it comes to problem solving and programming. They begin a course eager to do almost anything I ask. As they learn enough skills to challenge harder problems, they begin to realize how much more they have to learn in order to be able to do a really good job. Without the right sort of guidance, university students can lose heart, give up, change their majors. With the right sort of guidance, they can reverse emotional course and become powerful problem solvers and programmers. How can an instructor provide the right sort of guidance?

    [Image: balancing on the high wire]

    As an aside, I have a feeling I'll be approaching Development Level 2 soon with my new administrative tasks. At times, I have a glimpse of how hard it will be to manage all the information coming at me and to balance the number of different activities and egos and goals that I encounter. Maybe a little self-awareness will help me combat any sudden desire to cry out as Job. :-)

    Flexibility

    The way a manager or instructor can provide the right sort of guidance to a particular employee at a particular point in time is to choose a leadership style that fits the context. There are four basic styles of leadership available to the situational leader:

    • directing
    • coaching
    • supporting
    • delegating

    Like development levels, the leadership styles are a combination of two variables: directive behavior, by which the manager carefully structures the environment and closely supervises the employee, and supportive behavior, by which the manager praises, listens, and helps the employee. The idea is to choose the right combination to meet the needs of the employee on a particular task at a particular moment in time. As the authors say, "Leaders need to do what the people they supervise can't do for themselves at the present moment."

    We can think of the four leadership styles as occupying quadrants in a 2x2 grid:

    [Image: the four styles of Situational Leadership, as quadrants of a 2x2 grid]

    To match the unusual path that most people follow through the levels of development, a situational leader needs to follow a particular path through the two-dimensional space of leadership styles, in the order listed above: directing for Level 1, coaching for Level 2, supporting for Level 3, and delegating for Level 4. While the actual progression will depend on the specific employee, one can imagine the stereotypical path to be a bell-like curve from the lower left of the grid, up through the top quadrants, down through the lower right of the grid:

    • As the employee's competence increases, the leader slowly reduces the degree of directive behavior, eventually to 0.

    • As the employee's confidence wanes, the leader slowly ratchets up the degree of support provided. This helps the employee get through periods of doubt and eventually regain a confidence born out of competence. When that confidence arrives, the leader slowly reduces the degree of support provided until the employee is managing himself.

    This progression can help students, too. They usually need a level of coaching and support that increases throughout the first year, because they can become discouraged when their skills don't develop as quickly as they had hoped. At the same time, the instructor continues to structure the course carefully and direct their actions, though as the year progresses the students can begin to shoulder more of the burden for deciding what to do when.

    Communication remains important. Managers have to tell employees what they are doing, or they are likely to misinterpret the intention behind the leadership style. An employee who is being directed will often interpret the manager's constant attention as "I don't trust you", while an employee who is being delegated to may interpret the manager's lack of attention as "I don't care about you". When people lack information, they will fill in the blanks for themselves.

    Contracting

    That's where contracting comes in. The situational leader communicates with the people he supervises throughout the entire process of diagnosis and style selection. The person being supervised should be in agreement with the supervisor on the development level and thus the leadership style. When they disagree on development level, the supervisor should generally defer to the employee, on the notion that the employee knows himself best. The manager may then contract for a bit closer supervision over the short-term in order to confirm the diagnosis. When they disagree on leadership style, the employee should generally defer to the manager, who has developed experience in working with employees flexibly. Again, though, the manager may then agree to revisit the choice in a short time and make adjustments based on the employee's performance and comfort.

    Communication. Feedback. Adaptation. This all sounds 'agile' to me.

    ....

    Blanchard relates the Situational Leadership ® approach to "teaching to the exam":

    Once your people are clear on their goals..., it's your job to do everything you can to help them accomplish those goals ... so that when it comes to performance evaluation ..., they get high ratings....

    For Blanchard, teaching to the exam is a good idea, the right path to organizational and personal success. I tend to agree, as long as the leader's ultimate goal is to help people to develop into high competence, high commitment performers. The goal isn't to pass the exam; the goal is to learn. The leader's job is to help the learner succeed.

    Like the agile methods, this process is low in ceremony but high in individual discipline, on the part of both the leader and the learner.

    What should the situational leader do when a novice's performance is way out of bounds?

    You go back to goal setting. You say, 'I made a mistake. I must have given you something to do that you didn't understand. Let's backtrack and start again.'

    Trainers have to be able to praise, and they have to be able to admit to mistakes. We teachers should keep this in mind -- we aren't often inclined to admit that we are the reason students aren't succeeding. Instructors do face a harder task than Blanchard's managers in this regard, though. Situational Leadership ® is based on one-on-one interactions, but an instructor may have a class of 15 or 30 or 100, with students at various levels of development and performance and confidence. In an ideal world, all teacher-student interactions might be one-on-one or in small groups, but that's not the world we live in right now.

    As I read books like this one, I have to keep in mind that my context as teacher and future department head is somewhat different from the authors'. A department head is not in the same relationship to the faculty as a manager is to employees. The relationship is more "first among equals" (my next leadership book to read...). Still, I find a lot of value in learning about how to be more self-aware and flexible in my interactions with others.


    Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Teaching and Learning

    June 23, 2005 6:28 PM

    Developing Empathy

    One thing I have noticed in my last few weeks preparing to move into the Big Office Downstairs: I view the actions of the administrators around me in a different light. Where I might have reacted immediately to some behavior, often negatively, I now am a bit more circumspect. What could make that seem the right thing to do? If nothing else, I am aware that I will soon be in the position of having to make such decisions, and it probably looks easier to do than it is. Kind of like playing Jeopardy! from the comfort of your own home... even Ken Jennings is an easy mark when you're sitting on your sofa.

    Swapping roles is a great way to develop empathy for others. This is certainly true for students and teachers. I don't know how many students, after having to teach a short course at work or having to lecture in place of a traveling advisor, have told me, "I never knew how hard your job was!" Those students tend to treat their own instructors differently thereafter.

    Playing many different roles on a software team can serve a similar purpose. Developers who have tested or documented software often appreciate the difficulties of those jobs more than "pure" developers. Of course, playing different roles can help software people do more than develop empathy for their teammates; it can help them build skills that help them do all the jobs better. Writing and testing code come to my mind first in this regard.

    Empathy is a good trait to have. I hope to have more of it -- and put it to good use -- as a result of my new experience.


    Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Software Development, Teaching and Learning

    June 12, 2005 5:06 PM

    On Making Things Up

    As I think I've mentioned here before, I am a big fan of Electron Blue, a blog written by a professional artist who has taken up the study of physics in middle age. Her essays offer insights into the mind of an artist as it comes into contact with the abstractions of physics and math. Sometimes the two mindsets are quite different; at other times they are surprisingly in tune.

    I just loved this line, taken from an essay on some of the similarities between a creator's mind and a theoretical scientist's:

    When I read stuff about dark energy and string theory and other theoretical explorations, I sometimes have to laugh, and then I say, "And you scientists think that we artists make things up!"

    Anyone who has done graduate research in the sciences knows how much of theory making is really story telling. We in computer science, despite working with ethereal computation as our reality, are perhaps not quite so make-believe as our friends in physics, whose efforts to explain the workings of the physical world long ago escaped the range of our senses.

    Then again, I'm sure some people look at XP and think, "It's better to program in pairs? It's better to write code with duplication and then 'refactor'? Ri-i-i-i-ght." What a story!


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    June 09, 2005 6:19 PM

    Department Head as Teacher

    Some folks have expressed concern or even dismay that my becoming department head will pull me away from teaching. Administration, with its paperwork and meetings and bureaucracy, can't be as much fun as the buzz of teaching. And there's no doubt that teaching one course instead of three will create a different focus to my days and weeks.

    But the more I prepare for my move to the Big Office Downstairs, the more I realize that -- done well -- a head's job involves a fair amount of teaching, too, only in a different form and to broader audiences.

To address the problem of declining enrollments in our majors, we as a department need to educate students, parents, and high school counselors that this is the best time ever to major in computing. To ensure that the department has access to the resources it needs to do its job effectively, we as a department must educate deans, provosts, presidents, and state legislatures about the nature of the discipline and its needs. And that's just the beginning. We need to help high schools know how to better prepare students to study computer science at the university. We need to educate the general public on issues where computing intersects the public interest, such as privacy, computer security, and intellectual property.

These opportunities to teach are all about what computing is, does, and can be. They aren't one of those narrow and somewhat artificial slices of the discipline that we carve off for our courses, such as "algorithms" or "operating systems". They are about computing itself.

    The "we"s in the paragraph above refer to the department as a whole, which ultimately means the faculty. But I think that an important part of the department head's job is to be the "royal we", to lead the department's efforts to educate the many constituencies that come into contact with the department's mission -- suppliers, consumers, and everyone in between.

    So, I'm learning more about the mindset of my new appointment, and seeing that there will be a fair bit of education involved after all. I'm under no illusion that it will be all A-ha! moments, but approaching the job with an educator's mind should prepare me to be a more effective leader for my department. The chance to educate a broader audience about computer science and its magic should be a lot of fun. And, like teaching anything else, the teaching itself should help me to learn a lot -- in this case, about my discipline and its role in the world. Whether I seek to remain in administration or not, in the long run that should make me a better computer scientist.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Managing and Leading, Teaching and Learning

    June 07, 2005 1:44 PM

    Turning Students onto Entrepreneurship

    Modern invention has been a great leveler.
    A machine may operate far more quickly
    than a political or economic measure to
    abolish privilege and wipe out the distinctions
    of class or finance.

    -- Ivor Brown, The Heart of England

    I finally read Paul Graham's newest essay, Hiring is Obsolete, this weekend. I've been thinking about the place of entrepreneurship in my students' plans recently myself. When I gave my interview presentation to our faculty when applying to be department head, I talked about how -- contrary to enrollment trends and popular perception -- this is the best time ever to go into computer science. One of the biggest reasons is the opportunity for ambitious students to start their own companies.

    Philip Greenspun related an illustrative story in his blog:

    The engineering staff at Google threw a big party for Silicon Valley nerds last Thursday night [May 5], ...

    Larry Page, one of the founders, gave an inspiring talk about what a great time this is to be an engineer. He recalled how at one point Google had five employees and two million customers. Outside of Internet applications it is tough to imagine where that would be possible. Page also talked about the enjoyment of launching something, getting feedback from users, and refining the service on the fly. ...

    This sounds like the same sort of experience that Graham touts from ViaWeb.

    Admittedly, not every start-up will be a ViaWeb or a Google, but that's not the point. The ecosphere is rife with opportunities for small companies to fill niches not currently being served by larger companies. Not all such companies are doing work specifically for the web, but the web makes it possible for them to make their products visible and available. The web reduces a company's need for some of the traditional trappings of a business, such as a large, dedicated sales staff.

The sort of entrepreneurship Graham touts is more common in Silicon Valley and the Route 128 corridor, and more generally in large metropolitan areas, but Graham's advice applies even here in the Great Midwest -- no, not "even here", but especially here. The whole point of the web's effect on business is that almost anyone almost anywhere can now create a business that does something no one else is doing, or does something others are doing, only better, and make a lot of money. Ideas and hard work are now more important than location or who you know.

UNI students have had a few successes in this regard. I keep in close touch with one successful entrepreneur who is a former student of ours. When he was a student here, he already exhibited the ambition that would lead to his business success. He read broadly on the web and software and technology. He asked questions all the time. By the time he left UNI, he had already started a web hosting company with a vision to do things differently and better. I love to visit his company, give whatever advice I can still give, and learn from him and what he is doing.

Back in the old days, he would have moved to New York or San Francisco in order to start his first company -- because that's "where the action was". I'm sure that some people told him that he should move to Chicago or at least Minneapolis to have a chance to succeed. But he started his company right here in little ol' Cedar Falls, Iowa, and did just fine. He can enjoy the life available in a small city in a relatively rural part of America. His company's building is ten feet from a paved bike trail that circles a small lake and connects into a system of miles and miles of trails. His margins can be lower because his costs of doing business are lower. And working with the growing tech community here he can dream as big as he likes and is willing to work.

    This guy hasn't made it big like Graham or Page or Gates, but he is one example of the bountiful opportunities available to students studying at schools like UNI throughout the world. And he could never have learned as much or done as much if he had followed the steady flow of our students to the big-box insurance companies and service businesses that hire most of our students.

    How can we -- instructors and the world at large -- help students appreciate that the "cage is open", as Graham describes the Brave New World of business? The tendency of most university professors is to offer another course :-). When I was a grad student at Michigan State, I sat in on a course during my last quarter that was being offered jointly by the Colleges of Engineering and Business to teach some essential skills of the entrepreneurial engineer. I wish that it had come earlier in my studies because by then my mind was set on either going corporate (AI research with a big company like Ford or Price Waterhouse) or going academic.

There is certainly some value in incorporating this kind of material into our curricula and maybe even offering stand-alone courses with an entrepreneurial bent. But this transition in how the world works is more about attitude and awareness than the classroom. Students have to think of starting a company in the same way they think of going to work for IBM or going to grad school, as a natural option open to everyone. Universities will serve students better by making starting their own companies a standard part of how we talk about their futures and of the futures we expose them to.

    There are some particular skills that universities need to help students develop, beyond what we teach now. First and foremost is the ability to identify problems with economic potential. We are pretty good at helping students learn to identify cool problems with academic potential, because that's what we do when we do our own research. But a problem of basic academic interest rarely results in a program or service that someone would pay for, at least not enough someones to make it reasonable as the basis for a commercial venture. Graham gives some advice in his various essays on this topic, and the key almost always comes down to "Know users." Only by observing carefully people who are doing real work are we likely to stumble upon those inefficiencies that they would be willing to pay to make go away. Eric Sink has also written some articles useful to folks who are thinking about starting their own software house.

    The other things we teach students are still important, too. A start-up company needs programmers, people who know how to develop software well and who have strong analytic skills. "The basics" such as data structures, algorithms, and programming languages are still the basics. Students just need to have a mindset in which they look for ways to use these skills to solve real problems that real users have -- problems that no one else is solving for them yet.

    Hiring is Obsolete has more to say than just that students should consider being entrepreneurs. In particular, Graham talks about the opportunities available to large companies in an ecosphere in which start-ups do initial R&D and identify the most capable software developers. But I think that these big companies will take care of themselves. My interest is more in what we can do better in the university, including what we can do to get folks to see what a wonderful time this is to study computer science.

    I think I should take my next sabbatical working for one of my former students...


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    June 03, 2005 1:57 PM

    Reaping What You Sow

    I ran into this quote over at Ben Hyde's blog:

    Customers have a tendency to become like the kind of customers you treat them.

    Ben related the quote as a commentary on trust in commerce. (Trust and social relationships are ongoing themes of his blog.) He notes that he has observed this truth in many situations. I have, too. Indeed, I think this truth applies in almost all human relationships.

    (Like all generalizations, this one isn't foolproof, so feel free to prefix each of the following claims with "by and large" or your favorite waffle words.)

    Parents and Children

    Children grow into the people you expect them to be. The best sort of discipline for most children is to create an environment in which children know what your expectations are, and then live consistently in that way. Nagging youngsters doesn't work; they come to expect you to nag before they know you care about something. Yelling and screaming don't work, either; they come to think that you don't believe they can behave without being verbally assaulted. If you simply set a standard and then live as if you expect them to meet the standard, they will. When they don't, don't resort to needy negative reinforcement. Usually they know they've fallen short and strive to do better the next time.

    Teachers and Students

    Students become what their teachers expect of them, too. If you act as if they are not trustworthy, say, by creating elaborate rules for late work, cheating, and grading, they will soon look for ways to game the system. If you act as if they don't respect class time, say, by wasting it yourself through lack of preparation or rambling digression, they will soon come not to value their time in class.

    If you set a high standard and expect them to learn and achieve, they usually will. If you trust them with masterpieces, they will come to value masterpieces.

    Developers and Users

    The quote applies to all sorts of developer/user relationships. If software developers don't trust their clients, then their clients will start to look out for themselves at the developer's expense. If an API designer acts as if programmers are not smart or reasonable enough to use the API wisely, and so creates elaborate rituals to be followed to ensure that programmers are doing the right thing, then programmers will look for ways to game the API. The result is hacks that confirm the API designer's expectations.

    Agile methods place a high premium on developing a mutually beneficial relationship between the client and the programmer. The result is that programmers and clients feel free to be what they should be to one another: partners in creating something valuable.

    Managers and Team Members

    This truth is a good thing to keep in mind for someone embarking on an administrative or managerial position. When "bosses" treat their "employees" as adversaries in a competition, the employees soon become adversaries. They do so almost out of necessity, given the power imbalance that exists between the parties. But if a manager approaches the rest of the team with openness, transparency, and respect, I think that most members of the team will also respond in kind.

    Husbands and Wives

    All of the relationships considered above are hierarchical or otherwise imbalanced. What about peer relationships? I think the assertion still holds. In my many years of marriage, I've noticed that my wife and I often come to behave in the way we think our spouse expects. When one of us acts as if the other is untrustworthy, the other comes to protect his or her own interest. When one of us acts as if the other is incapable of contributing to a particular part of our lives together, the other stops caring to contribute. But when we act as if we are both intelligent, trustworthy, caring, and respectful, we receive that back from each other.

    ----

    Given its wide range of applicability, I think that the truism needs to be restated more generally. Perhaps:

    People tend to become like the kind of people you treat them to be.

    Or maybe we can restate it as a new sort of Golden Rule:

    Treat people like the kind of people you want -- or expect -- them to be.

    Or perhaps "Do unto others as you expect them to be."


    Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Software Development, Teaching and Learning

    June 02, 2005 6:51 PM

    Who Says Open Source Doesn't Pay?

    Google's Summer of Code flyer

    Leave it to the guys from Google to offer the Summer of Code program for students. Complete an open-source project through one of Google's collaborators, and Google will give you a $4500 award. The collaborators range from relatively large groups such as Apache and FreeBSD, through medium-sized projects such as Subversion and Mono, down to specific software tools such as Jabber and Blender. Of course, the Perl Foundation, the Python Software Foundation, and Google itself are supporting projects. You can even work on open-source projects in Lisp for LispNYC, a Lisp advocacy group!

The program bears a strong resemblance to the Paul Graham-led Summer Founders Program. But the Summer of Code is much less ambitious -- you don't need to launch a tech start-up; you only have to hack some code -- and so is likely to have a broader and more immediate effect on the tech world. Of course, if one of the SFP start-ups takes off like Google or even ViaWeb, then the effects of the SFP could be much deeper and longer lasting.

    This is another slick move from the Google image machine. A bunch of $4500 awards are pocket change to Google, and in exchange they generate great PR and establish hefty goodwill with the open-source organizations participating.

    From my perspective, the best part of the Summer of Code is stated right on its web page: "This Summer, don't let your programming skills lie fallow...". I give this advice to students all the time, though they don't often appreciate its importance until the fall semester starts and they feel the rust in their programming joints. "Use it, or lose it" is trite but true, especially for nascent skills that are not yet fully developed or internalized. Practice, practice, practice.

The Summer of Code is a great chance for ambitious and relatively advanced students to use this summer for their own good, by digging deep into a real project and becoming better programmers. If you feel up to it, give it a try. But even if you don't, find some project to work on, even if it's just one for your amusement. Perhaps I should say especially if it's just one for your amusement -- most of the great software in this world was originally written by people who wanted the end result for themselves. Choose a project that will stretch your skills a bit; that will force you to improve in the process. Don't worry about getting stuck... This isn't for class credit, so you can take the time you need to solve problems. And, if you really get stuck, you can always e-mail your favorite professor with a question. :-)

    Oh, if you do want to take Google up on its offer, you will want to hurry. Applications are due on June 14.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    June 02, 2005 1:46 PM

    On "Devoid of Content"

    A couple of days ago, blog reader Mike McMillan sent me a link to Stanley Fish's New York Times op-ed piece, Devoid of Content. Since then, several of my CS colleagues have recommended this article. Why all the interest in an opinion piece written by an English professor?

    The composition courses that most university students take these days emphasize writing about something: current events, everyday life, or literature. But Fish's freshman composition class does something much different. He asks students to create a new language, "complete with a syntax, a lexicon, a text, rules for translating the text and strategies for teaching your language to fellow students". He argues that the best way to learn how to write is to have a deep understanding of "a single proposition: A sentence is a structure of logical relationships." His students achieve this understanding by having to design the mechanisms by which sentences represent relationships, such as tense, number, manner, mood, and agency.

Fish stakes out a position that is out of step with contemporary academia: Learning to write is about form, not content. Content is not only not the point; it is a dangerous delusion that prevents students from learning what they most need.

    Content is a lure and a delusion, and it should be banished from the classroom. Form is the way.

Fish doesn't say that content isn't important, only that it should not be the focus of learning to write. Students learn content in their science courses, their social science courses, and their humanities courses -- yes, even in their literature courses.

    (I, for one, am pleased to see Fish distinguish the goals of the composition courses taught in English departments from the goals of the literature courses taught there. Too many students lose interest in their comp courses when they are forced to write about, oh, a poem by Edna St. Vincent Millay. Just because a student doesn't connect with twentieth-century lyric poetry doesn't mean that he shouldn't or can't learn to write well.)

So, how is Fish's argument relevant to a technical audience? If you have read my blog much, you can probably see my interest in the article. I like to read books about writing, to explore ways of writing better programs. I've also written a little about the role form plays in evaluating technical papers and unleashing creativity. On the other side of the issue, though, I have written several times recently about the role of real problems and compelling examples in learning to program. My time at ChiliPLoP 2005 was spent working with friends to explore some compelling examples for CS1.

    In the context of this ongoing discussion among CS educators, one of my friends sloganized Fish's position as "It's not the application, stupid; it's the BNF."

    So, could I teach my freshman computer programming class after Fish's style? Probably not by mimicking his approach note for note, but perhaps by adopting the spirit of his approach.

    We first must recognize that freshman CS students are usually in a different intellectual state from freshman comp students. When students reach the university, they may not have studied tense and mood and number in much detail, but they do have an implicit understanding of language on which the instructor can draw. Students at my university already know English in a couple of ways. First, they speak the language well enough to participate in everyday oral discourse. Second, they know enough at least to string together words in a written form, though perhaps not well enough to please Fish or me.

    My first-year programming students usually know little or nothing about a programming language, either as a tool for simple communication or in terms of its underlying syntactic structures. When Fish's students walk into his classroom, he can immediately start a conversation with them, in a rich language they share. He can offer endless example sentences for his students to dissect, to rearrange, to understand in a new way. These sentences may be context-free, but they are sentences.

In a first-year programming course, we instructors typically have to spiral our dissection of programs with the learning of new language features and syntax. The more complex the language, the wider and longer the spiral must be.

    Using a simple computer language might make an approach like Fish's work in a CS1 course. I think of the How to Design Programs project in these terms. Scheme is simple enough syntactically that the authors can rather quickly focus on the structure of programs, much as Fish focuses on the structure of sentences. The HtDP approach emphasizes form through its use of BNF definitions and "design recipes". However, I don't get the sense that HtDP removes content from the classroom so much as it removes it from the center of attention. Felleisen et al. still try to engage their students with examples that might interest someone.

    So, I think that we may well be able to teach introductory programming in the spirit of Fish's approach. But is it a good idea? How much of the motivation to learn how to program springs from the desire to do something particular? I do not know the answer to this question, but it lies at the center of the real problems/compelling examples discussion.

    In an unexpected twist of fate, I was thumbing through Mark Guzdial's new book, Introduction to Computing and Programming with Python: A Multimedia Approach, and read the opening sentences of its preface:

    Research on computing education clearly demonstrates that one doesn't just "learn to program." One learns to program something [5,20], and the motivation to do that something can make the difference between learning and not learning to program [7].

    (The references are to papers on situated learning of the sort Seymour Papert has long advocated.)

    I certainly find myself in the compelling problems camp these days and so am heartened by Guzdial's quote, and the idea embodied in his text. But I also feel a strong pull to find ways to emphasize the forms that will help students become solid programmers. That pull is the essence of my interest in documenting elementary programming patterns and using them to gain leverage in the classroom.

    Regardless of how directly we might use Fish's approach to teach first-year courses in programming, I am intrigued by what seems to me to be a much cleaner connection between his ideas and the CS curriculum, the traditional Programming Languages course! I'll be teaching our junior/senior level course in languages this fall, and it seems that I could adopt Fish's course outline almost intact. I could walk in on Day 1 and announce that, by the end of the semester, each group of students will have created a new programming language, complete with a syntax, a set of primitive expressions, rules for translating programs, and the whole bit. Their evolving language designs would serve as the impetus for exploring the elements of language at a deeper level, touching all the traditional topics such as bindings, types, scope, control structures, subprograms, and so on. We could even implement our growing understanding in a series of increasingly complex interpreters that extract behavior from syntactic expressions.
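The smallest such interpreter fits in a few lines. As a hedged sketch -- in Python rather than the Scheme we use in class, and with a toy prefix grammar of my own invention, not Babel -- here is the flavor of extracting behavior from syntactic expressions:

```python
# A toy interpreter for a tiny prefix language.
# Grammar:  expr ::= number | (op, expr, expr)    op ::= "+" | "-" | "*"

def evaluate(expr):
    if isinstance(expr, (int, float)):     # a number denotes itself
        return expr
    op, left, right = expr                 # otherwise, a compound expression
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    return ops[op](evaluate(left), evaluate(right))

print(evaluate(("+", 1, ("*", 2, 3))))     # 7
```

Each new language feature -- bindings, conditionals, procedures -- earns its own case in evaluate, which is exactly the spiral of increasingly complex interpreters described above.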

    Actually, this isn't too far from the approach that I have used in the past, based on the textbook Essentials of Programming Languages. I'll need to teach the students some functional programming in Scheme first, but I could then turn students loose to design and implement their own languages. I could still use the EOPL-based language that I call Babel as my demonstration language in class.

    School's barely out for the summer, and I'm already jazzed by a new idea for my fall class. I hope I don't peak too soon. :-)

    As you can see, there are lots of reasons that Fish's op-ed piece has attracted the attention of CS folks. It's about language and learning to use it, which is ultimately what much of computer science and software development are about.

    Have you heard of Stanley Fish before? I first ran into him and his work when I read a blog entry by Brian Marick on the role the reader plays in how we write code and comments. Brian cited Fish's work on reader-response criticism and hypothesized an application of it to programming. You may have encountered Fish through Brian's article, too, if you've read one of my oldest blog entries. I finally checked the book by Fish that Brian recommended so long ago out of the library today -- along with another Fish book, The Trouble with Principle, which pads my league-leading millions total. (This book is just for kicks.)


    Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

    May 31, 2005 5:50 PM

    Agile Moments from Primitive Obsessions

    [ See what happens when I start talking? I can't shut up. :-) ]

    My previous entry brought to mind two related ideas. Think of these as more Agile Moments.

    The Delgado Codex

    I ran across a neat little example of reflective practice on the major league baseball diamond in The Scholarly Rigor of Carlos Delgado. Carlos is the first baseman for the Florida Marlins in U.S. baseball's National League. He apparently has the agile habit of recording detailed information about every one of his at-bats in a notebook he keeps with him in the dugout. By collecting this data, he is able to derive feedback from his results and use that to improve his future at-bats.

    As the article points out, most professional sports teams -- at least in America -- record all of their performances these days and then mine the film for information they can use to do better next time out. Delgado, "like a medieval Benedictine at Monte Cassino", is one of the few major leaguers who still use the ancient technologies of pencil and paper to personally track his own performances. The act of writing is valuable on its own, but I think that as important is the fact that Delgado reflects on his performance immediately after the event, when the physical sensations of the performance are still fresh and accessible to his conscious mind.

    How is this related to my previous post? I think that we programmers can benefit from such a habit. If we recorded the smells that underlay our refactorings for a month or even a week, we would all probably have a much better sense of our own tendencies as programmers, which we could then feed back into our next code. And, if we shared our experiences, we might develop an even more comprehensive catalog of smells and refactorings as a community. If it's good enough for Donald Knuth, it ought to work for me, too.

    Agility works for Delgado, one of the better offensive players in all of baseball. Knowing about his habit, I'm even happier to have him as a member of my rotisserie baseball team, the Ice Nine. :-)

    Intentional Duplication

The Zen Refactoring thread on the XP mailing list eventually came around to the idea of deliberately creating code duplication. The idea is this: It is easier first to write code that duplicates other code and then to factor the duplication out later than it is to write clean code first or to refactor first and then add the new code.

    I operate in this way most of the time. It allows me to add a new feature to my code immediately, with as little work as possible, without speculating about duplication that might occur in the future. Once I see the actual duplication, I make it go away. Copy-and-paste can be my friend.

    This technique is one way that you can refactor your code toward suitable domain abstractions away from primitive data. When you run into a situation where you find yourself handling a tolerance in multiple places, Extract Class does the trick. This isn't a foolproof approach, though, as Chris Wheeler pointed out in his article. What happens when you have references to mass-as-an-int in zillions of places and only then does the client say, "Um, we allow tolerances of 0.1g"? Good design and good refactoring tools are your friend then.
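To make the Extract Class move concrete, here is a minimal, hypothetical sketch in Python. The Mass class and the 0.1g figure are my own illustrations of Wheeler's tolerance scenario, not his code:

```python
# Before: mass-as-an-int, compared exactly at every use site.
def shipment_ok(mass_g, target_g):
    return mass_g == target_g             # exact match, logic duplicated everywhere

# After Extract Class: the tolerance lives in one domain object, so
# "we allow tolerances of 0.1g" becomes a one-line change.
class Mass:
    TOLERANCE_G = 0.1

    def __init__(self, grams):
        self.grams = grams

    def __eq__(self, other):
        return abs(self.grams - other.grams) <= Mass.TOLERANCE_G

print(Mass(100.05) == Mass(100.0))        # True: within tolerance
print(Mass(100.2) == Mass(100.0))         # False: outside tolerance
```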

    In the same thread, Ron Jeffries commented:

    > I have also sometimes created the duplication, and then
    > worked to make the duplicated code as similar as possible
    > before removing the duplication. Does anyone else do this?

    I once saw Kent Beck do that in a most amazing way, but I haven't learned the trick of making code look similar prior to removing duplication; would love to see an example.
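I haven't seen Kent's example either, but here is a rough, hypothetical Python sketch of the general move: rewrite two near-duplicates until they differ in only a single name, at which point the duplication factors out almost mechanically:

```python
class Item:
    def __init__(self, price, weight):
        self.price, self.weight = price, weight

# Step 1: two near-duplicate functions, rewritten to be as similar as
# possible -- same loop shape, same variable names, differing only in
# the attribute each one reads.
def total_price(items):
    result = 0
    for item in items:
        result += getattr(item, "price")
    return result

def total_weight(items):
    result = 0
    for item in items:
        result += getattr(item, "weight")
    return result

# Step 2: with the difference reduced to one string, the duplication
# is easy to see and easy to remove.
def total(items, attribute):
    result = 0
    for item in items:
        result += getattr(item, attribute)
    return result

cart = [Item(10, 2), Item(5, 1)]
print(total(cart, "price"))    # 15
print(total(cart, "weight"))   # 3
```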

    Now I have fodder for a new blog entry: to write up a simple example of this technique that I use in my freshman-level OOP programming course. It's a wonderful example of how a little refactoring can make a nice design fall out naturally.

How is this related to my previous post? Duplication is one of my pet code smells, though I often create it willingly, with the intention of immediately factoring it out. Like Primitive Obsession, though, you have to learn to strike a proper balance between too little and too much. Just right is hard to find sometimes.


    Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

    May 31, 2005 4:58 PM

    Primitive Obsession and Balance

    Recently, Chris Wheeler posted a thoughtful blog entry called My Favourite Smells, which described only one smell, but one every OOP programmer knows deep in his soul: using a primitive data type instead of a domain-specific type. James Shore calls this smell Primitive Obsession, and Kevin Rutherford calls it Primitive Feature Envy.

    Chris has been justifiably lauded for starting a conversation on this topic. I find this smell in my programs and even in many of the programs I read from good OOP programmers. When programmers are first learning to write object-oriented programs, the tendency to write code in terms only of the primitive data types is hard to shake. Who needs that Piece class, when a simple integer flag taking values from 1 to 4 will do just fine? (Heck, we can always use Hungarian Notation to make the semantics clear. :-) When students come to see the value of that Piece class, they begin to see the value of OOP.
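A minimal sketch of that shift, with a hypothetical Piece class of my own invention for illustration:

```python
# Before: a piece is an integer flag from 1 to 4, and every use site
# must remember what the numbers mean.
def can_move_flag(piece, distance):
    if piece == 1:                 # 1 means "pawn"... or was it "king"?
        return distance == 1
    return True

# After: a Piece class names the concept and owns its own behavior.
class Piece:
    def __init__(self, name, max_distance):
        self.name = name
        self.max_distance = max_distance

    def can_move(self, distance):
        return distance <= self.max_distance

pawn = Piece("pawn", 1)
print(pawn.can_move(1))    # True
print(pawn.can_move(3))    # False
```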

    That said, we have to be careful to remember that this smell is not an absolute indicator. There is an inherent tension between the desire to create classes to model the domain more closely and the desire to do the simplest thing that could possibly work. If I try to wrap every piece of primitive data in its own class, I can very quickly create a lot of unnecessary code that makes my program worse, not better. My program looks like it has levels of abstraction that simply aren't there.

    That's not the sort of thing that Chris, James, and Kevin are advocating, of course. But we need to learn how to strike the proper balance between Primitive Obsession and Class Obsession, to find the abstractions that make up our domain and implement them as such. I think that this is one of the reasons that Eric Evans' book Domain-Driven Design is so important: it goes a long way toward teaching us how to make these choices. Ultimately, though, you can only grok this idea through experience, which usually means doing it wrong many times in both directions so that your mind learns the patterns of good designs. As Chris's, James's, and Kevin's articles all point out, most of us start with a predilection to err on the side of primitives.

    One way to push yourself to learn this balance is to use the Three Bears pattern first described by Kent Beck to create situations in which you confront the consequences of going too far one way or the other. I think that this can be another one of those programming etudes that help you become a better programmer, à la the Polymorphism Challenge that Joe Bergin and I used at SIGCSE 2005, in which we had folks rewrite some standard programs using few or zero if statements.
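To give a hypothetical taste of that etude: the conditional dispatch that an if statement or switch performs can be handed over to polymorphic method dispatch. The class and method names here are my own, not from the SIGCSE workshop.

```java
// Instead of:
//   if (kind == CIRCLE) { ... } else if (kind == SQUARE) { ... }
// each object knows how to compute its own area, and the language's
// dynamic dispatch does the selection for us.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area()  { return Math.PI * radius * radius; }
}

class Square implements Shape {
    private final double side;
    Square(double side)  { this.side = side; }
    public double area() { return side * side; }
}

class Shapes {
    static double totalArea(Shape[] shapes) {
        double total = 0.0;
        for (Shape s : shapes) {
            total += s.area();   // no if statement in sight
        }
        return total;
    }
}
```

The point of the etude is not that if statements are evil, but that forcing yourself to eliminate them reveals where polymorphism carries its weight.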

    In order to feel the kind of effects that Chris describes in his article, you have to live with your code for a while, until the specification changes and you need that tolerance around your masses. I think that the folks running the ICFP 2005 programming contest are using a wonderful mechanism to gauge a program's ability to respond to change. They plan to collect the contestants' submissions and then, a couple of weeks later, introduce a change in the spec and require the contestants to adapt their programs on an even shorter deadline. What a wonderful idea! This might be a nice way to help students learn the value of adaptable code. Certainly, Primitive Obsession is not much of a problem if you never encounter the need for an abstraction.

    Ron Jeffries posted a message to the XP discussion list earlier today confessing that one of his weaknesses as a programmer is a tendency to create small classes too quickly. As an old Smalltalker, he may be on the opposite end of the continuum from most folks. But we need a good name for the smell that permeates his code, too. My "Class Obsession" from above is rather pedestrian. Do you have a better name? Please share... Whatever the name, I suspect that Ron's recognition of his tendency makes it easier for him to combat. At least he knows to pay attention.

    Chris's article starts with the idea of favorite smells but then settles on one. It occurs to me that I should tell you my personal favorite smell. I suspect that it relates to duplication; my students certainly hear that mantra from me all the time. I will have to think a bit...


    Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

    May 25, 2005 1:37 PM

    Waiting

    Vaclav Havel

    Last weekend, while my daughter was doing a final practice for her Suzuki Book I recital, I picked Vaclav Havel's The Art of the Impossible: Politics as Morality in Practice off the piano teacher's bookshelf for a little reading. This is a collection of speeches and short essays that Havel wrote in the first half of the 1990s about his role as dissident, reformer, and president of the Czech Republic. He is, of course, famous as a poet, and his writing and speaking style have a poet's flair.

    I ended up spending most of my time with Havel's speech to the Academy of Humanities and Political Sciences in Paris on October 27, 1992. (I just noticed the date -- that's my birthday!) This speech discussed the different forms of waiting.

    Vladimir and Estragon in Waiting for Godot

    The first kind is sometimes characterized as waiting for Godot, after the absurdist play by Samuel Beckett. In this form, people wait for some sort of universal salvation. They have no real hope that life will get better, so they hold tightly to an irrational illusion of hope. Havel says that, for much of the 20th century, residents of the former communist world waited for Godot.

    At the opposite end of the waiting spectrum lies patience. Havel describes patience as waiting out of principle -- doing the right thing because it is right, not out of any expectation of immediate satisfaction. In this sense, patience is "waiting as a state of hope, not as an expression of hopelessness". Havel believes that the dissidents who ultimately toppled the communist regimes behind the Iron Curtain practiced this sort of waiting.

    When the curtain fell and the people of, say, Czechoslovakia took their first unsteady steps into the light of the western world, folks practicing the different forms of waiting encountered distinctive problems. Those who had been waiting for Godot felt adrift in a complex world unlike anything they had known or expected. They had to learn how to hope and to be free again.

    You might think that the patient dissidents would have adjusted better, but they faced an unexpected problem. They had hoped for and imagined a free world around them, but when they became free things didn't change fast enough. Havel relates his own struggles at being too idealistic and now impatient with the rate at which the Czech and Slovak republics assumed the mantle of democratic responsibility. Like many revolutionaries, he was criticized as being out of his element in the new world, most effective in the role of dissident but ineffective in the role of democratic leader.

    What struck me most about this essay came next. Havel recognized the problem: He had waited patiently as a dissident because he had no control over how anyone but himself behaved. Now that the huge impediment of an authoritarian regime had been surmounted, he found that he had become impatient for all the details of a democratic system to fall into place. He no longer waited well.

    In short, I thought time belonged to me. ...

    The world, Being, and history have their own time. We can, of course, enter that time in a creative way, but none of us has it entirely in his hands. The world and Being do not heed the commands of the technologist or the technocrat....

    In his own transition from dissident to democratic leader, Havel learned again that he had to wait patiently as the world takes its "own meandering course". He asserts that the "postmodern politician" must learn waiting as patience -- a waiting founded on a deep respect for the world and its sense of time. Read:

    His actions cannot derive from impersonal analysis; they must come out of a personal point of view, which cannot be based on a sense of superiority but must spring from humility.

    When the world changed, even in the way for which he had been working, Havel had to learn again how to be patient.

    I think that the art of waiting is something that has to be learned. We must patiently plant the seeds and water the ground well, and give the plants exactly the amount of time they need to mature.

    Just as we cannot fool a plant, we cannot fool history.

    I think that 'waiting patiently as the world takes its own meandering course' translates into showing respect for people and the rate at which they can assimilate new ideas and change their behavior.

    Perhaps this speech affected me as it did because I am now thinking about leading my department. I certainly do not face a situation quite as extreme as Havel did when the communist regime fell in Czechoslovakia, yet I am in a situation where people do not trust the future as much as I'd like, and I need to find a way to help my department move in that direction. As Havel reminds me, I cannot move the department myself; I can only patiently plant the seeds of trust, water the ground well, and give the plants the time they need to grow.

    The value of this sort of waiting is not limited to the world of administration. Good instructors need to wait patiently, working with students to create the atmosphere in which they can grow and then giving them time and opportunity to do so.

    I also think that this sort of waiting holds great value in the world of software development. Agile methods are often characterized by folks in the traditional software engineering world as impatient in their desire to get to code sooner. But I think the opposite is true -- the agile methods are all about patience: waiting to write a piece of code until you really know what it should do, and waiting to design the whole program until you understand the world well enough to do it right. In this sense, traditional software engineering is the impatient approach, telling us to presumptuously design grand solutions to force the world to follow our senses of direction and time. The worlds in which most programs live are too complex for such hubris.

    I cannot resist closing with one last quote from the rich language of Havel himself:

    If we are certain that the planting was good and that we are watering regularly, we have no reason to be impatient. It is enough to realize that our waiting has meaning.

    Waiting that has meaning because it grows out of hope and not hopelessness, from faith and not despair, from humility toward the time of the world and not out of fear of its sublime tranquility, is accompanied not by boredom but by suspense. This kind of waiting is more than just waiting.

    It is life. Life as the joyous involvement in the miracle of Being.

    That sounds like a poet speaking, but it could be a programmer. And maybe the world would be a better place if all administrators thought this way. :-)


    Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading, Software Development, Teaching and Learning

    May 23, 2005 1:50 PM

    Trusting Students with Hidden Masterpieces

    When I first started thinking about applying for the position of department head, a former student asked me how I give up time from my classroom teaching, which is where, from his point of view, the best part of an academic's life happens: the A-ha! moment, when a student suddenly gets it. This particular student likes to teach just for those moments.

    I like the A-ha! moments, too, and will miss having more opportunities to experience them. (For the next few semesters, I will be teaching one course each term rather than three.) Sometimes, though, other things are more important, and I think this is one of those times.

    A-ha! moments are wonderful, but there is another kind of moment that I find as gratifying if not more. The movie The English Patient captures one of these moments perfectly; writer Ron Rolheiser describes just how in one of his essays:

    In the movie, The English Patient, there's a wonderful scene, stunning in its lesson:

    A number of people from various countries are thrown together by circumstance in an abandoned villa in post-war Italy. Among them are a young nurse, attending to an English pilot who's been badly burned in an air-crash, and a young Asian man whose job it is to find and defuse land-mines. The young man and the nurse become friends and, one day, he announces he has a special surprise for her.

    He takes her to an abandoned church within which he has set up a series of ropes and pulleys that will lift her to the ceiling where, hidden in darkness, there are some beautiful mosaics and other wonderful works of art that cannot be seen from the floor. He gives her a torch as a light and pulls her up through a series of ropes so that she swings, almost like an angel with wings, high above the floor and is able to shine her torch on a number of beautiful masterpieces hidden in the dark.

    The experience is that of sheer exhilaration; she has the sensation of flying and of seeing wonderful beauty all at the same time. When she's finally lowered back to the floor she's flushed with excitement and gratitude and covers the young man's face with kisses, saying over and over again: "Thank you, thank you, thank you for showing this to me!"

    And, from her expression, you know she's saying thank you for two things: "Thank you for showing me something that I could never have come to on my own; and, thank you for trusting me enough to think that I would understand this, that I would get it!"

    I'm not in teaching for the kisses (if I were, then I would be judged a horrible failure), but this is the sort of moment that touches me deeply in teaching. Something like this doesn't happen all that often, but it happens as often as the revered A-ha! moment. And I think it is a better reflection of how a student-teacher relationship should be viewed. The best part of teaching lies not in giving the student new knowledge, or even helping the student to understand something complex or difficult -- though that's important. The best part is in helping students go places and see things that they wouldn't have gone and seen otherwise.

    I think that this is possible more often than we sometimes let ourselves believe. The key lies in the second part of the nurse's thank-you in the movie: We have to trust our students and apprentices to "get it", to appreciate beauty and elegance. We do more damage to learners by not trusting them than by explaining poorly or expecting too much.


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    May 16, 2005 3:41 PM

    Start With A Box

    before you can think out of the box,
    you have to start with a box
    -- Twyla Tharp

    I've been reading Twyla Tharp's The Creative Habit, and that quote is the title of Chapter 5. As soon as I read it, I scribbled it down immediately on the note card I had tucked in the back of the book for just such occasions. The idea of "starting with a box" struck me in a particular way, and as I read the chapter I went through a journey from thinking that Tharp meant something else to thinking that, ultimately, we were onto the same idea. For Tharp, "the box" is an organizational system, a way of storing and managing her memory as she works on a new creative project. But the box is more than that -- it is really about preparation.

    Bobby Knight, the famous US college basketball coach, once said something along the lines of, "The will to win is the most overrated desire in all of sports. Everybody has the will to win. What separates those who succeed is the will to prepare to win."

    I often encounter students who say that they can be successful contributors in the software industry despite their unwillingness to work hard to become really good programmers -- because they will be part of a team in industry. I guess they figure that some other member of the team is going to do the heavy lifting. My response to them is always the same: Why would a team want you as a member if you can't help with the most important product it produces? That's awfully presumptuous of you. For some folks, the desire not to program is borne out of a lack of interest in programming. But for many it grows out of a lack of success, and a consequent unwillingness to do the hard work needed to prepare for programming tasks.

    Most people can tell when they are unprepared for a job, though our egos sometimes hide the truth from us. Introductory students often come to my office with questions about a programming assignment, saying, "I know just what I want to do, but when I try to write the program I just can't say it in Java." (In earlier years, it was C++, or Pascal.) Again, my response is often the same: You don't really know what you want to say, or the program would come more easily. Tharp says so, too, using a journalist as her example:

    If his reporting is good, the writing will reflect that. It will come out quickly and clearly. If the reporting is shoddy, the writing will be, too. It will be torture to get the words out.

    Some may think that this all sounds bad for agile methodologies, which de-emphasize months of on-project preparation of requirements and design. But I think that this line of thought betrays a misunderstanding of agile methods, one reflected in the idea that they are reckless. Unprepared developers cannot write good software using an agile method. Agility doesn't mean not being prepared; it means letting your preparation work in concert with concrete steps forward to prepare you even better for what comes next.

    One thing I like about Tharp's chapter is that she doesn't stop at "be prepared". She reminds us that preparation is only preparation.

    The box is not a substitute for creating. The box doesn't compose or write a poem or create a dance step. The box is the raw index of your preparation. It is the repository of your creative potential, but it is not that potential realized.

    You still have to create! You have to write code. You have to pull your head out of design-land, out of the code library, out of the archives and actually begin writing code.

    Sadly, some people never get beyond the box stage in their creative life. We all know people who have announced that they've started work on a project ... but some time passes, and when you politely ask how it's going, they tell you that they're still researching. ... Maybe they like the comfort zone of research as opposed to the hard work of writing. ... All I know for sure is that they are trapped in the box.

    Some students do this, too. They work hard -- they really do -- but when the assignment comes due, there is no program to turn in; they never got around to actually producing something. And the program is the whole point of the assignment. Okay, okay, learning is probably the real point of the assignment, but you can't learn what you really need to learn until you write the program!

    This is where we can see a strong connection to the agile methods. They encourage us to be in a continuous state of writing small bits of code, testing and designing, cycling quickly between planning and producing.

    Tharp ends her chapters with exercises for the reader, activities that can help readers work on the idea they've just read about. The section at the end of "start with a box" consists of a single exercise, and it's not especially concrete. But it does have an agile feel to it. When you are having a hard time getting out of the box, take a big leap somewhere in the project. Don't worry about the beginning or the end; just pick something, an especially salient point (or user story), and get to work. You never know where it will take you.

    "Getting out of the box" was one of the primary motivations for me starting my blog. I had boxes and boxes of stuff (metaphorical boxes, of course -- mine were either computer files or file folders stuffed with hastily written notes). But I had stopped taking the risk of putting my thoughts out in the public space. In that regard, this blog has been a huge success for me.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    April 29, 2005 5:33 PM

    The End of the End of the Semester

    Something I plan never to say in class again:

    If this were a real program...

    ... because when I say it, it is almost always code for "I haven't bothered to show you a complete and useful program in which this idea (or pattern or construct) actually matters." Usually, the students and I both deserve better.

    Of course, it's easy for me to say this on the last day of classes before summer break. Let's see how well my resolve holds up in the heat of classes next fall.

    And for those of you who don't teach university courses, here is what most professors are thinking on the last day of classes before summer break:

    Wheeeeeeeeeeeeeeeeeeeeeee!

    Some of us even do the Dance of Joy (courtesy of camp '80s TV show Perfect Strangers).


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    April 26, 2005 5:40 PM

    Importing Language Features

    While doing some link surfing yesterday, I ran across an old blog entry that has a neat programming languages idea in it: Wouldn't it be nice if we could encapsulate language features?

    The essay considers the difference between two kinds of complexity we encounter in programs. One is complexity in the language features themselves. First-class closures or call-with-current-continuation are examples. Just having them in a language seems to complicate matters, because then we feel a need to teach people to use them. Even if we don't, some programmer may stumble across them, try to use them, and shoot himself in the foot. Such abstractions are sometimes more than the so-called ordinary programmer needs.

    Another kind of complexity comes from the code we write. We build a library of functions, a package of classes, or a framework. These abstractions can also be difficult to use, or it may be difficult to understand their inner workings. Yet far fewer people complain about a language having too many libraries. (*)

    Why? Because we can hide details in libraries, in two ways. First, in order to use Java's HashMap class, I must import java.util.HashMap explicitly. Second, once I have imported the class, I don't really need to know anything about the inner workings of the class or its package. The class exposes only a set of public methods for my use. I can write some pretty sophisticated code before I need to delve into the details of the class.
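To make the HashMap example concrete: a client touches only the explicit import and the class's public interface. The WordCount class below is my own illustrative name, not anything from the cited essay.

```java
// First kind of hiding: we must name the dependency explicitly.
import java.util.HashMap;
import java.util.Map;

// Second kind of hiding: we program against the public methods only,
// never against the hash table's internals.
public class WordCount {
    public static Map<String, Integer> count(String[] words) {
        Map<String, Integer> counts = new HashMap<>();
        for (String w : words) {
            // merge() is part of the public Map interface: insert 1
            // for a new word, or add 1 to the existing count.
            counts.merge(w, 1, Integer::sum);
        }
        return counts;
    }
}
```

Nothing in this client code depends on buckets, load factors, or rehashing; that is exactly the kind of encapsulation the next paragraph asks us to imagine for language features themselves.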

    Alexander asks the natural question: Why can't we encapsulate language features in a similar way?

    Following his example, suppose that Sun adds operator overloading to Java but doesn't want every programmer to have to deal with it. I could write a package that uses it and then add a new sort of directive at the top of my source file:

    exposeFeature operatorOverloading;

    Then, if other programmers wanted to use my package, they would have to import that feature into their programs:

    importFeature operatorOverloading;

    Such an information-hiding mechanism might make adding more powerful features to a language less onerous on everyday programmers, and thus more attractive to language designers. We might even see languages grow in different and more interesting ways.

    Allowing us to reveal complex language features incrementally would also change the way we teach and write about programming. I am reminded of the concept of "language levels" found in Dr. Scheme (and now in Dr. Java). But the idea of the programmer controlling the exposure of his code to individual language features seems to add a new dimension of power -- and fun -- to the mix.

    More grist for my Programming Languages and Compilers courses next year...

    ----

    (*) Well, unless we want to use the language to teach introductory CS courses, of course.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    April 20, 2005 11:16 AM

    At the End of an Empty Office Hour

    A while back on the mailing list for the textbook Artificial Intelligence: A Modern Approach, co-author Peter Norvig wrote:

    A professor who shall remain anonymous once thanked me for including him in the bibliography of AIMA. I told him that his work was seminal to the field, so of course I included it. "Yeah, yeah, I don't care about that," he said, "what I care about is that students look in the back of the book to see if my name is there, and if it is they think I'm important and they don't bother me in office hours as much."

    I am not cited in AIMA, but whatever I am doing to scare them off must be working.

    By the way, Norvig also wrote one of my favorite books on programming, Paradigms of AI Programming. Nominally, this book teaches AI programming in Lisp, but really it teaches the reader how to program, period. Norvig re-implements many of the classic AI programs, such as Eliza, GPS, and Danny Bobrow's Student, showing design and implementation decisions along the way. His use of case studies provides a lot of context in which to learn about building a program.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    April 14, 2005 6:00 PM

    Accountability in the Classroom

    I sometimes like to think about ways in which learning to program is like learning to speak a foreign language. Usually, I focus on similarities between the two, to see whether I can use a correspondence to improve how I present an idea. In the middle of class this morning, a difference between the two occurred to me, and now I wonder how I can use this idea to improve my courses on a daily basis.

    The difference: Students of foreign language are more easily, more obviously, and more naturally held accountable for their level of preparation and their current state of proficiency than are students of computer programming.

    There has been a lot of discussion recently on the XP discussion list about the idea of accountability. Apparently, this concept is a driving force behind the "new P" described in Kent Beck's second edition of XP Explained. (I haven't had a chance to read it yet.) Much of the discussion concerns just what the word means. For me, Merriam-Webster's definition seems clear enough:

    ... an obligation or willingness to accept responsibility or to account for one's actions

    Am I obliged to account for my actions and take responsibility for them? Then I am being held accountable. Am I willing to account for my actions and take responsibility for them? Then I am being accountable. (*)

    My compiler holds me accountable for the correctness of the code I write. Each time I compile, I find out if my program is syntactically correct. My unit tests hold me accountable for the correctness of the code I write. Each time I run my tests, the green bar tells me that my program is functionally correct -- or the red bar tells me otherwise.

    Of course, I have to compile my program (at least once :-), but I am not obliged to write and run tests. One of the beauties of the agile programming practices is their demonstration of programmers' willingness to be held accountable for their time and their efforts. Obligation is supplanted by willingness, which opens the programmer to a new level of growth and performance.

    The compiler holds students accountable, too. As they learn and use new ideas, the compiler and their testing give them a measure of their accomplishments. So, the more practice they get -- the more code they write, the more projects they do -- the more feedback they get about their level of proficiency.

    In a typical computer science course, the instructor has only a small number of opportunities to gauge each student's development. In my most programming-intensive courses, I ask students to write only 12 programs for evaluation. That is quite a lot in a fifteen-week semester, but it's not enough. I wish that I could interact with each student every day, gauging preparation and proficiency, folding what I learn back into my instruction.

    What do I do now? I give students in-class exercises and discuss solutions every day. But some students work on the exercises only half-heartedly, if only because the absence of a keyboard and a compiler makes writing much code tedious. The discussion usually goes pretty well, but only a small subset of the students tend to participate actively. In class this morning, the inadequacy of my approach seemed especially obvious.

    So, my mind wanders... How would a course in spoken German or French differ? My in-class exercises and discussions pale in comparison to what a foreign language teacher can do so naturally: start a conversation with a student! A classroom discussion can grow quite easily to include many students, because each interaction with a student exposes the student's level of preparation and proficiency. Human conversation works that way.

    I can draw students into classroom discussions, but there is a big difference between writing code and talking about code, even talking about writing code. Someone can talk about code, or the ideas underlying code, even if they have difficulty writing the same.

    Students who are willing to account for their work sometimes find that they are not asked to. Students who need to be obliged to account for their work -- who would benefit greatly from being held more accountable -- come to count on not being so held.

    This line of thought was triggered today by my recognition that a few students came to class unprepared this morning. A couple said so, honest that they hadn't studied the reading assignment yet still wanting to ask questions. A couple of others tried to blend in, relying on the fact that they would probably manage to get by. I wasn't in the mood to call them to account, but I was in the mood to think about the attitude itself.

    Owen Astrachan uses a great strategy for driving classroom interaction that has a side effect of holding students accountable for the time they spend in class on exercises. He passes blank transparencies out to groups of students working together on an exercise. They write their code on the transparencies. Finally Owen collects them and puts them on the projector for everyone to see and discuss. I don't know how often he makes the authors of a slide known to the class, or if he ever does. In either case, I think that this strategy creates an environment in which students feel accountable.

    Some students are intrinsically motivated to learn. They hold themselves accountable no matter what the instructor or other students do. But some university students are not quite ready for this level of autonomy. Some instructors and universities seem to adopt an attitude of sink-or-swim, leaving it to students to figure out that they have to take control of their own learning. In some contexts, this is the right thing to do. Ultimately, each student is responsible for his or her own learning, accountable only to themselves.

    The role of a teacher, though, is more than that, I think. Especially when working with children and even university freshmen, a teacher should help students learn to hold themselves accountable. I'd like to be able to readily recognize students who are struggling with material or not doing the work so that I can intervene. My German teachers could readily assess my level of preparation and proficiency by walking into the room, saying, "Guten Tag, Eugen! Wie geht's?" -- and then simply listening. Strategies like Owen's may be the best we can do in computer science, so we need to share them when we have them.

    When you have to write your code on a slide and give it to the professor for possible display in front of the whole class, you have a built-in accountability partner. The success of groups such as Alcoholics Anonymous is founded in large part on relationships in which one person helps another to hold himself accountable. Sometimes, just telling someone else that you are quitting smoking or trying to curb your profanity can be enough external support to do a better job. Who wants to disappoint a friend, or look weak? You might even tell the Internet.

    When a student signs up for a formal course in a topic, one element of the action is to put themselves in a formal accountability relationship. The teacher and the classmates act as accountability partners. Obviously, this isn't the only responsibility of a teacher, nor even the most important, but it is a good part of the deal for the learner.

    This has wandered a bit from the thought that triggered all this, which was something like, "If a student went to a French class as unprepared as some students come to their programming classes, they would be found out quickly. That would straighten them up!" (In moments of weakness, I sometimes surrender to temptation.) But ultimately I am motivated by a desire to do a better job as a teacher.

    It occurs to me that I have written a similar entry before...

    ----

    (*) I've not gotten into the mailing list discussion much, but this seems to be what Kent asks of himself and other programmers. It seems pretty reasonable to me, and at the heart of many agile development practices.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    April 08, 2005 5:45 PM

    On Introducing Agile Methods to Programmers

    After teaching Agile Software Development to university juniors and seniors last semester for the second time, and introducing test-driven design early in CS II this semester, I am coming to a deeper appreciation for how much agile methods require a change in deeply-rooted habits.

    It is easy for a person who has already developed a new habit (read: me) to walk into a classroom and make assumptions about what will motivate students to change their habits. It doesn't take long before new practices become uncomfortable enough that a learner prefers to drop back to what she knows best. In CS II, students soon bump into objects that are hard for them to test, file processors and GUIs among them. And, as Bill Caputo points out, the benefits of TDD don't just happen: "you have to want to find a way to structure the code so that it can be tested without resorting to resources beyond the test." That can be hard for any programmer learning to do TDD, and it's harder for students who are still learning how to program at all.

    I've been thinking about how to introduce these new ideas more effectively in the classroom. We university educators have something to learn from industry trainers who introduce agile methods to their clients. Brian Marick's recent article describes advice he gives to clients who ask him for help. Some of the details don't apply to my usual situation, such as "read Michael Feathers' wonderful Working Effectively with Legacy Code". Not that the book isn't wonderful -- it is! It's just that my students aren't usually ready for this book yet. But the more general advice -- work on developing habits deliberately, focus on test writing before refactoring, encourage collective learning and sharing -- can help me. I need to find ways to implement this advice in my context.

    I suppose that my experience as an agile methods practitioner tells me that I have something to learn from this book, too. But my students aren't always fearless, and neither am I.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    April 08, 2005 9:36 AM

    Techniques for Indirect Learning

    Writing my previous entry on designing the negative space around software reminded me of a couple of related techniques for learning. Like the negative space idea, both draw on ideas from other disciplines.

    Averted Vision

    Brian Marick once wrote about the technique known as "averted vision" in astronomy. It turns out that we can see some objects only if we don't look directly at them. Rather than looking directly at a faint object in the sky, astronomers look a little to the side of the object and find that they can see an otherwise invisible object in their peripheral vision. This bit of magic follows directly from the structure of the human eye: while the center of the eye needs relatively bright light in order to make out an object, the outer portions of the eye can detect the same object in much dimmer light.

    Brian reported success in applying this technique to learning a new topic. Rather than attacking the topic head on, he sometimes goes near it, studying its periphery and in particular its effects on other topics and ideas. I think this bit of magic follows directly from how the human mind works. Without much prompting, it tries to build a picture of what is happening in front of it, even when the particular thing isn't expressed or seen explicitly. Then, once our mind has constructed a topic in terms of its boundaries and effects, we can more effectively go into the heart of the topic. Indeed, our understanding of it may be stronger for having first situated it in its context.

    As one of my readers pointed out in response to the negative space piece, this is something that good writers take advantage of when describing a situation. Sometimes, you are better off saying less about something and letting the reader create her own image of it from what surrounds it. The image will be more vivid -- and the writing less blunt, more artful.

    I have lost my link to Brian's discussion of this. If anyone has it, please let me know.

    The Unsharp Mask Algorithm

    Andy Hunt describes how to apply the Unsharp Mask algorithm when learning. Unsharp Mask comes from the world of image-processing programs such as Photoshop, in which it is used to make images sharper. It does so by first blurring the image. You might not think this could work, but it does. By taking the image out of focus, the software softens lines that may be the side effect of noise or other anomalies. When the program tries to focus the image again, we sometimes discover details and boundaries that were not obvious at all before.
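    For concreteness, here is a rough sketch of the algorithm in Java, applied to a single row of grayscale values. The method name and the simple three-point blur are my own simplifications; a real image-processing program works in two dimensions and clamps results to the valid pixel range.

```java
// A sketch of unsharp masking on one row of grayscale values.
// The core formula: sharpened = original + amount * (original - blurred).
public class UnsharpMask {
    public static double[] sharpen(double[] pixels, double amount) {
        double[] result = new double[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            // a simple three-point blur; at the edges, reuse the center pixel
            double left  = (i > 0) ? pixels[i - 1] : pixels[i];
            double right = (i < pixels.length - 1) ? pixels[i + 1] : pixels[i];
            double blurred = (left + pixels[i] + right) / 3.0;
            // the difference between the original and the blur marks an edge;
            // adding it back exaggerates the edge, which reads as "sharper"
            result[i] = pixels[i] + amount * (pixels[i] - blurred);
        }
        return result;
    }

    public static void main(String[] args) {
        double[] row = { 0.0, 0.0, 9.0, 0.0, 0.0 };
        System.out.println(sharpen(row, 1.0)[2]);  // the spike grows: prints 15.0
    }
}
```

    The blur is what makes the sharpening possible: only by softening the image first can the program find the edges worth exaggerating.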

    Andy says, "I like to try to do something similar when faced with a sticky problem." He does so by taking his mind off the problem, doing something completely unrelated. That allows his mind to blur the edges of the problem for him, so that when he comes back to it he has to re-focus. In doing so, he may see something that he was missing before.

    I imagine that we all have this pattern. I also wonder if there might not be a more direct application of the pattern to sticky problems. Rather than going off for a walk or to wash dishes, maybe we could continue working -- but by stepping back from the problem to a larger context. From a distance, the edges of our problem begin to blur. They don't look quite as imposing. After thinking about the larger context for a while, we can slowly focus back in on the tough problem. As we return, we may re-form those edges in a different way, helping us to see finer details than we did before.

    That's all speculation, for now. I now go off in search of a problem to try it out on. :-)


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    April 07, 2005 7:49 AM

    Software in Negative Space

    Drawing on the Right Side of the Brain

    I started college as an architecture major. In the first year, architecture students took two courses each semester: Studio and Design Communications Media. The latter had a lofty title, but it focused on the most basic drawing skills needed in order to communicate ideas visually in architecture. At the time, we all considered it a freehand art class, and on the surface it was. But even then I realized that I was learning more.

    The textbook for DCM was Betty Edwards's Drawing on the Right Side of the Brain. It is about more than drawing as a skill; it is also about drawing out one's creativity. Studying this book, for the first time I realized that what we do in life depends intimately on what we see.

    Often, when people try to draw a common object, such as a chair, they don't really draw the chair in front of them but rather some idealized form, a Chair, seen in their mind's eye. This chair is like a Platonic ideal, an exemplar, that represents the idea of a chair but is like no chair in particular. When the mind focuses on this ideal chair, it stops seeing the real chair in front of it, and the hands go onto cruise control drawing the ideal.

    The great wonder in Edwards's book was that I could learn to see the objects in front of me. And, in doing so, I could learn to see things differently.

    Drawing on the Right Side of the Brain introduces a number of techniques for seeing an object, for the first time or in a new way, with exercises aimed at translating this new vision into more realistic depictions of those items on paper or canvas.

    a freehand rendition of the Japanese character 'ma'

    I recently found myself thinking again about one of the techniques that Edwards taught me, in the realm of software. Alan Kay has often said that we computer scientists focus so intently on the objects in our object-oriented programming that we miss something much more important: the space between the objects. He speaks of the Japanese word ma, which can refer to the "interstitial tissue" in the web of relationships that make up a complex system. On this view, the greater value in creating software lies in getting the ma right.

    This reminds me of the idea of negative space discussed in Edwards's book. One of her exercises asked the student to draw a chair. But, rather than trying to draw the chair itself, the student is to draw the space around the chair. You know, that little area hemmed in between the legs of the chair and the floor; the space between the bottom of the chair's back and its seat; and the space that is the rest of the room around the chair. In focusing on these spaces, I had to actually look at the space, because I don't have an image in my brain of an idealized space between the bottom of the chair's back and its seat. I had to look at the angles, and the shading, and that flaw in the seat fabric that makes the space seem a little ragged.

    In a sense, the negative space technique is merely a way to trick one's mind into paying attention to the world in a situation when it would prefer to lazily haul out a stock "kitchen chair" image from its vault and send it to the fingers for drawing. But the trick works! I found myself able to draw much more convincing likenesses than I ever could before. And, if we trick ourselves repeatedly, we soon form a new habit for paying attention to the world.

    This technique applies beyond the task of drawing, though. For example, it proves quite useful in communicating more effectively. Often, what isn't said is more important than what is said. The communication lies in the idea-space around the words spoken, its meaning borne out in the phrasing and intonation.

    The trigger for this line of thought was my reading an entry in Brad Appleton's blog:

    ... the biggest thing [about software design] that I learned from [my undergraduate experience] was the importance of what I believe Christopher Alexander calls "negative space", only for software architecture. I glibly summarized it as

    There is no CODE that is more flexible than NO Code!

    The "secret" to good software design wasn't in knowing what to put into the code; it was in knowing what to leave OUT! It was in recognizing where the hard-spots and soft-spots were, and knowing where to leave space/room rather than trying to cram in more design.

    Software designers could use this idea in different ways. Brad looks at the level of design and code: Leave room in a design, rather than overspecifying every behavior and entity that a program may need. But this sense of negative space is about what to leave out, not what to put in. The big surprise I had when using Edwards's negative space technique was that it helped me put the right things into my drawings -- by focusing on their complement.

    I often think that the Decorator design pattern embodies negative space: Rather than designing new classes for each orthogonal behavior, define objects as behaviors with receptacles into which we can plug other objects. The negative space in a Decorator is what makes the Decorator powerful; it leaves as many details uncommitted as possible while still defining a meaningful behavior. I suppose that the Strategy pattern does the same sort of thing, turned inside out.
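    As a concrete illustration (the names here are my own, not from any particular pattern catalog), a minimal Decorator in Java might look like this:

```java
// A minimal sketch of the Decorator pattern: an object defined as one small
// behavior plus a receptacle into which any other Message can be plugged.
interface Message {
    String text();
}

class PlainMessage implements Message {
    public String text() { return "hello"; }
}

class ShoutingMessage implements Message {
    private final Message wrapped;   // the receptacle -- the "negative space"

    ShoutingMessage(Message wrapped) { this.wrapped = wrapped; }

    // commit to one behavior; leave everything else to the wrapped object
    public String text() { return wrapped.text().toUpperCase() + "!"; }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        Message m = new ShoutingMessage(new PlainMessage());
        System.out.println(m.text());  // prints HELLO!
    }
}
```

    ShoutingMessage commits to only one detail and leaves the rest uncommitted: any Message at all can fill its receptacle.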

    Maybe we can take this idea farther. What would it be like to design a program not as a set of objects, or a set of functions, but as Kay's ma? Rather than design actors, design the spaces in between them. Interfaces are a simple form of this, but I think that there is something deeper here. What if all we defined were an amorphous network of messages which some nebulous agents were able to snatch and answer? Blackboard architectures, once quite common in artificial intelligence, work something like this, but the focus there is still on creating knowledge sources, not their interactions. (Indeed, that was the whole point!)

    Even crazier: what would it be like to design software not by focusing on the requirements that our users give us, but on the "negative space" around them? Instead of adding stuff to a concoction that we build, we could carve away the unwanted stuff from a block of software, the way a sculptor creates a statue. What would be our initial block of granite or marble? What would the carving feel like?

    Whether any of these farfetched ideas bears fruit, thinking about them might be worthwhile. If Alan Kay is right, then we need to think about them.

    Edwards's negative space technique makes for a powerful thinking strategy. Like any good metaphor, it helps us to ask different questions, ones that can help us to expose our preconceptions and subconscious biases.

    And it could be of practical value to software designers, too. The next time you are stumped by a design problem, focus on the negative space around the thing you are building. What do you see?


    Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

    April 04, 2005 3:17 PM

    Sticking to the Textbook

    On the flight home from my ChiliPLoP hot topic, I thought of a possible instance of creativity in sticking to a form, an unusual one in the realm of teaching. I wonder if I would be more creative when teaching a course if I subordinated my creative impulses to the design of someone's textbook.

    When I first started teaching as a graduate student, I followed the assigned textbook rather closely. I remember clearly a comment a student made in my course evaluation one semester: "This is a great course to take at 2:00 PM in the spring. The instructor follows the textbook, so I can skip class on Friday and know just what he will cover." Usually, I was teaching one section of a multiple-section course with real faculty teaching the other sections. Between having little experience for deviating from the book and not wanting to cause a problem in the department, sticking to the book seemed like the Right Thing to do.

    As I taught more and became 'real faculty' myself, I found myself deviating more and more. New ideas, new examples, new order of presentation -- all entered my mind and flowed out into my courses. Students had the textbook to read for a different, if not more complete, coverage of the material. And I got to do more of my own thing.

    These days, I tend to do my own thing all the time.

    For example: In Programming Languages, I loved the approach of Essentials of Programming Languages, but my students had no background in Scheme coming into the course, so we didn't get very deep into the book. As my lecture notes became quite extensive, and deviated from the book in places, I found the book to be less and less help. Eventually, I just jettisoned it.

    I just started teaching Algorithms last fall and so stuck with the textbook the previous instructor had used, Anany Levitin's Introduction to the Design and Analysis of Algorithms. My lecture notes are not yet nearly complete enough to replace a textbook, so I continue to assign substantial readings from the book. But I have several units during the semester that are do-my-own-thing (including Bloom filters and some fun pattern-based material) where the book is just along for the ride.

    I've never written up all the lecture material for my object-oriented programming course in a readable form, so the materials remain a hodge-podge of readable stuff, slides, and pointers to outside reading. Even so, my dissatisfaction with the available books led me to drop my textbook requirement beginning this spring. I don't know if this was a good idea from the students' perspective and won't know for sure until after the course is over. However, I don't yet see a marked difference in student performance between this semester and last.

    I wonder, though, if I have limited myself by giving myself too much freedom in some of these courses. Maybe if I stuck more closely to some textbook's topics and ordering, I would put myself in a position where I had to create something new and wonderful just to be able to say what I really want to say. Then again, maybe I'm just following a natural progression in each of my courses, starting close to some book and then growing into my own approach. The fact that every course I teach seems to follow a similar trajectory leads me to believe that the latter is happening and that everything is just fine.

    But I still have an incomplete feeling in the back of my mind that strapping onto a decent textbook might be a great way to energize myself again in some course, and unleash a burst of creativity.

    Maybe next fall. I teach Programming Languages again for the first time in a while, and graduate-level Algorithms for the first time ever.


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    March 30, 2005 12:45 PM

    ChiliPLoP, The Missing Days

    While posting my last entry, I realized that my blog gives the impression that ChiliPLoP'05 lasted only one day. Actually, it lasted from Tuesday evening through Friday's lunch. The reason I have not written more is that the conference kept me busy with the work I was there to do!

    Our hot topic group had a productive three days. On Days 2 and 3, we continued with our work on the examples we began on Day 1, refining the code for use by students and discussing how to build assignments around them.

    As the conference drew to its close on the last morning, our discussions turned to the broader vision of what a great new first course would be like. By the end of CS1, traditional curricula expose students to lots of complexity in terms of algorithmic patterns: booleans, selection, repetition, collections, and the like. We would like to create a course that does the same for object-oriented programming. Such a course would emphasize: objects, relationships among them, and communication. Concepts like delegation, containment, and substitution would take on prominent roles early in such a course.

    Current textbooks that put objects front and center fail to take this essential step. For example, they may define a Date class, but they don't then use Dates to build more complex objects, such as weeks or calendars, or let lots of Date objects collaborate to solve an interesting problem. Most of the programming focus remains in implementing methods, using the familiar algorithmic patterns.

    Our desire to do objects right early requires us to find examples for which the OO style is natural and accessible to beginners. Graphical and event-driven programs make it possible, even preferable, to use interesting objects in interesting ways early on, with no explicit flow of control. But what other domains are suitable for CS1, taking into account both the students' academic level and their expectations about the world of computers?

    Our desire also means that we must frame compelling examples in an object-oriented way, rather than an algorithmic way. For instance, I can imagine all sorts of interesting ways to use the Google API examples I worked on last week to teach loops and collections, including external and internal iterators. But this can easily become a focus on algorithm instead of objects. We need to find ways to inject algorithmic patterns into solutions that are fundamentally based on objects, not the other way around.

    The conference resulted in a strong start toward our goals. Now, our work goes on...


    Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

    March 30, 2005 10:44 AM

    Patterns as a Source of Freedom

    The problem about art is not finding more freedom,
    it's about finding obstacles.
    -- Richard Rogers

    I started teaching introductory programming back in the fall of 1992. Within a year or so, I found myself drawn to a style of teaching that would eventually be called patterns. At the time, I spoke of 'patterns' and 'templates', and it took a while for me to realize that the power of patterns lay more in understanding the forces that shape a pattern than in the individual patterns themselves. But even in their rawest, most template-like form, simple patterns such as Guarded Linear Search were of great value to my students.

    Back when I first started talking about these ideas with my colleagues, one of them almost immediately expressed reservations. Teaching patterns too early, he argued, would limit students' freedom as they learned to write programs, and these limitations would stunt their development.

    But my experience was exactly the opposite. Students told me that they felt more empowered, not less, by the pattern-directed style, and their code improved. One of the biggest fears students have is of the blank screen: They receive a programming assignment. They sit down in front of a computer, open a text editor, and ... then what? Most intro courses teach ways to think about problems and design solutions, but they tend to be abstract guidelines. Even when students are able to apply them, they eventually reach the moment at which they have to write a line of code ... and then what?

    The space of all possible programs that students can write overwhelms many novices. The most immediate value in knowing some programming patterns is that the patterns constrain the space to something more reasonable. If the problem deals with processing items in a collection, then students need only recognize that it is a search problem to focus in on a few standard patterns for solving such problems. And each pattern gives them concrete advice for writing actual code to implement it -- along with some things to think about as they customize the code to their specific situation.
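    To make that concrete, here is one way the Guarded Linear Search template might look in Java. The particular task (finding the first negative number) and the names are illustrative, not taken from the original pattern write-up.

```java
// A sketch of the Guarded Linear Search pattern: guard against running off
// the end of the collection, then test the current item.
public class GuardedSearch {
    // Return the index of the first negative number, or -1 if there is none.
    public static int firstNegative(int[] items) {
        int i = 0;
        while (i < items.length && items[i] >= 0) {  // guard, then test
            i = i + 1;
        }
        if (i < items.length) {
            return i;    // the search succeeded
        } else {
            return -1;   // we examined every item without finding a match
        }
    }

    public static void main(String[] args) {
        System.out.println(firstNegative(new int[] { 3, 8, -2, 5 }));  // prints 2
    }
}
```

    The customization points -- the item test and the two outcomes -- are exactly where the pattern's advice directs a student's attention.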

    So, rather than limiting novice programmers, patterns freed them. They could now proceed with some confidence in the task of writing code.

    But I think that patterns do more than help programmers become competent. I think that they help already competent programmers become creative.

    You can't create unless you're willing to subordinate the creative impulse to the constriction of a form.
    -- Anthony Burgess

    Just as too much freedom can paralyze a novice with an overabundance of possibility, too much freedom can inhibit the competent programmer from creating a truly wonderful solution. Sticking with a form requires the programmer to think about resources and relationships in a way that unconstrained design spaces do not. This certainly seems to be the case in the arts, where writers, visual artists, and musicians use form as a vehicle for channeling their creative impulses.

    Because I am an academic computer scientist, writing is a big part of my job, too. I learned the value of subordinating my creative impulse to the constriction of a form when I began to write patterns in the ways advocated within the patterns community. I've written patterns in the Gang-of-Four style, the PoSA style, the Beck form, and a modified Portland Form. When I first tried my hand at writing some patterns of knowledge-based systems, I *felt* constricted. But as I became familiar with the format I was using, I began to ask myself how best to express some idea that didn't fit obviously into the form. Often, I found that the reason my idea didn't fit into the form very well was that I didn't have a clear understanding of the idea itself! Holes in my written patterns usually indicated holes in my thinking about the software pattern. The pattern form guided me on the path to a better understanding of content that I thought I already knew well.

    Constraining ourselves to a form is but one instance of a more general heuristic for creating. Sometimes, the annoying little problems we encounter when doing a task can play a similar role. I love this quote from Phillip Windley, reporting some of the lessons that 37signals learned building Basecamp:

    Look at the problems and come up with creative solutions. Don't sweep them under the rug. Constraints are where creative solutions happen.

    Constraints are where creative solutions happen.


    Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

    March 24, 2005 11:29 AM

    Day 1 at ChiliPLoP: Examples on the Web

    Our first day at ChiliPLoP was a success. On our first evening in Carefree, my Hot Topic group decided to focus its energy on examples dealing with programs that interact over a network, when possible with services that provide access to large stores of current or compelling data. We alternately worked individually and in pairs on programs that access the current time from the official clock of the US, interact with Google web services, process streaming XML data, and interact with other students via Rendezvous. In just one day, we wrote some simple programs that are within reach of students learning to program in a first CS course.

    We kept each other honest with continuous devil's advocacy. In particular, we are sensitive to the fact that examples of this sort run the risk of scaring many CS1 instructors. They are nontraditional but, more, they carry the impression of being complex and "cutting edge", attributes that don't usually work in CS1. If the instructors don't know much about the Google API or Rendezvous, how can we expect students to get them?

    We decided that we will have to engage this possibility actively. First, we resolved not to use the term "web services" when describing any of our examples, even little-w "web services", in order not to alarm folks before they have a chance to see the simplicity of the programs we create and the value they have for working with students.

    Second, we know that we will have to be quite clear that learning about Rendezvous or the Google API is not the end goal of these examples. The goal is to help students learn how to write programs in an object-oriented language. We will need to package our examples with curricular materials that identify the programming elements and design ideas they best teach. This packaging is for instructors, not students, though as a result the students will benefit by encountering examples at the right point in their development as programmers.

    I am again reminded that a big part of our job in the elementary patterns community is educating computer science faculty, as much or more so than educating students!

    At the end of our day yesterday, we had a nice no-technology discussion out on the veranda of the conference center about our day's work. Devil's advocacy led us into a challenging and sometimes frustrating discussion about the role of "magic" in learning. Some in our group were concerned that the Rendezvous example will look too much like magic to the students, who write very simple objects that interact with very complex objects that we supply. Others argued that students working with objects in a more realistic context is a good thing and, besides, students see magic all the time when learning a language. Java's System.out.println() is serious magic -- bad magic, many of Java-in-CS1's critics would say. But let's be honest: Pascal's writeln() was serious magic, too. Ahh, those critics often say, it's simpler magic (no System.out abstraction) and besides, it's part of the base language. Hence the frustration for me in this discussion, and in so many like it.

    If too much magic is a deal breaker, then I do not think we can do compelling examples with our students. These days, compelling examples require rich context, and rich context early in the course requires some magical objects.

    But I don't think magic is a deal breaker. The real issue is gratuitous magic versus essential magic. In Java,

    public static void main( String[] args )

    is gratuitous magic. Interacting with powerful and interesting objects is essential magic. It is what object-oriented programming is all about! It's also what can make an example compelling to students raised in an era of instant messaging and Amazon and eBay. What we as instructors need to do is to build suitable boundaries around complexity so that students can explore without fear and ultimately open up the boxes and see that what's inside isn't magic at all but simply more complex examples of the same thing that students have been learning all along: powerful objects sending messages to one another.

    Right now, we are all expanding on our examples from yesterday. Later, I hope that we have a chance to mine some of the elementary patterns that show up across our examples. We are at a PLoP conference after all!


    Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

    March 14, 2005 2:47 PM

    Problems Are The Thing

    In my last entry, I discussed the search for fresh new examples for use in teaching an OO CS1 course. My LazyWeb request elicited many interesting suggestions, some of which have real promise. Thanks to all who have sent suggestions, and thanks for any others that might be on the way!

    While simple graphical ball worlds make for fun toy examples, they don't seem as engaging to me as they did a few years back. As first demos, they are fine. But students these days often find them too simplistic. The graphics that they can reasonably create in the first course pale when compared to the programs they have seen and used. To use ball world, we need to provide infrastructure to support more realistic examples with less boilerplate code.

    Strategy and memory games engage some students quite nicely. Among others, I have used Nim, Simon, and Mastermind. Mastermind has always been my favorite; it has so many wonderful variations and offers so many opportunities to capitalize on good OO design.

    Games of all sorts seem to engage male students more than female students, which is a common point of conversation when discussing student retention and the diversity of computer science students. We need to use a broader and more interesting set of examples if we hope to attract and retain a broader group of students. (More on whether this should be an explicit goal later.)

    I think that this divide is about something much more important than just the interests of men and women. The real problem with games as our primary examples is that these examples are really about nothing of consequence. Wanting to work on such a problem depends almost solely on one's interest in programming for programming's sake. If students want to do something that matters, then these examples won't engage their interest.

    This idea is at the core of our desire to create a new sort of CS 1 book. As one of my readers pointed out in an e-mail, most introductory programming instruction isn't about anything, either. It "simply marches through the features of whatever language is being used" at the time. The examples used are contrived to make the language feature usable -- not always even desirable, just usable.

    Being about nothing worked for Seinfeld, but it's not the best way to help students learn -- at least not if that's all we offer them. It also limits the audience that we can hope to attract to computing.

    So much of computer science instruction is about solutions and how to make them, but the solutions aren't to anything in particular. That appeals to folks who are already interested in the geeky side of programming. What about all those other folks who would make good computer scientists working on problems in other domains? Trying to create interesting problems around programming techniques and language features is a good idea, but it's backward.

    Interesting problems come from real domains. I learned the same lesson when studying AI as a graduate student in the 1980s. We could build all the knowledge-based systems we wanted as toys to demonstrate our techniques, but no one cared. And besides, where was the knowledge to come from? For toy problems, we had to make it up, or act as our own "domain experts". But you know, building KBS was tougher when we had to work with real domain experts: tax accountants and chemical engineers, plant biologists and practicing farmers. The real problems we worked on with real domain experts exercised our techniques in ways we did not anticipate, helping us to build richer theories at the same time we were building programs that people really used. I thank my advisor for encouraging this mindset in our laboratory from its inception.

    Real domain problems are more likely to motivate students and teachers. They offer rich interconnections with related problems and related domains. Some of these problems can be quite simple, which is a good thing for teaching beginning students. But many have the messy nature that makes interesting ideas matter.

    At OOPSLA last year, Owen Astrachan was touting the new science of networks as a source of interesting problems for introductory CS instruction. The books Linked and Six Degrees provide a popular introduction to this area of current interest throughout academia and the world of the Web. Even the idea of the power law can draw students into this area. I recently asked students to write a simple program that included calculating the logarithm of word counts in a document, without saying anything about why. Several students stopped by, intrigued, and asked for more. When I told them a little about the power law and its role in analyzing documents and explaining phenomena in economics and the Web, they all expressed interest in digging deeper.
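    The word-count exercise is easy to sketch. Here is a hypothetical version in Python (my students wrote theirs in Java, and the function name is my own invention): count the words in a document, then pair the logarithm of each word's rank with the logarithm of its count. Under a power law such as Zipf's, those points fall roughly along a straight line.

```python
from collections import Counter
from math import log

def log_rank_frequency(text):
    """Pair each word's log-rank with the log of its count.

    Under Zipf's law, these (log rank, log count) points fall
    roughly on a straight line with slope near -1.
    """
    counts = Counter(text.lower().split())
    ranked = sorted(counts.values(), reverse=True)
    return [(log(rank), log(count))
            for rank, count in enumerate(ranked, start=1)]

points = log_rank_frequency("the cat sat on the mat and the dog sat too")
# the most frequent word ("the", 3 occurrences) sits at rank 1: (log 1, log 3)
```

    Plotting those pairs for a real document is where the intrigue comes in: students see the line emerge and want to know why.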

    Other problems of this sort are becoming popular. We have started an undergraduate program in bioinformatics and have begun to explore ways to build early CS instruction around examples from that domain. The availability of large databases and APIs opens new doors, too. Google and Amazon have opened their databases to outside programmers. At last year's ITiCSE conference the invited talks all focused on the future of CS instruction by going back to a time in which computing research focused on applied problems.

    If you've been reading here for a while, then you have read about the importance of context in learning before. A big part of Alan Kay's talks at ITiCSE focused on how students can learn about the beauty of computing by doing real science. His eToys project has students build simulations of physical phenomena that they can observe, and in doing so learn a lot about mathematics and computation. But this is a much bigger idea in Kay's work. If you haven't yet, read his Draper Prize talk, The Power of the Context. It speaks with eloquence about how people have great ideas when they are immersed in an environment that stimulates thought and connections. Kay's article says a lot about how the working conditions in a lab -- and, perhaps most importantly, its people -- foster such an environment. In an instructional setting, the teacher and fellow students define most of this part of the environment. Other people can be a part of the environment, too, through their ideas and creations -- see my article on Al Cullum and his catchphrase "a touch of greatness".

    But the problems that we work on are also an indispensable element in the context that motivates people to do great work. In his Draper Prize talk, Kay speaks about how his Xerox PARC lab worked with educators, artists, and engineers on problems that mattered to them, along with all the messy distractions that those problems entail. Do you think that the PARC folks would have created as many interesting tools and ideas if they had been working on "Hello, World" and other toy problems of their own invention? I don't.

    Alan's vision in his OOPSLA talks was to create the computing equivalent of Frank Oppenheimer's Exploratorium -- 500 or more exciting examples with which young people could learn about math, science, computation, reading, and writing, all in context. With that many problems at hand, we wouldn't have to worry as much about finding a small handful of examples for use each semester. Every student would likely find something that attracted him or her, something to engage their minds deeply enough that they would learn math and science and computing just to be able to work on the problem that had grabbed hold of them. The real problem engages the students, and its rich context makes learning something new worthwhile. Whenever the context of the problem is so messy that distractions inhibit learning, we as instructors have to create boundaries that keep the problem real but let the students focus on what matters.

    Having students work on real problems offers advantages beyond motivation. Remember the problem of attracting and retaining a wider population of students? Real problems may help us there, too. We geeks may like programming for its own sake, but not everyone who could enrich our discipline does. Whatever their natural interests and abilities, different kinds of people seem to have different values that affect their choice of academic disciplines and jobs. This may explain some of the difficulty that computer science has attracting and retaining women. A study at the University of Michigan found that women tend to value working with and for people more than men do, and that this value accounted at least in part for women choosing math and CS careers less frequently: they perceive that math and CS are less directly about people than some other disciplines, even in the sciences. Even if women did not come in with this perception, many of our CS 1 and CS 2 courses would give it to them right away. But working on real problems from real domains might send a different signal: computing provides an opportunity to work with other people in more different ways than just about any other discipline!

    I know that this is one of the reasons I so loved working in knowledge-based systems. I had a chance to work with interesting people from all over the spectrum of ideas: lawyers, accountants, scientists, practitioners, .... And in the meantime I had to study each new discipline in order to understand it well enough to help the people with whom I worked. It was a constant stream of new and interesting ideas!

    I don't want you to think that no one is using real problems in their courses. Certainly a number of people are. For example, check out Mark Guzdial's media computation approach to introductory computing courses. Media computation -- programs that manipulate images, sounds, and video -- seems like a natural way to go for students of this generation. I think an example of this sort would make a great starting point for my group's work at ChiliPLoP next week. But Mark's project is one of only a few big projects aimed in this direction. If we are to reach an Exploratorium-like 500 great CS 1 examples, then we all need to pitch in.

    One downside for instructors is that working with real problems requires a lot of work up front. If I want to use the science of networks or genomics as my theme for a course, then I need to study the area myself well in advance of teaching the course. I have to design all new classroom examples -- and programming assignments, and exam questions. I will probably have to build support software to shield my students from gratuitous complexity in the domain and in the programming tools they use.

    Another potential downside for someone at a small school is that an applied theme in your course may appeal to some students but not others, and your school can only teach one or a few sections of the course at any one time. This is where Alan Kay's idea of having a large laboratory of possibilities becomes so appealing.

    This approach requires work, but my ChiliPLoP colleagues seem willing to take the plunge. I'll keep you posted on our efforts and results as they progress.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    March 13, 2005 11:35 AM

    Looking for Engaging Examples

    Some of my favorite colleagues and I will be getting together in a week or so for ChiliPLoP 2005, where we will continue to work on our longstanding project, to produce a different sort of textbook for the CS 1 course. We'd like to create a set of instructional units built around engaging and instructive applications. In a first course, these applications will have to be rather small, but we believe students can learn more and better about how to program when their code has a context.

    In many ways, Mike Clancy's and Marcia Linn's classic Designing Pascal Solutions serves as my goal. Clancy and Linn did for structured programming and Pascal what I'd like for our work to do for object-oriented programming and Java (or whatever succeeds it). Their case studies focus on simple yet complete programs such as a calendar generator, a Roman numeral calculator, and a text processor, using them as the context in which students learn the basics of computation and Pascal syntax. Along the way, they also learn something about how to write programs, which is, in many ways, the central point of the course. This book, and its data structures follow-up, implement a wonderful teaching idea well. I think we can do this for OOP, and we can use what we've learned about patterns in the intervening 15 years to do it in an especially effective way.
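    To give a flavor of the scale involved, the Roman numeral case study is small enough to sketch in a few lines. This is a hypothetical reconstruction in Python, not Clancy and Linn's Pascal; the subtractive rule (IV, IX, and so on) is the one wrinkle worth teaching.

```python
# Hypothetical sketch of a Roman-numeral case study, not the original Pascal.
ROMAN_VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50,
                'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(numeral):
    """Convert a Roman numeral string to an integer.

    A symbol placed before a strictly larger one is subtracted (IV = 4);
    otherwise symbols simply add (VI = 6).
    """
    total = 0
    for i, ch in enumerate(numeral):
        value = ROMAN_VALUES[ch]
        # subtract when a strictly larger symbol follows this one
        if i + 1 < len(numeral) and ROMAN_VALUES[numeral[i + 1]] > value:
            total -= value
        else:
            total += value
    return total
```

    A program this size is complete, testable, and about something, which is precisely what makes it a good case study.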

    Such an approach requires that we identify and work out in some detail several such examples. The old examples worked great in a text-oriented world in which students' experience with computers was rather limited, but we can surely do better. Our students come to the university with broad experience interacting with computer systems. Cell phones, iPods, and TiVo are a part of the fabric of their lives. Besides, objects and languages like Java bring graphical apps, client-server apps, web apps, and other more sophisticated programs within reach.

    A canonical first example for a graphical OOP introduction to programming is the ball world, a framework for simple programs in which balls and other graphical elements move about some simulated world, interacting in increasingly sophisticated ways. The folks at Brown and Duke have been developing this line of examples for a decade or more, and Tim Budd wrote a successful book aimed at students in a second or third course in which this is the first "real" Java program students see.

    But ball world is tired, and besides it doesn't make much of a connection to the real world of computing these days. It might work fine to introduce students to the basics of Java graphics, but it needs serious work as an example that can both be used early in CS 1 and engage many different students.

    This is drifting dangerously toward a longer and different essay that I've been meaning to write for a while now, but I don't have the time this morning. That essay will have to wait until tomorrow. In the meantime, I'll leave you with a catchphrase that Owen Astrachan was touting at OOPSLA last fall: Problems are the thing. Owen's right.

    Returning to the more immediate issue of ChiliPLoP 2005 and our Hot Topic, if you have an idea for a motivating example that might engage beginning programmers long enough for us to help them learn a bit about programming and objects, please pass it on. Maybe we can make it one of our working examples!


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    March 09, 2005 3:25 PM

    The Capacity for Experience

    Men are wise in proportion, not to their experience,
    but to their capacity for experience.
    -- George Bernard Shaw

    Yesterday in my CS II course, my students and I discussed some of the possible designs for their current assignment and how the decisions each programmer makes will change the nature of the program each writes. Later in the day, I received an e-mail message from one of the students. It ended:

    P.S. Got home today, and started lab over, got to the same spot as before in under 2 hours, code is much easier to read and look at! Thanks.

    My immediate thought was, "This guy has what it takes."

    When I encounter such students, I don't need to see their code to know that they will eventually become good software developers, if their interest in computing stays strong. They have the most important characteristic of any learner: the capacity for experience.

    This idea seems to be a theme running through many of my own experiences and reading lately. First, I remember reading this piece by Johanna Rothman. It distinguishes "several years of experience" from "the same year of experience several times". Johanna's blog focuses on hiring technical people, and she suggests questions you can ask job applicants in order to decide which kind of person they are. A candidate question is "Tell me about the first time you did that kind of work."

    People who know the difference between the same year of experience multiple times and multiple years of experience have answers to these questions. People who do the same thing year-in, year-out don't have good answers.

    I see the same thing in college students. Some students would use Johanna's question as a springboard into an engaging conversation about what they learned in the process of that first time. We end up talking not only about the first time but many times since. Even in the conversation we both may end up learning something, as we complement one another's experiences with our own.

    Other students look at me unknowingly. They are often the ones who are still hitting the delete key 67 times instead of having taken the time to learn dd in vi (ctrl-k or something similar for emacs devotees :-).

    Then I re-read Malcolm Gladwell's The Physical Genius after running into Pragmatic Dave's short note on making mistakes. I first read Gladwell's article a few years ago, when someone introduced me to his writing, but this time it struck me in the context of my experiential theme. Not only do these physical geniuses prepare obsessively, they also share a deep capacity for recognizing their own mistakes and learning from them. Rather than hide their errors behind a wall of pride, they almost revel in having recognized and overcome them. They use their experiences to create experience.

    Dave quotes Gladwell quoting Charles Bosk, who had tried to find a way to distinguish good surgeons from the not so good:

    In my interviewing, I began to develop what I thought was an indicator of whether someone was going to be a good surgeon or not. It was a couple of simple questions: Have you ever made a mistake? And, if so, what was your worst mistake? The people who said, 'Gee, I haven't really had one,' or, 'I've had a couple of bad outcomes but they were due to things outside my control' -- invariably those were the worst candidates. And the residents who said, 'I make mistakes all the time. There was this horrible thing that happened just yesterday and here's what it was.' They were the best. They had the ability to rethink everything that they'd done and imagine how they might have done it differently.

    Johanna Rothman would probably agree with Bosk's approach.

    Some folks don't seem to have the mindset of Bosk's successful surgeons, at least by the time they reach my classroom, or even when they reach the workplace. In that same CS II class yesterday, I sensed that a few of the students were uncomfortable with the notion that they themselves had to make these design decisions. What if they made the wrong choice? I suggested to them that none of the choices was really wrong, and that each would make some things easier and other things harder. Then I told them that, even if they came to rue their choice of an Object[] over a Vector, that's okay! They can rest assured that the alternative would have been rueful in another way, and besides they will have learned something quite useful along the way. That was no comfort at all to the students who were most uncomfortable.

    So much of what students learn in college courses isn't listed in the course catalog description. I hope I can help some of my students to learn new habits of mind in this regard. Developing a capacity for experience is more important than learning the habit of TDD.

    Then again, software people with a capacity for experience seem to gather in places where practices like TDD become a focal point. Patterns. Pragmatic programming. XP. Agile software development. All grew out of folks reflecting on their successes and failures with humility and asking, "How can we do better?" The student who sent me that e-mail message will fit in quite nicely.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    March 04, 2005 4:06 PM

    Creativity, Productivity, Discipline, Flow

    When I first learned that Mihaly Csikszentmihalyi (whose name is pronounced 'me-HI chick-SENT-me-hi') would be speaking here, I hoped that he might be able to help me to understand better the role of flow in learning and programming. These ideas have been on my mind occasionally, through my experience with the Suzuki method of piano instruction and Alan Kay's comments on the same in his talks at OOPSLA. When I left the lecture room last night, I was a bit disappointed with the talk, but as time passes I find that its ideas have me thinking...

    Csikszentmihalyi opened by explaining that his interest in creativity began as a child in Hungary. Many of his family members were artists and lived rich lives, but most people he knew lived "fragile lives", all too sensitive to the vagaries of war and economy. He chose to study creative people to see how everyone can live a better life, a less fragile life, a beautiful life.

    Through his studies, he came to distinguish "small c" creativity and "big C" Creativity. To some degree, we all are creative in the small-c way, doing things that enrich our own lives but do not receive recognition from the outside world. Big-c creativity is different -- it produces ideas that "push our species ahead". At first, the use of recognition to distinguish quality of creativity seemed incongruent, but I soon realized that creative ideas take on a different form when they move out into the world and mingle with the rest of a domain's knowledge and culture. Csikszentmihalyi came back to this nugget later in his talk.

    Big-C creativity is rare. Defining it is impossible, because every definition seems to over- or underconstrain something essential to creativity from our own experience. But Csikszentmihalyi asserted that society values Creativity for its ability to transform people and cultures, and that it drives creative people to "pursue to completion" the creative act.

    To support his claim of scarcity and to begin to expose the culture's role in recognizing creativity, Csikszentmihalyi presented some empirical studies on the relationship between individuals' contributions and their disciplines. The first was the so-called Lotka Curve. Alfred Lotka was an eminent biophysicist of the early 1900s. In the mid-1920s, he identified a pattern of publication in scientific journals. Roughly 60% of all people who publish a journal article publish only one. The percentage of people who publish two articles is much smaller, and the percentage falls rapidly as the number of articles increases. This creates an L-shaped graph of the function mapping number of publications onto the percentage of contributors at that level.

    The second was Price's Law. He referred not to the law from physics, which describes a model of gravitational collapse and the "strong cosmic censorship conjecture" (a great name!), but to another model of contribution: one-half of all contributions in any domain are made by the square root of all potential contributors. According to Csikszentmihalyi, Price derived his law from data in a large variety of domains.
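    For concreteness, here is a quick numerical check of both models -- my own arithmetic, not figures from the talk. Lotka's law is usually stated as an inverse-square law: the fraction of authors with exactly n papers is proportional to 1/n². Since the sum of 1/n² over all n is π²/6, the single-paper fraction normalizes to 6/π² ≈ 0.61, matching the roughly 60% figure. Price's law then says about √N of N potential contributors account for half the work.

```python
from math import pi, isqrt

# Lotka: fraction of authors with exactly n papers is (6/pi^2) * (1/n^2).
# The constant 6/pi^2 normalizes the distribution, since sum(1/n^2) = pi^2/6.
def lotka_fraction(n):
    return (6 / pi**2) / n**2

# Price: roughly sqrt(N) of N potential contributors produce half the output.
def price_core(n_contributors):
    return isqrt(n_contributors)
```

    So in a field of 100 potential contributors, Price's law predicts that about 10 of them produce half of everything.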

    I do not have citations to refereed publications on either of these models, so I'm at the speaker's mercy as to their accuracy. The implication is substantial: in any group, most people contribute little or nothing to the group. Perhaps that is stated too strongly as a generalization, because a single publication or a single piece of art can make a major contribution to the world, belying the creator's point on the Lotka Curve. But if these models mean what Csikszentmihalyi claims, culture and even narrower domains of discourse are driven forward by the work of only a few. I don't think that this will alarm too many people. It sounds just like the long tail and the power law that dominate discussion of the web and social networks these days.

    Finally Csikszentmihalyi got around to describing his own results. Foremost was this model of how creativity and ideas affect the world:

    The culture transmits information to people. Some people are happy to keep it at that, to absorb knowledge and use it in their lives. These folks accept the status quo.

    The creative person, though, has the idea that he can change the world. He produces a novelty and pushes it out for others to see. Only a small percentage of folks do this, but the number is large enough that society can't pay attention to all of the novelties produced.

    A field of discourse, such as an academic discipline or "the art world", selects some of the novelties as valuable and passes them on to the culture at large with a seal of approval. Thus the field acts as a gatekeeper. It consists of the critics and powerbrokers esteemed by the society.

    When there doesn't seem to be enough creativity for rapid change in a domain, the problem is rarely with the production of sufficient ideas but in the field's narrow channel for recognizing enough important novelties. I suppose that it could also come back to a field's inability to accurately evaluate what is good and what isn't. The art world seems to go through phases of this sort with some regularity. How about the sciences?

    Csikszentmihalyi has devoted much of his effort to identifying common characteristics in the personalities of creative individuals. The list of ten characteristics he shared with his audience had some predictable entries and some surprises to me:

    1. great energy, "bounce"
    2. convergent thinking
    3. playfulness, openness to new experience
    4. imagination, fantasy
    5. extroverted, sociable
    6. ambitious, proud, creative
    7. sensitive, feminine (not particularly aggressive)
    8. traditional, conservative (not particularly rebellious)
    9. attached, involved, passionate
    10. suffering, vulnerable, insecure (not particularly joyful, strong, or self-confident)

    I'm not surprised to see high energy, openness to new experience, ambition, passion, ... on the list. They are just what I expect in someone who changes the world. But Numbers 7, 8, and 10 seemed counterintuitive to me. But some reflection and further explanation made sense of them. For example, #7 is a simplification, on the notion that historically many of the biggest contributors of ideas and works of art to society have been men. And these creative men tend to have personality traits that society usually associates with women. This element works the other way, too. Major female contributors tend to have personality traits that society usually associates with men. So this element might more accurately be labeled outside society's gender expectations or somesuch.

    (And before anyone feels the need to rain flame down on me à la the recent Larry Summers fiasco, please note that I recognize fully the role that socialization and other social factors play in the documented history of male and female contributions to the world. I also know that it's just a generalization!)

    Convergent thinking and conservatism also seemed out of place on the list, but they make sense when I consider Csikszentmihalyi's systemic model of the flow of contributions. In order to affect the world, one must ordinarily have one's idea vetted by the field's powerbrokers. Rebelliousness isn't usually the best means to that end. The creative person has to balance rule breaking with rule following. And convergence of thought with ideas in the broader culture increases the likelihood of new ideas being noticed and finding a natural home in others' minds. Ideas that are too novel or too different from what is expected are easy to admire and dismiss as unworkable.

    This talk didn't deal all that much with Csikszentmihalyi's signature issue, flow, but he did close with a few remarks from folks he had studied. Creators seem to share a predilection to deep attention to their work and play. In such moments, the ordinary world drops beyond the scope of their awareness. He displayed a representative quote from poet Mark Strand:

    The idea is to be so ... so saturated with it that there's no future or past, it's just an extended present in which you're, uh, making meaning. And dismantling meaning, and remaking it.

    I'm guessing that every programmer hears that quote and smiles. We know the feeling--saturation, making meaning.

    Csikszentmihalyi closed his talk with a couple of short remarks. The most interesting was to refute the common perception that Creative people are prone to living dissolute lives. On the contrary, the majority of the folks he has studied and interviewed have been married for all of their adult lives, have families, and participate in their churches and in civic organizations. They have somehow come to balance "real life" with sustained episodes of flow. But, true to Personality Trait #10 on the list above, they all feel guilty about not having spent more time with their spouses and children!

    (At this point, I had to leave the talk just as the question-and-answer session was beginning. I had to pick my daughters up from play rehearsal. :-)

    This talk has led me to a few thoughts...

    • If the world is changed by a rather small number of contributors, where does that leave the rest of us? Certainly, we can be creative and improve our own lives as a result. But we can also improve the lives of those around us. And we shouldn't think that we will never have effects beyond our local world. Contributions of the small sort can sometimes lead to effects we never anticipated.

    • Csikszentmihalyi mentioned the significant role that luck plays in both the creative act and its entry into the culture. I liken this to making money in the stock market. Most of the growth in the market occurs on a few days of extreme growth; if you want to make money, you have to be in the market when those unpredictable days pass by. You can try to time the market, but the risk of being out of the market for a few days is much higher than the marginal increase you may achieve. Likewise, moments of inspiration sometimes happen when you least expect them. The key is to be active, engaged, and aware. If you are doing when inspiration comes, you will be able to capitalize. If you are dawdling or wasting time on the web, then you'll miss it.

      And, to the extent that we can cultivate an environment in which the creative moment can occur, we need to be active, engaged, and aware. That is a major component of the success that we see in highly productive people.

    • I believe that a person can cultivate the personality traits shared by creative people. Some are easier to practice than others, and what's toughest to cultivate differs from person to person. I'm sociable but not extroverted, so I have to work at engaging people more than some other folks do. And, while I think I'm relatively insecure compared to other academics, I don't naturally expose myself as vulnerable. But these are habits as much as anything, and they can be unlearned. Some folks may have gifts that give them a head start to being creative and productive, but we all can be, if we choose to work hard to change.

    • What about the agile software community? The agile approaches encourage "small" practices, the sort of small-c creativity that can improve the developer's life and the developer's company. But they can also improve the lives of clients. And I believe that the cumulative effect of simple practices can be a qualitative difference in how we behave -- from which big-C Creativity can emerge. If a person has the right mindset, they can sometimes make great leaps in understanding.

    • Many people have engaged in practices we now call agile for decades. But a select few decided to "go public", to try to change how software is built in the world. They sought to bring social attention to these ideas and to build a new community of discourse. The folks who started the Agile Alliance, the folks who have tried to take these ideas into the testing community... they are engaging in big-C Creativity, even as it emerged from lots of little acts -- and even if they hadn't planned to.

    • I suspect some of the leaders of the agile movement did start out with the intention of changing the world. Guys like Kent Beck, Ward Cunningham, and Alistair Cockburn have a vision that they want to bring to the community as a whole. I admire them for their ambition and energy. I also admire that they are willing to learn in the public and adapt. For example, Kent learned a lot in writing and promoting XP Explained, and now he has written a second edition that embodies what he has learned -- and he tells readers upfront that it's different, and that he has learned.

    With so many great books to read, now that I have seen Csikszentmihalyi speak, I doubt that I'll read Flow any time soon. But I think its ideas will continue to percolate in my mind.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    February 26, 2005 5:39 PM

    Resolved: "Object Early" Has Failed

    The focal event of Day 3 at SIGCSE is the 8:30 AM special session, a panel titled Resolved: "Object Early" Has Failed. It captures a sense of unrest rippling through the SIGCSE community, unrest at the use of Java in the first year, unrest at the use of OOP in the first year, unrest at changes that have occurred in CS education over the last decade or more. Much of this unrest confounds unhappiness with Java and unhappiness with OOP, but these are real feelings felt by good instructors. But there are almost as many people uneasy that, after five or more years, so few intro courses seem to really do OOP, or at least do it well. These folks are beginning to wonder whether objects will ever take a place at the center of CS education, or if we are destined to teach Pascal programming forever in the language du jour. This debate gives people a chance to hear some bright folks on both sides of the issue share their thoughts and answer questions from one another and the audience.

    The panel is structured as a light debate among four distinguished CS educators:

    • Stuart "17 years after writing my Pascal book, I'm teaching the same stuff in Java... Now that's fun" Reges
    • Eliot "'Objects first' failed? Not according to Addison Wesley" Koffman
    • Michael Kölling, of BlueJ fame
    • Kim Bruce, whom you've read about here earlier

    Our fearless leader is Owen "Moderator with a Law" Astrachan. The fact that the best humor in the introductions was aimed at the pro-guys may reveal Owen's bias, or maybe Stuart and Eliot are just easier targets...

    Eliot opened by defining "objects early" as a course that focuses on using and writing classes before turning to control structures and algorithm development. He made the analogy to new math: students learn "concepts" but can't do basic operations well. Most of Eliot's comments focused on faculty's inability or unwillingness to change and on the unsuitability of the object-oriented approach for the programs students write at the CS 1 level. The remark about faculty reminded me of a comment Owen made several years ago in a workshop, perhaps in an Educators Symposium at OOPSLA, that our problem here is no longer legacy software but legacy faculty.

    Michael followed quite humorously with a send-up of this debate as really many debates: objects early, polymorphism early, interfaces early, GUIs early, events early, concurrency early... If we teach all of these in Week 1, then teaching CS 1 will be quite nice; Week 2 is the course wrap-up lecture! The question really is, "What comes last?" Michael tells us that objects haven't failed us; we have failed objects. Most of us aren't doing objects yet! And we should start. Michael closed with a cute little Powerpoint demo showing a procedurally oriented instructor teaching objects not by moving toward objects, but by reaching for objects. When you reach too far without moving, you fall down. No surprise there!

    Stuart returned the debate to the pro side. He sounded like someone who had broken free of a cult. He said that, once, he was a true believer. He drank the kool-aid and developed a CS 1 course in which students discussed objects on Day 2. He even presented a popular paper on the topic at SIGCSE 2000, Conservatively Radical Java in CS1. But over time he found that, while his good students succeeded in his new course, the middle tier of students struggled with the "object concept". He is willing to entertain the idea that the problem isn't strictly with objects-first but with the overhead of OOP in Java, but pragmatic forces and a herd mentality make Java the language of choice for most schools these days, so we need an approach that works in Java. Stuart lamented that his students weren't getting practice at decomposing problems into parts or implementing complete programs on their own. Students seem to derive great pleasure in writing a complete, if small, program to solve a problem. This works in the procedural style, where a 50- to 100-line program can do something. Stuart asserted that this doesn't work with an OO style, at least in Java. Students have to hook their programs in with a large set of classes, but that necessitates programming to a fixed API. The result just isn't the same kind of practice students used to get when we taught procedural programming in, um, Pascal. Stuart likens this to learning to paint versus learning to paint-by-the-numbers. OOP is, to Stuart, paint-by-the-numbers -- and it is ultimately unsatisfying.

    Stuart's contribution to the panel's humorous content was to claim that the SIGCSE community was grieving the death of "objects early". Michael, Kim, and their brethren are in the first stage of grief, denial. Some objects-early folks are already in the stage of anger, and they direct their anger at teachers of computer science, who obviously haven't worked hard enough to learn OO if they can't succeed at teaching it. Others have moved on to the stage of bargaining: if only we form a Java Task Force or construct the right environment or scaffolding, we can make this work. But Stuart likened such dickering with the devil to cosmetic surgery, the sort gone horribly wrong. When you have to do that much work to make the idea succeed, you aren't doing cosmetic surgery; you are putting your idea on life support. A few years ago, Stuart reached the fourth stage of grief, depression, in which he harbored thoughts that he was alone in his doubts, that perhaps he should just retire from the business. But, hurray!, Stuart finally reached the stage of acceptance. He decided to go back to the future, returning to the halcyon days of the 1980s, of simple examples and simple programming constructs, of control structures and data structures and algorithm design. At last, Stuart is free.

    Kim closed the position statement portion of the panel by admitting that it is hard work for instructors who are new to OO to learn the style, and for others to build graphics and event-driven libraries to support instruction. But the work is worth the effort. And we shouldn't fret about using "non-standard libraries", because that is how OO programming really works. Stuart followed up with a question: Graphics seems to be the killer app of OOP; name two other kinds of examples that we can use. Kim suggested that the key is not graphics themselves but the visualization and immediate feedback they afford, and pointed to BlueJ as an environment that provides these features for most any good object.

    In his closing statement for the con side, Michael offered a quote attributed to Planck:

    A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

    Stuart's closing statement for the pro side was more serious. It returned to the structured programming comparison. It was hard to make the switch to structured programming in CS education. Everyone was comfortable with BASIC; now they had to buy a Pascal compiler for their machines; a compiler might not exist for the school's machines; .... But the trajectory of change was different. It worked, in the sense that people got it and felt it was an improvement over the old way -- and it worked faster than the switch to OOP has worked. Maybe the grieving is premature. Perhaps objects-early hasn't failed yet -- but it hasn't succeeded yet, either. According to Stuart, that should worry us.

    The folks on the panel seemed to find common ground in the idea that objects-early has neither succeeded nor failed yet. They also seemed to agree that there are many reasonable ways to teach objects early. And most everyone seemed to agree that instructors should use a language that works best for the style of programming they teach. Maybe Java isn't that language.

    In the Q-n-A session that followed, Michael made an interesting observation: We are now living through the lasting damage of the decision by many schools to adopt C++ in CS 1 over a decade ago. When Java came along, it looked really good as a teaching language only because we were teaching C++ at the time. But now we see that it's not good enough. We need a language suitable for teaching, in this case, teaching OOP to beginners. (Kim reminded us of Michael's own Blue language, which he presented at the OOPSLA'97 workshop on Resources for Early Object Design Education.)

    I think that this comment shows an astute understanding of recent CS education history. Back when I first joined the CS faculty here, I supported the move from Pascal to C++ in CS 1. I remember some folks at SIGCSE arguing against C++ as too complex, too unsettled, to teach to freshmen. I didn't believe that at the time, and experience ultimately showed that it's really hard to teach C++ well in CS 1. But the real damage in everyone doing C++ early wasn't in the doing itself, because some folks succeeded, and the folks who didn't like the results could switch to something else. The real damage was in creating an atmosphere in which even a complex language such as Java looks good as a teaching language, an environment in which we seem compelled by external forces to teach an industrial language early in our curricula. Perhaps we slid down a slippery slope.

    My favorite question from the crowd came from Max Hailperin. He asked, "Which of procedural programming and OOP is more like the thinking we do in computer science when we aren't programming?" The implication is that the answer to this question may give us a reason for preferring one over the other for first-year CS, even make the effort needed to switch approaches a net win over the course of the CS curriculum. I loved this question and think it could be the basis of a fun and interesting panel in its own right. I suspect that, on Max's implicit criterion, the mathematical and theoretical sides of computing may make procedural programming the preferred option. Algorithms and theory don't seem to have objects in them in the same way that objects populate an OO program. But what about databases and graphics and AI and HCI? In these "applications" objects make for a compelling way to think about problems and solutions. I'll have to give this more thought.

    After the panel ended, Robert Duvall commented that the panel had taught him that the so-called killer examples workshops that have taken place at the last few OOPSLAs have failed. Not that the idea -- having instructors share their best examples for teaching various OO concepts -- is a bad one. But the implementation has tended to favor glitzy examples, complicated examples. What we need are simple examples that teach an important idea with the minimum overhead and minimum distraction to students. I'm certainly not criticizing these workshops or the folks who organize them, nor do I think Robert is; they are our friends. But the workshops have not yet had the effect that we had all hoped for them.

    Another thing that struck me about this panel was Stuart's relative calmness, almost seriousness. He seems at peace with his "back to the '80s" show, and treats this debate as almost not a joking matter any more. His demeanor says something about the importance of this issue to him.

    My feeling on all of this was best captured by a cool graphic that Owen posted sometime near the middle of the session:

    The second attempt to fly Langley's Aerodrome on December 8, 1903, also ended up in failure. After this attempt, Langley gave up his attempts to fly a heavier-than-air aircraft.

    (Thanks to the wonders of ubiquitous wireless, I was able in real time to find Owen's source at http://www.centennialofflight.gov/.)

    I don't begrudge folks like Stuart and Eliot finding their comfort point with objects later. They, and more importantly their students, are best served that way. But I hope that the Michael Köllings and Kim Bruces and Joe Bergins of the world continue to seek the magic of object-oriented flight for CS 1. It would be a shame to give up on December 8 with the solution just around the corner.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    February 25, 2005 3:43 PM

    Day 2 at SIGCSE: Another Keynote on Past and Present

    This morning's keynote was given by last year's recipient of SIGCSE's award for outstanding contributions to CS education, Mordechai (Moti) Ben-Ari. Dr. Ben-Ari was scheduled to speak last year but illness prevented him from going to Norfolk. His talk was titled "The Concorde Doesn't Fly Anymore" and touched on a theme related to yesterday's talk, though I'm not certain he realized it.

    Dr. Ben-Ari gave us a little history lesson, taking us back to the time of the US moon landing. He asserted that this was the most impressive achievement in the history of technology and reminded us that the project depended on some impressive computing -- a program that was delivered six months in advance of the mission, which used only 2K of writable memory.

    Then he asked the audience to date some important "firsts" in computing history, such as the year the first e-mail was sent. I guessed 1973, but he gave 1971 as the right answer. (Not too bad a guess, if I do say so myself.) In the end, all of the firsts dated to the period 1970-1975 -- just after that moon landing. So much innovation in a such a short time span. Ben-Ari wondered, how much truly new have we done since then? In true SIGCSE fashion, he made good fun of Java, a language cobbled out of ideas discovered and explored in the '60s and '70s, among them Simula, Smalltalk, Pascal, and even C (whose contribution was "cryptic syntax").

    The theme of the talk was "We in computing are not doing revolutionary stuff anymore, but that's okay." Engineering disciplines move beyond glitz as they mature. Valuable and essential engineering disciplines no longer break much new ground, but they have steady, sturdy curricula. He seemed to say that we in computing should accept that we have entered this stage of development and turn CS into a mature educational discipline.

    His chief example was mechanical engineering. He contrasted the volume of mathematics, science, and settled engineering theory and concepts required by the ME program at his university with the same requirements in the typical CS program. Seven math courses instead of four; five science courses instead of two; a dozen courses in engineering foundations instead of three or four in computing. Yet we in CS feel a need to appear "relevant", teaching new material and new concepts and new languages and new APIs. No one, he said, complains that mechanical engineering students learn about pulleys and inclined planes -- 300-year-old technology! -- in their early courses, but try to teach 30-year-old Pascal in a CS program and prepare for cries of outrage from students, parents, and industry.

    In this observation, he's right, of course. We taught Ada in our first-year courses for a few years in the late 1990s and faced a constant barrage of questions from parents and students as to why, and what good would it be to them and their offspring.

    But in the larger scheme of things, is he right? It seems that Dr. Ben-Ari harkens back to the good old days when we could teach simple little Pascal in our first course. He's not alone in this nostalgia here at SIGCSE... Pascal has a hold on the heartstrings of many CS educators. It was a landmark in the history of CS education, when a single language captured the zeitgeist and technology of computing all at once, in a simple package that appealed to instructors looking for something better and to students who could wrap their heads around the ideas inside of Pascal and the constructs that made up Pascal in their early courses.

    A large part of Pascal's success as a teaching language lay in how well it supplanted languages such as Fortran (too limited) and PL/I (too big and complex) in the academic niche. I think PL/I is a good language to remember in this context. To me, Java is the PL/I of the late 1990s and early 2000s: a language that aspires to do much, a language well-suited to a large class of programs that need to be written in industry today, and a language that is probably too big and complex to serve all our needs as the first language our students see in computer science.

    But that was just the point of Kim Bruce's talk yesterday. It is our job to build the right layer of abstraction at which to teach introductory CS, and Java makes a reasonable base for doing this. At OOPSLA last year, Alan Kay encouraged us to aspire to more, but I think that he was as disturbed by the nature of CS instruction as with Java itself. If we could build an eToys-like system on top of Java, then Alan would likely be quite happy. (He would probably still drop in a barb by asking, "But why would you want to do that when so many better choices exist?" :-)

    In the Java world, many projects -- BlueJ, Bruce's ObjectDraw, JPT, Karel J. Robot, and many others -- are aimed in this direction. They may or may not succeed, but each offers an approach to focusing on the essential ideas while hiding the details of the industrial-strength language underneath. And Ben-Ari might be happy that Karel J. Robot's pedagogical lineage traces back to 1981 and the Era of Pascal!

    As I was writing the last few paragraphs, Robert Duvall sat down and reminded me that we live in a different world than the one Pascal entered. Many of our incoming students arrive on campus with deep experience playing with and hacking Linux. Many arrive with experience building web sites and writing configuration scripts. Some even come in with experience contributing to open-source software projects. What sort of major should we offer such students? They may not know all that they need to know about computer science to jump to upper-division courses, but surely "Hello, World" in Pascal or C or Java is not what they need -- or want. And as much as we wax poetic about university and ideas, the world is more complicated than that. What students want matters, at least as it determines the desire they have to learn what we have to teach them.

    Ben-Ari addressed this point in a fashion, asserting that we spend a lot of time trying to make CS easy, but that we should be trying to make it harder for some students, so they will be prepared to be good scientists and engineers. Perhaps so, but if we construct our programs in this way we may find that we aren't the ones educating tomorrow's software developers. The computing world really is a complex mixture of ideas and external forces these days.

    I do quibble with one claim made in the talk, in the realm of history. Ben-Ari said, or at least implied, that the computing sciences and the Internet were less of a disruption to the world than the introduction of the telegraph. While I do believe that there is great value in remembering that we are not a one-of-a-kind occurrence in the history of technology -- as the breathless hype of the Internet boom screamed -- I think that we lack proper perspective for judging the impact of computing just yet. Certainly the telegraph changed the time scale of communication by orders of magnitude, a change that the Internet only accelerates. But computing affects so many elements of human life. And, as Alan Kay is fond of pointing out, its potential is far greater as a communication medium than we can even appreciate at this point in our history. That's why Kay reminds us of an even earlier technological revolution: the printing press. That is the level of change to which we in computing should aspire, fundamentally redefining how we talk about and do almost everything we do.

    Ben-Ari's thesis resembles Kay's in its call for simplicity, but it differs in many other ways. Are we yet a mature discipline? Depending on how we answer this question, the future of computer science -- and computer science education -- should take radically different paths.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    February 24, 2005 3:08 PM

    Day 1 at SIGCSE

    After a busy week or two at home, I am now on the road at SIGCSE'05, the primary conference on computer science education. On Saturday evening, I am co-leading with Joe Bergin The Polymorphism Challenge, a workshop in which participants will learn more about dynamic polymorphism by taking it to extremes on some common problems. But for the next few days I get to attend sessions and catch up with colleagues, in hopes of learning a few new tricks myself.

    This morning, I attended two sessions. The first was the keynote address by Kim Bruce, this year's recipient of SIGCSE's award for outstanding contributions to CS education. Kim has been involved in a number of CS education projects, and his keynote talk showed how his most recent project makes it possible to teach introductory computing topics such as assignment and recursion in a fundamentally different way. His main point: We should use abstraction to make complex ideas more concrete.

    Alan Kay said something similar in his OOPSLA talks last fall. As a medium, computer programs give us new ways to make complex ideas concrete and manipulable, ways that before were impractical or even impossible. We computer science teachers need to seek more effective ways to use our own discipline's power to bridge the gap between complex ideas and the learner's mind.

    In any event, Kim used two quotes about abstraction that I liked a lot. The first, I knew:

    Fools ignore complexity; pragmatists suffer it; experts avoid it; geniuses remove it. ... Simplicity does not precede complexity, but follows it.

    -- Alan Perlis

    The second was new but is surely well known to many of you:

    A good teacher knows the right lies to tell.

    Using abstractions requires telling a certain kind of lie -- and also creates the subproblem of figuring out how and when to tell the fuller truth.

    The second session was a report from the ACM Education Board's Java Task Force, which is designing a set of packages for the express purpose of using Java in CS 1. In a sense, these packages are yet another way to abstract away the complexities and idiosyncrasies of Java so that beginning students can see the essential truths of computing.

    There isn't much new in this work, and we can all quibble with some of the group's decisions, but the resulting product could be useful in unifying the software base used by different institutions to teach introductory CS -- if folks choose to use it. I have to admit, though, that several times during the presentation I thought to myself, "Well, of course. That's what we've been saying for years!" And once I even found myself saying, "That's just wrong." I'm sometimes surprised by how little effect mainstream and academic OOP have had on the CS education community. (Perhaps it's ironic that I'm writing this while listening in on a session about the importance of the history of computing for teaching computer science.)


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    February 18, 2005 2:20 PM

    Never Been Compared to a Barrel of Boiling Oil Before

    I hope that my Algorithms course doesn't have this kind of reputation among our students!

    Teaching Algorithms is a challenge different from my other courses, which tend to be oriented toward programming and languages. It requires a mixture of design and analysis, and the abstractness of the design combines with the theory in the analysis to put many students off. I try to counter this tendency by opening most class sessions with a game or puzzle that students can play. I hope that this will relax students, let them ease into the session, and help them make connections from abstract ideas to concrete experiences they had while playing.

    Some of my favorite puzzles and games require analysis that stretches even the best students. For example, in the last few sessions, we've taken a few minutes each day to consider a puzzle that David Ginat has called Election. This is my formulation:

    An election is held. For each office, we are given the number of candidates, c, and a text file containing a list of v votes. Each vote consists of the candidate's number on the ballot, from 1 to c. v and c are so large that we can't use naive, brute-force algorithms to process the input.

    Design an algorithm to determine if there is a candidate with a majority of the votes. Minimize the space consumed by the algorithm, while making as few passes through the list of votes as possible.

    We have whittled complexity down to O(log c) space and two passes through the votes. Next class period, we will see if we can do even better!
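    One standard technique that achieves these bounds is the Boyer-Moore majority vote algorithm: a first pass narrows the field to the only candidate who could possibly hold a majority, and a second pass verifies the count. Here is a sketch in Python -- my own illustration of that algorithm, not necessarily the solution we arrived at in class:

    ```python
    def majority_candidate(votes):
        """First pass: find the only possible majority candidate (Boyer-Moore)."""
        candidate, count = None, 0
        for v in votes:
            if count == 0:
                candidate, count = v, 1
            elif v == candidate:
                count += 1
            else:
                count -= 1
        return candidate

    def has_majority(votes):
        """Two passes over the votes; return the majority candidate or None."""
        candidate = majority_candidate(votes)
        if candidate is None:
            return None
        # Second pass: verify that the surviving candidate truly has a majority.
        total = tally = 0
        for v in votes:
            total += 1
            if v == candidate:
                tally += 1
        return candidate if tally * 2 > total else None
    ```

    The space cost is just a candidate number and a few counters -- O(log c) bits -- and the vote list is streamed twice, never stored.
    
    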

    I figure that students should have an opportunity to touch greatness regularly. How else will they become excited by the beauty and possibilities of an area that to many of them looks like "just math"?


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    February 11, 2005 4:17 PM

    Taking Projects to an Extreme

    One of my favorite TV shows back when I was a little guy in the 1970s was The Paper Chase. It was a show about a group of law students, well, being law students. It was based on a 1973 movie of the same name. It's not the sort of show that tends to hang around the TV schedule very long; it lasted only a year or two on network television. Even still, I fell in love with the quiet drama of university life while watching this show.

    Whether you remember the show or not, you may well know its most famous character, Professor Charles W. Kingsfield, Jr., played with panache by the eminent John Houseman. Professor Kingsfield was in many ways the archetype for the brilliant scholar and demanding yet effective university teacher that one so often sees on film. Growing up, I always hoped that I might encounter such a professor in my studies, certain that he would mold me into the great thinker I thought I should be. In all my years of study, I came across only one teacher who could remind me of Kingsfield: William Brown, an old IBM guy who was on the faculty at Ball State University until his retirement. Many professors demanded much of us, and some were brilliant scholars, but only Dr. Brown had the style that made you feel honor in the fear that you might not meet his standard. I had plenty of great professors at Ball State, but none like Dr. Brown.

    Why this reminiscence on a Friday afternoon 20 or 25 years later? I thought of John Houseman and a particular episode of The Paper Chase yesterday afternoon. The episode revolved around a particularly laborious assignment that Kingsfield had given his Contracts class. (Kingsfield made contract law seem like the most lively intellectual topic of all!) The assignment required students to answer one hundred questions about the law of contracts. Some of these questions were relatively routine, while others demanded research into hard-to-find articles and journals from way back. Of course, students worked together in study groups and so shared the load across four or five people, but even so the assignment was essentially impossible to complete in the time allotted.

    While sharing their despair, our protagonists -- Mr. Ha-a-a-rt and his study group -- stumbled upon a plan: why not share the load with other study groups, too? Soon, all the study groups were working out trades. They couldn't trade answers one for one, because some groups had worked on the hardest questions first, so the answers they offered were more valuable than those of a group that had cherry-picked the easy ones first. By the end of the episode, the groups had drawn up contracts to guide the exchange of information, executed the deals, and submitted a complete set of answers to Kingsfield.

    As the students submitted the assignments, some worried that they had violated the spirit of individual work expected of them in the classroom. But Hart realized that they had, in fact, fulfilled Kingsfield's plan for them. Only by negotiating contracts and exchanging information could they conceivably complete the assignment. In the process, they learned a lot about the law of contracts from the questions they answered, but they had learned even more about the law of contracts by living it. Kingsfield, the master, had done it again.

    So, why this episode? Yesterday I received an e-mail message from one of our grad students, who has read some of my recent entries about helping students to learn new practices. (The most recent strand of this conversation started with this entry.) He proposed:

    ... I wonder if you have considered approaches to teaching that might include assigning projects or problems that are darned-near unsolvable using the methods you see the students using that you wish to change? ... you can't really force anyone to open up or see change if they don't feel there is anything fundamentally wrong with what they are doing now.

    This is an important observation. A big part of how I try to design homework assignments involves finding problems that maximize the value of the skills students have learned in class. Not all assignments, of course; some assignments should offer students an opportunity to practice new ideas in a relatively stress-free context, and others should give them a chance to apply their new knowledge to problems that excite them and are methodology-neutral.

    But assigning problems where new practices really shine through is essential, too. Kingsfield's approach is a form of Extreme Exercise, where students can succeed only if they follow the program laid out in class.

    In university courses on programming and software development, this is a bigger challenge than in a law school. It is Conventional Wisdom that the best programmers are ten or more times more productive than their weakest colleagues. This disparity is perhaps nowhere wider than in the first-year CS classroom, where we have students everywhere along the continuum from extensive extracurricular experience and understanding to just-barely-hanging-on in a major that is more abstract and demanding than they had realized. Kingsfield's approach works better with a more homogeneous student body -- and in an academic environment where the mentality is "sink or swim", and students who fail to get it are encouraged to pursue another profession.

    I still like the idea behind the idea, though, and think I should try to find an exercise that makes, say, test-driven design and refactoring the only real way to proceed. I've certainly done a lot of thinking along these lines in trying to teach functional programming in my Programming Languages course!

    Practical matters of educational psychology confound even our best efforts, though. For example, my reader went on to say:

    If a teacher can come up with scenarios that actually ... require the benefits of one approach over the other, I suspect that all but the most stubborn would be quick to convert.

    In my years as a teacher, I've been repeatedly surprised by the fact that no, they won't. Some will change their majors. Others will cling to old ways at the cost of precious grade points, even in the face of repeated lack of success as a programmer. And, of course, those very best students will find a way to make it work using their existing skills, even at the cost of precious grade points. Really, though, why do we penalize students so much for succeeding in a way we didn't anticipate?

    With the image of Professor Kingsfield firmly planted in my mind, I will think more about how the right project could help students by making change the only reasonable option. I could do worse than to be loved for having the wisdom of Kingsfield.

    And, as always, thanks to readers for their e-mail comments. They are an invaluable part of this blog for me!


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    February 10, 2005 5:02 PM

    How's That Workin' For Ya?

    I recently saw a statement somewhere, maybe on the XP mailing list, to the effect that XP doesn't prevent people who will fail from failing; it helps people who will succeed to succeed more comfortably or more efficiently.

    I think that, in one sense, this is true. No software methodology can do magic. People who for whatever reason are unprepared to succeed are unlikely to succeed. And I believe that good developers can do even better when they work in an agile fashion.

    But in another sense I think that XP and the other agile methods can help developers get better at what they do. If a person makes a commitment to improve, then adopting XP's disciplines can help that person become a better developer.

    This all came to the front of my mind yesterday as I read a short essay over at 43 Folders on systems for improving oneself. Merlin's comments on self-improvement systems caused a light bulb to go off: XP is a self-help system! Consider...

    • Successful self-help systems help a person to become more self-aware and to use what they observe to effect a more favorable solution.

    • Successful systems encourage simplicity and communication.

    Agile software methods draw on a desire to get better by paying attention to what the process and code tell us and then feeding that back into the system -- using what we learn to change how we do what we do.

    Practices such as continuous unit testing provide the feedback. The rhythm of the test-code-refactor cycle accentuates the salience of feedback, making it possible for the developer to make small improvements to the program over and over and over again. The agile methods also encourage using feedback from the process to fine-tune the process to the needs of the team, the client, and the project.

    Improvement doesn't happen by magic. The practices support acquiring information and feeding it back into the code.

    A person is more likely to stick with a system if it is simple enough to perform regularly and encourages small corrections.

    Merlin proposes that all successful self-improvement systems embody...

    ... a few basic and seemingly immutable principles:
    • action almost always trumps inaction
    • planning is crucial; even if you don't follow a given plan
    • things are easier to do when you understand why you're doing them
    • your brain likes it when you make things as simple as possible

    That sure sounds like an agile approach to software development. How about this:

    ... the idea basically stays the same: listen critically, reflect honestly, and be circumspect about choosing the parts that comport with your needs, values, and personal history. Above all, remember that the secret code isn't hiding ... -- the secret is to watch your progress and just keep putting one foot in front of the other. Keep remembering to think, and stay focused on achieving modest improvements in whatever you want to change. Small changes stick.

    Any software developer who wants to get better could do much worse than to adopt this mindset.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    February 09, 2005 11:10 AM

    Some Wednesday Potluck

    I've been so busy that writing for the blog has taken a back seat lately. But I have run across plenty of cool quotes and links recently, and some are begging to be shared.

    Why Be a Scientist?

    Jules Henri Poincare said...

    The scientist does not study nature because it is useful. He studies it because he delights in it, and he delights in it because it is beautiful.

    ... as quoted by Arthur Evans and Charles Bellamy in their book An Inordinate Fondness for Beetles.

    How to Get Better

    When asked what advice he would give young musicians, Pat Metheny said:

    I have one kind of stock response that I use, which I feel is really good. And it's "always be the worst guy in every band you're in." If you're the best guy there, you need to be in a different band. And I think that works for almost everything that's out there as well.

    I remember when I first learned this lesson as a high school chessplayer. Hang out with the best players you can find, and learn from them.

    (I ran across this at Chris Morris's cLabs wiki, which has some interesting stuff on software development. I'll have to read more!)

    All Change is Stressful

    Perryn Fowler reminds us:

    All change is stressful - even if we know it's good change. ...

    Those of us who attempt to act as agents of change, whether within other organisations or within our own, could do well to remember it.

    Perryn writes in the context of introducing agile software methods into organizations, but every educator should keep this in mind, too. We are also agents of change.


    Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

    February 04, 2005 1:30 PM

    Honey Rather Than Vinegar

    While reading about the muddled state of Groovy, I ran across a blog post on the topic of trying to get students to adopt new practices. Greg Wilson writes on his blog Pyre:

    It's easy to make students jump through hoops in a course. What's hard is convincing them that jumping through those hoops after the course is over really will make their lives better. The best way I've found so far is to bring in experienced programmers who are doing exciting things, and have them say, "Comments, version control, test-driven development..."

    Earlier in the same entry, he suggests that XP succeeds not because of its particular practices, but rather...

    ... that what really matters is deciding that you want to be a better programmer. If you make a sincere commitment to that, then exactly how you get there is a detail.

    That's spot on with what I said in my last message. Learning happens when a person opens himself to change. That openness makes it possible for the learner to make the commitment to a new behavior. With that commitment, even small changes in practice can grow to large changes in capability. And I certainly concur with Greg's advice to bring in outsiders who are doing cool things. Some students reach a level of internal motivation in that way that they will never reach through being asked to change on their own.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    February 04, 2005 9:47 AM

    Learning is Change

    I realized yesterday that some of my students are approaching the course with an attitude of changing as little as possible: figure out how to do the assignments without becoming a different kind of programmer, or person. That makes learning a new set of practices, new habits, nearly impossible. They may not be doing it consciously, but I can see it in their behavior. That attitude has a place and time, but in the classroom -- where learning is the presumed goal -- it is an impediment.

    Lecturing on some dry course content and giving exams full of objective questions would be a lot easier...


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    February 03, 2005 4:01 PM

    Small Differences Can Have a Large Effect

    The temperature here has risen to unseasonably high levels the last week or so. That means that I am able to run outdoors again. And I love it -- fresh air and open space are where the great running is. I live where the American Midwest meets its Great Plains, so by most folks' standard the land here is flat. But I live near a river, and we do have gently rising and falling terrain, which makes every run more interesting and more challenging than any track can offer.

    One thing I notice whenever I am able to break away from track running is an increase in the variability of my pace. When I run on a short indoor track, I usually find myself running relatively steady lap times, drawn into a rhythm by the short, repetitive environment.

    Another thing I notice is that I tend to run faster than I'd like, even on days I'd rather take it easy. One good result of this is that I get faster, but the negative side effect is that I end up more tired all week long. That affects the other parts of my life, like my teaching and my time with my family.

    You might think that a couple of seconds per lap -- say, 52 second laps instead of 54 -- wouldn't make that much difference. That's less than 4%, right? But a small difference in effort can have a big effect on the body. That small difference compounds at every lap, much like interest in a bank account. What feels comfortable in the moment can be less so far after the fact, when that compounded difference makes itself apparent. There can be value in such stresses ("the only way to get faster is to run faster"), but there are also risks: a depressed immune system, increased susceptibility to injury, and the tiredness I mentioned earlier.

    Most runners learn early to respect small changes and to use them wisely. They learn to mix relatively easy runs and even off days in with their harder runs as a way to protect the body from overuse. Folks who train for a marathon are usually told never to increase their weekly mileage by more than 10% in a given week, and to drop back every second or third week in order to let the body adjust to endurance stress.

    At first, the 10% Rule seems like an inordinate restriction. "At this rate, it will take me forever to get ready for the marathon!" Well, not forever, but it will take a while. Most people don't have any real choice, though. The human body isn't tuned to accepting huge changes in endurance very quickly.

    But there is hope in the bank account analogy above. You may have heard of the Rule of 72, an old heuristic from accounting that tells us roughly how quickly a balance can double. If a bank account draws 5% interest a year, then the balance will double in roughly 72/5 ≈ 14 years. At 10% interest, it will double in about seven. This is only a heuristic, but the estimates are pretty close to the real numbers.

    Applied to our running, the Rule of 72 reminds us that if we increase our mileage 10% a week, then we can double our mileage in only seven weeks! Throw in a couple of adjustment weeks, and still we can double in 10 weeks or less. And that's at a safe rate of increase that will feel comfortable to most people and protect their bodies from undue risks at the same time. Think about it: Even if you can only jog three miles at a time right now, you could be ready to finish a marathon in roughly 30 weeks! (Most training plans for beginners can get you there faster, so this is really just an upper bound...)
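    The arithmetic behind the heuristic is easy to check for yourself. Here is a quick Java sketch of my own (throwaway code, nothing more) that compares the Rule of 72's estimate with the exact doubling time, which is log 2 divided by log(1 + r/100):

```java
public class RuleOf72 {
    public static void main(String[] args) {
        // Compare the Rule of 72 estimate with the exact doubling time
        // for a quantity growing at r percent per period.
        double[] rates = { 5.0, 10.0 };
        for (double r : rates) {
            double estimate = 72.0 / r;
            double exact = Math.log(2.0) / Math.log(1.0 + r / 100.0);
            System.out.printf("%.0f%%: estimate %.1f periods, exact %.2f%n",
                              r, estimate, exact);
        }
    }
}
```

    At 5% the rule says 14.4 periods against an exact value of about 14.21; at 10% it says 7.2 against about 7.27. Close enough for planning a training schedule.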

    What does this all have to do with software development? Well, I have been thinking about how to encourage students, especially those in my first-year course, to adopt new habits, such as test-driven design and refactoring. I had hoped that, by introducing these ideas early in their curriculum, they wouldn't be too set in their ways, with old habits too deeply ingrained. But even as second-semester programmers, many of them seem deeply wedded to how they program now. Of course, programmers and students are people, too, so they bring with them cognitive habits from other courses and other subjects, and these habits interact with new habits we'd like them to learn. (Think deeply about the problem. Write the entire program from scratch. Type it in. Compile it. Run it. Submit it. Repeat.)

    How can I help them adopt new practices? The XP mailing list discusses this problem all the time, with occasional new ideas and frequent reminders that people don't change easily. Keith Ray recently posted a short essay with links to some experience reports on incremental versus wholesale adoption of XP. I've been focusing on incremental change for the most part, due to the limits of my control over students' motivation and behavior.

    The 10% Rule is an incremental strategy. The Rule of 72 shows that such small changes can add up to large effects quickly.

    If students spend 10 minutes refactoring on the first day and then add 10% each subsequent day, they can double their refactoring time in a week! Pretty soon, refactoring will feel natural, a part of the test-code-refactor rhythm, and they won't need to watch the clock any more.

    I'm not sure how to use this approach with testing. So far, I've just started with small exercises and made them a bit larger as time passed, so that the number of tests needed has grown slowly. But I know that many still write their tests after they think they are done with the assignment. I shouldn't complain -- at least they have tests now, whereas before they had none. And the tests support refactoring. But I'd like to help them see the value in writing the tests sooner, even first.

    Together, the 10% Rule and the Rule of 72 can result in big gains when the developer commits to a new practice in a disciplined way. Without commitment, change may well never happen. A runner who doesn't run enough miles, steadily increasing stamina and strength, isn't likely to make it to the level of a marathon. That discipline is essential. The 10% Rule offers a safe and steady path forward, counting on the Rule of 72 to accumulate effects more quickly than you might realize.


    Posted by Eugene Wallingford | Permalink | Categories: Running, Software Development, Teaching and Learning

    February 01, 2005 5:03 PM

    A Touch of Greatness

    Recently I wrote about teacher Al Cullum, motivated by Rich Pattis's recommendation of the recent PBS movie about his teaching career and ideas. I finally finished Cullum's Push Back the Desks, the 1967 Citation Press book that introduced his teaching style to a wide audience. It's a light and easy read, but I kept slowing down to jot down the ideas that Cullum's stories sparked in my mind.

    The Techniques

    The book is organized into chapters, each presenting one of Cullum's exercises for "pushing back the desks" and creating an active world of learning for his elementary school students. To help folks in all subject areas, he describes how his techniques can be applied across the curriculum. Some are not too surprising, such as writing a newspaper about a topic or putting on class plays. These are standard techniques in schools today.

    I recognized a couple of Cullum's techniques as pedagogical patterns sometimes used in college CS classrooms:

    • Arithmetic Digs, which resemble exercises in which an instructor passes out a program and has the class try to find answers to specific questions about it or errors in the code.

    • Book Blab, which resemble code walkthroughs in which the class or groups of students read and discuss a program. Of course, a Book Blab about a novel might take more than a single class period, but then again if the program you were discussing were QMail, so might your code walkthrough!

    Other of Cullum's techniques sounded new to me. For example, he had his students choose a U.S. president, write his inauguration speech based on what he actually did as president, and deliver the speech to the class, which acted as the Congress by voting thumbs up or thumbs down on the president's agenda. He also described Geography Launchings, the Poetry Pot, the Longfellow Lab, and the Newspaper Quiz (interesting for being a race mostly against oneself).

    The technique I am most likely to use this semester is the Pulitzer Prize. At the beginning of the school year, Cullum announced that students could enter one of their written works in a variety of genres (essay, poem, and so on) into an award competition à la the Pulitzer Prizes. Participation was fully voluntary, and students could enter any work they wished, even one they wrote outside of class. He would work with students to help them improve their work throughout the year, to the degree they requested. At the end of the year, Cullum selected zero or more winners in each genre, based on the number and quality of the entrants. This might make a nice way to encourage students in a programming class to go beyond an assignment and its time constraints to craft a fine program. The genres could be things like Assigned Program, Freelance Program, Test Class, GUI Program, Text-based Program and so on.

    Another neat idea for a CS course is the Renoir Room, in which students found, studied, and discussed the works of the famous painter. How about a CS course built around the great works of one of our great artists: classical artists such as Knuth or McCarthy, or even postmodern artists such as Larry Wall?

    The real joy in this book is not the techniques themselves but rather the spirit Cullum brings to his classroom.

    The Philosophy

    The fulcrum idea in Cullum's philosophy is a touch of greatness. Teach by exposing students to greatness, letting them respond to it, and then learn out of their own motivation to live in the presence of greatness. And Cullum doesn't mean "artificial greatness" created "at grade level" for students. He means Shakespeare, Longfellow, Chaucer, Dickinson, and even Big Ideas from math and science. (Do you note a recurring theme?)

    Here are some of the quotes that I wanted to remember as inspiration. The emphasis is mine.

    I have found that children are interested in two things -- doing and doing now. Children are looking for the unexpected, not the safe; children are looking for noise and laughter, not quiet; children are looking for the heroic, not the common. [15]

    Sadly, our K-12 educational system tends to beat this energy out of students long before they get to us. But I don't think the fire has been extinguished, only masked. With some encouragement, and evidence that the instructor is serious about having real fun and learning in the classroom, most of my students seem to open up nicely. I've been most successful doing this in my algorithms course and my now-defunct user interface/professional responsibility course.

    When I first began teaching, there was Al Cullum the teacher and Al Cullum the person. I soon discovered that this split personality was not a healthy one for the children or for me. I realized I had better bury Al Cullum the teacher and present Al Cullum the person, or else the school year would become monotonous months of trivia. I began to share with my students my moments of joy, my moments of love, my moments of scholarship, and even my uncertainties. [19]

    That last sentence reminds me of a quote I read on someone else's blog page, about how the honest teacher presents his students with an honest picture of him or herself, as a scholar who doesn't know much but who searches for understanding. In my experience, students respond to this sort of honesty with openness and honesty of their own. Exposing the "real person" to students requires a certain kind of confidence in a teacher, but I think that such confidence is a habit that can be developed with practice. Don't wait to become confident. Be confident.

    Many times as a beginning teacher I used to say to my classes:
       "I insist you write complete sentences!"
       ...
       "How can you possibly forget to put in the period?"
       ...
       "This composition is too short!"
       "This composition is too long!"
       ... One day I heard the echoes of my admonitions, and in an embarrassed fashion I asked myself, "What have you written lately?"

    Programming teachers must write programs. It gives them the context within which to teach, and to learn with their students. Otherwise, instruction becomes nothing but surface information from the textbook du jour, with no reality for students. Plus, how can we stay alive as teachers of programming if we don't write programs?

    Writing our own programs also reminds us what's hard for our students, so that we can better help them overcome the real obstacles they face rather than the artificial ones we create in our minds and in our theories.

    Good schools introduce students to as many new worlds as possible. [55]

    A love of reading is developed through students and teachers sharing what they have read. [72]

    We can help our students develop a love of reading programs in the same way. I think that a love of writing programs grows in a similar fashion.

    Once a teacher loses the feeling of doing something for the first time, it is time for the teacher to change grades, schools, or professions.

    I've always asked to teach different courses every few semesters, for just this reason. If I teach any course for too long, it becomes stale, because I become stale. When I must, as is the case with our introductory OOP course, then I have to throw myself a change-up occasionally: change language, or program themes, or examples, or development style. Almost anything to keep the course fresh. (This semester, I have yet to reuse a line from my voluminous OOP course notes. I feel more alive (and on edge!) each day than I've felt in a long while.)

    I haven't heard of a student who died from being challenged too much, but I've heard of many who wasted away from boredom. [84]

    This may be true in spirit, but students can be challenged too much. They need to have enough background, both in content and style, before they can rise to meet challenging problems. Hitting them too hard too soon is a recipe for revolt or, worse, desertion. (Those are especially unattractive outcomes at a time of falling enrollments nationwide.)

    There are two aspects of every classroom -- the students and the teacher! Both need to be touched by greatness. Students seek the mystery and magic of school that was there the first day they entered the hallowed halls. Give them the magic again ... [84]

    Be yourself. Dare to experiment. Be touched by greatness yourself. Live in the creative act, too.

    Many eyes were moist; I knew mine were partly because of Longfellow's poetic gifts and partly because of one hundred and one students who had confirmed my opinion that they had the ability and nobility to accept a touch of greatness. The silence at the end of the poem ... told me that school life need not be routine, dull, or one long series of learning basic skills. We teachers must reach the hearts of children before they are impressed with our basic skills. [150]

    Wow. I've known that feeling rarely, but it is magical. An algorithms session when a game or puzzle opens students' minds and hearts and they sense a greatness in the solution. An OOP session when a group of students clicks with, say, the Decorator pattern. Or when a programming languages class clicks with the idea of higher-order functions. A rare and beautiful feeling.

    I have a chance to reach this elusive state only when I create a suitable environment, in which a great idea is front and center and the student is motivated and challenged. A lot of it is luck, but as they say, luck favors the prepared.

    [M]essy classrooms are perfectly natural if something is happening in that room. If dreams of greatness are to be fulfilled ..., I don't see how teachers can avoid having messy classrooms. Sometimes a neat classroom is a bore. [187]

    By this Cullum means the elementary classroom: paper scraps, glue, paint, easels, costumes, cloth scraps. But the college classroom should be messy, too, at least for a while: an intellectual mess, as ideas are being formed, and re-formed, extended and applied. A class gang-writing a program à la XP will be "messy" for a few days, until the product takes shape and the program is refactored and the code makes us find and use a big idea (say, an interface) that resolves the mess.

    In Conclusion

    While reading this book, my wife (a former elementary school teacher) and I both commented that this book poses a serious challenge for our elementary schools as they are right now. This approach requires real knowledge and love of the subject area. Someone teaching science only because the school needs a 5-6 science teacher will have a hard time sharing an abiding love for science and the scientific method. This approach also requires great confidence in one's knowledge and teaching skills.

    But great teachers exist. We've all had them. And someone who sincerely wants to have a great classroom can develop the right habits for getting better.

    Cullum's philosophy can be just as tough to apply in a college classroom. A lecture section of 200 students. Picking up a new class in a new area, one that extends an instructor beyond the core of his or her expertise, because the old instructor retired or moved. But it can work. I eagerly sought out our programming languages courses and our algorithms course as a means for me to "go deep" and cultivate my love and knowledge of these areas. I could never have taught them if I hadn't wanted to touch their greatness, because I would have bored myself -- and my students -- to death, and killed everyone's spirit in the process. By no means am I a great teacher in these areas yet, but I do think I'm on the right path.

    A touch of greatness. I think that's what conferences like OOPSLA do most for me: let me re-connect with the greatness of what I do and think about. Now, how can I make that feeling available to my students...


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    January 28, 2005 4:43 PM

    Why Didn't I Know This Already?

    The little things I have left to learn... Twice today I encountered something about Java that I didn't know.

    First: a colleague had a piece of code with three variables typed as bytes:

    piecesRemaining = piecesAtStart - piecesTaken;

    The compiler complained about a loss of precision. When he cast the righthand side of the expression to byte, everything worked fine:

    piecesRemaining = (byte) (piecesAtStart - piecesTaken);

    So off I trudge to the Java Language Reference, where on Page 27 I find:

    Java performs all integer arithmetic using int or long operations. A value that is of type byte, short, or char is widened to an int or a long before the arithmetic operation is performed.

    So the values on the RHS are widened to int for the subtraction, and assigning the resulting int back to a byte requires an explicit downcast. But why does Java work this way? Any ideas?
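    Here is a toy class of my own (the values are invented) that shows the rule in action:

```java
public class BytePromotion {
    public static void main(String[] args) {
        byte piecesAtStart = 32, piecesTaken = 12;

        // byte bad = piecesAtStart - piecesTaken;
        // won't compile: both operands are widened to int,
        // so the subtraction yields an int, not a byte

        byte r = (byte) (piecesAtStart - piecesTaken);  // explicit downcast
        int wide = piecesAtStart - piecesTaken;         // or just keep the int

        System.out.println(r + " " + wide);  // prints "20 20"
    }
}
```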

    Second: I'm teaching an undergraduate algorithms course this year for the first time in a long while, and I often find myself whipping up a quick program in Scheme or Java to demonstrate some algorithms we've designed in class. This afternoon I was implementing a brute-force algorithm for the old Gas Station Problem:

    A cyclic road contains n gas stations placed at various points along the route. Each station has some number of gallons of gas available. Some stations have more gas than necessary to get to the next station, but other stations do not have enough gas to get to the next station. However, the total amount of gas at the n stations is exactly enough to carry a car around the route exactly once.

    Your task: Find a station at which a driver can begin with an empty tank and drive all the way around the road without ever running out of gas.

    The input to your algorithm is a list of n integers, one for each station. The ith integer indicates the number of miles a car can travel on the gas available at the ith station.

    For simplicity, let's assume that stations are 10 miles apart. This means that the sum of the n integers will be exactly 10n.

    The output of your algorithm should be the index of the station at which the driver should begin.

    Our algorithm looked like this:

        for i ← 0 to n-1 do
            tank ← 0
            for j ← i to i + n - 1 do
                tank ← tank + gas[j mod n] - 10
                if tank < 0 then next i
            return i
    

    How to do that next i thing? I don't know. So off again I go to the language reference to learn about labels and the continue statement. Now, to be honest, I knew about both of these, but I had never used them together or even thought to do so. But it works:

        outer:
        for ( int i = 0; i < gas.length; i++ )
        {
            tank = 0;
            for ( int j = i; j < i + gas.length; j++ )
            {
                tank = tank + gas[j % gas.length] - 10;
                if ( tank < 0 )
                    continue outer;
            }
            return i;
        }
    

    Some days I have major epiphanies, but more often I learn some little nugget that fills an unexpected hole in the web of my knowledge. The Java gurus among you are probably thinking, "He really didn't know that? Sheesh..."

    Neither one of these Java nuggets is earth-shaking, just the sort of details I only learn when I need to use them in a program for the first time. I suppose I could study the Java Language Reference in my free time, but I don't have the sort of job for which that sounds attractive. (Is there such a job?)


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    January 28, 2005 9:49 AM

    Programming as Performance Art

    I've been following the development of Laurent Bossavit's software development dojo. He wrote about the idea in his blog a few times last year and then created a separate blog for it this month at http://bossavit.com/dojo/.

    The most recent article there speculates about programming as a performance art. It seems that the guys in the dojo have been presenting their code to one another and doing some of the programming live in front of the group. The experience has caused the author to re-think his artistic sense of programming:

    Had I to choose earlier an art form which I felt connected to the act of programming, I would have picked book writing without hesitation. Something static, written at some time, read at another. Few dojo sessions later, I am not so positive anymore. I speculate the act of programming is also about the here and the now: how you currently live through the problem up to a satisfying conclusion, and how I feel engaged, watching your sharing techniques and insights. No cathartic experience so far -- hold your horses, this is still embryonic stage -- although this could become a personal quest.

    I feel a kinship here after the first three weeks of my CS II class, in which my students and I have been "gang programming" Ron Jeffries's bowling game using TDD. I've spent very little time expounding ex cathedra; most of my remarks have been made in the flow of programming, with only an occasional impromptu mini-lecture of a few minutes. If you know me, you probably don't think of me as a natural performer, but I've felt myself slipping into the gentle drama of programming. New requirements puzzle us, then worry us. A test fails. Programming ideas compete for attention. Code grows and shrinks. Green bar. Relief.

    I hope that most of the students in the class are getting a sense of what it's like to make software. I also hope that they are enjoying the low drama of the course.

    My recent experience has brought to mind my favorite article on this topic, Bill Wake's Programming as Performance Art. He tells of seeing a particularly stirring performance by Kent Beck and Jim Newkirk:

    They tackled the chestnut "Sieve of Eratosthenes." The 4-hand arrangement worked well, and let us see not only "programmer against complexity" but also "programmer against self" (when they started to make mistakes) and "programmer against programmer" (when they resolved their differences). The audience appreciated the show, but we should have applauded more enthusiastically.

    I don't know what I'd do if my students ever applauded. But even if they never do, I like the feel of our live programming sessions. We'll see how effective they are as learning episodes as the semester progresses.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    January 24, 2005 10:01 AM

    I Go To Extremes

    Brian Button recently posted a long blog entry on his extreme refactoring of the video store class from Martin Fowler's Refactoring. Brian "turned [his] refactoring knob up to about 12" to see what would happen. The result was extreme, indeed: no duplication, even in loops, and methods of one line only. He concluded, "I'm not sure that I would ever go this far in real life, but it is nice to know that I could."

    Turning the knobs up in this way is a great way to learn something new or to plumb for deeper understanding of something you already know. A few months ago, I wrote of a similar approach to deepening one's understanding of polymorphism and dynamic dispatch via a simple etude: Write a particular program with a budget of n if-statements or less, for some small value of n. As n approaches 0, most programmers -- especially folks for whom procedural thinking is still the first line of attack -- find this exercise increasingly challenging.

    Back at ChiliPLoP 2003, we had a neat discussion along these lines. We were working with some ideas from Karel the Robot, and someone suggested that certain if-statements are next to impossible to get rid of. For example, how can we avoid this if?

         public class RedFollowingRobot extends Robot
         {
             public void move()
             {
                 if (nextSquare().isRed())
                     super.move();
             }
         }

         public class Square { public boolean isRed() { ... } }

    But it's possible... Joe Bergin suggested a solution using the Strategy pattern, and I offered a solution using simple dynamic dispatch. You can see our solutions at this wiki page. Like Brian said in his article, I'm not sure that I would ever go this far in real life, but it is nice to know that I could. This is the essence of the powerful Three Bears pattern: Often, in taking an idea too far, we learn best the bounds of its application.
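    To give the flavor of the dynamic-dispatch approach, here is a self-contained sketch of my own (the names are mine, not the ones on the wiki page): each kind of square decides for itself what a moving robot does, so the if-statement disappears into method dispatch.

```java
abstract class Square {
    abstract void admit(Robot r);   // a square decides what a moving robot does
}

class RedSquare extends Square {
    void admit(Robot r) { r.advance(); }   // red squares let the robot through
}

class PlainSquare extends Square {
    void admit(Robot r) { }                // other squares stop the robot
}

class Robot {
    private final Square[] path;
    private int position = 0;

    Robot(Square[] path) { this.path = path; }

    int position() { return position; }
    void advance()  { position++; }

    // No if-statement: the class of the next square determines the behavior.
    void move() { path[position + 1].admit(this); }
}

public class NoIfDemo {
    public static void main(String[] args) {
        Robot r = new Robot(new Square[] {
            new PlainSquare(), new RedSquare(), new PlainSquare() });
        r.move();   // next square is red: the robot advances to position 1
        r.move();   // next square is plain: the robot stays put
        System.out.println(r.position());  // prints 1
    }
}
```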

    Joe and I like this idea so much that we are teaching a workshop based on it, called The Polymorphism Challenge (see #33 there) at SIGCSE next month. We hope that spending a few hours doing "extreme polymorphism" will help some CS educators solidify their OO thinking skills and get away from the procedural way of thinking that many of them bring to every computational problem. If you have any procedural code that you think simply must use a bunch of if-statements, send it to me, and Joe and I will see if it makes a good exercise for our workshop!


    Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

    January 21, 2005 1:09 PM

    Get Ready for OOPSLA 2005

    Back in October, I signed on for a second year as program chair for the OOPSLA Educators Symposium. Last year's symposium went well, with Alan Kay's keynote setting an inspirational mood for the symposium and conference, and I welcomed the chance to help put together another useful symposium for OO educators. Besides, the chance to work with Ralph Johnson and Dick Gabriel on their vision for an evolving OOPSLA was too much to pass up.

    The calls for submissions to OOPSLA 2005 are available on-line now. All of the traditional options are available again, but what's really exciting are the new tracks: Essays, Lightning Talks, Software Studies, and FilmFest. Essays and Lightning Talks provide a new way to start serious conversations about programming, languages, systems, and applications, short of the academic research papers found in the usual technical program. I think the Software Studies track will open a new avenue in the empirical study of programs and their design. Check these CFPs out.

    Of course, I strongly encourage all of you folks who teach OOP to consider submitting a paper, panel proposal, or nifty assignment to the Educators Symposium. As much as software development, teaching well depends on folks sharing successful techniques, results of experiments, and patterns of instructional materials.

    This year, the symposium is honored to have Ward Cunningham as our keynote speaker. As reported here, Ward spoke at last year's OOPSLA. I love Ward's approach to programming, people, and tools. "OOPSLA is about programming on the edge, doing practical things better than most." "Be receptive to discovery." The symposium's program committee is looking forward to Ward sharing his perspective with educators.

    If you've been to OOPSLA before, you know how valuable this conference can be to your professional development. If not, ask someone who has been, or trust me and try it yourself. You won't be disappointed.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    January 18, 2005 5:40 AM

    The Geranium on the Window Sill Just Died...

    On his dinner conversation mailing list, Rich Pattis recently recommended the PBS movie A Touch of Greatness, about the teacher Albert Cullum. After attempting a career as a stage actor, Cullum began teaching in an elementary school in the late 1940s. He used poetry and drama as integral parts of his classroom, where children were encouraged to learn through productive work, not by listening to him. He believed that children have a natural affinity for great ideas and so introduced them to classic literature and real science immediately.

    I missed the PBS showing of the film and may not get around to seeing it any time soon, so I grabbed a couple of his books from the library. The book that explains his teaching approach best looks to be Push Back the Desks. I just started it last night. Over the weekend, though, I read his children's-book-for-adults, The Geranium on the Window Sill Just Died But Teacher Went Right On. I call it that because it is written just like a book for kindergarteners (one page of picture for each partial page of text, written with short sentences in a young student's voice) but is clearly aimed at adults. Indeed, the book is "dedicated to all those grownups who, as children, died in the arms of compulsory education".

    Cullum clearly thinks that education is something quite different from what usually goes on in our schoolrooms. (Does that sound familiar?) He has a point. Though I've certainly thought a lot about the shortcomings of compulsory education -- especially since my daughters reached school age -- I'd never thought about all the little ways in which school can dehumanize and demotivate the wonderful little minds that our children have.

    Two passages from the book really struck a chord with me. In the first, a small child asks the teacher if she likes his picture of a cherry tree. The student has colored the cherries red and the leaves green, going against the instructions. "But, teacher, don't you see my rainbow? Don't you see all the colors? Don't you see me?"

    The second comes from a student who has learned that he is not good at anything in school:

    I was good at everything
    --honest, everything!--
    until I started being here with you.
    I was good at laughing,
    playing dead,
    being king!
    Yeah, I was good at everything!
    But now I'm only good at everything
    on Saturdays and Sundays...

    What a deep feeling of resignation this child must feel to find that what matters to him doesn't matter, and that he isn't good at what matters. This is what we tell students when we place the focus on what they don't know rather than on what more is out there. When we tell them that their own identity is less important than our vision of who they should be.

    Don't get me wrong. I don't believe in the Noble Savage theory about children, even children as learners. I don't think that a child's self-esteem can be made the centerpiece of all education. The child who colored the cherry tree backwards needs to learn to follow instruction. To suggest that this isn't an important skill to learn in school is to undermine an essential role in learning and thinking. And I don't believe that the second child was good at everything before he came to school. He may not have been good at much. But he felt like he controlled his enjoyment of the universe, at least in ways that mattered to him. He needs to learn that there is more to life than just laughing, playing dead, and being king. There are even times and places when those things aren't very important.

    But he doesn't need to learn that those things don't matter. And he doesn't need to learn that individuality isn't important. He simply needs to learn that the world is even bigger than his wonderful little mind knows just yet, and that he can do more and better things in this big world. Education is about expanding the child's mind, not limiting it. It should build on who the child is, not who the teacher is. It should increase the wonder, not extinguish it.

    Being this sort of teacher requires that most of us unlearn a lot of habits learned by example. Fortunately, most of us have had at least one teacher who inspired us in the ways Cullum suggests. I can think of many such teachers throughout my many years of education (Mrs. Brand, Mrs. Bell, Mr. Zemelko, Mr. Rickett, Mr. Smith, Dr. McGrath, and Dr. Stockman, to name a few). Even so, other habits formed along the way -- and they die hard, if at all, and only after perseverance. This is one of those situations in which I have some idea of what the right thing to do is, even if I don't do it very well yet.


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    January 17, 2005 9:19 AM

    Csikszentmihalyi to Visit UNI

    I recently learned that Mihaly Csikszentmihalyi will visit my campus for several talks in early March. Csikszentmihalyi (whose name is pronounced 'me-HI chick-SENT-me-hi') is well known for his theory of flow and its role in creativity and learning. Long ago I read some of Csikszentmihalyi's work, though I've never read the bestseller that made his name, Flow: The Psychology of Optimal Experience. Maybe now is the time.

    Throughout his career, Csikszentmihalyi has studied the psychology of the creative individual: What characteristics are common to creative individuals? How do creative people create? His talks here, though, will focus on the context in which creative individuals thrive. His main talk will describe how to create the sort of environment that must exist in order for a creative individual to contribute most effectively. His other talks will focus on specific contexts: How can educators "design curricula that capitalize on the human need to experience flow?" How can work contribute to well-being, and how can managers and employees create the right sort of workplace?

    We agile folks often speak of "rhythm" and how TDD and refactoring can create a highly productive flow. And, while I've often heard critics say that XP limits the individuality of developers, I've always thought that agile methods can unleash creativity. I'm curious to hear what Csikszentmihalyi has to say about flow and how we can create environments that support and nurture it, both for software development and for learning. You may recall an earlier discussion here about flow in learning, inspired by the Suzuki method of instruction and by Alan Kay's talks at OOPSLA. I think Csikszentmihalyi can help me to understand these ideas better.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    January 14, 2005 4:07 PM

    Bowling for CS II

    I took a leap of faith this week.

    I teach our second course, CS II. It comes between an intro course in which students learn a little bit about Java and a little bit about objects, and a traditional data structures course using Ada95. The purpose of CS II is to give students a deeper grounding in Java and object-oriented programming.

    The first thing I did on our first day of class was to write code. The first line of code I wrote was a test. The next thing I did was try to compile it, which failed, so I wrote the code to make it compile. Then I ran the test in JUnit and watched it fail. Then we wrote the code to make it pass.

    And then we did it again, and again, and again.

    I've long emphasized various agile practices in my courses, even as early as CS II, especially short iterations and refactoring. In recent semesters I've encouraged them more and more. Soon after fall semester ended, I decided to cross the Rubicon, so to speak: let tests drive all of our code, and let students feel agile development in all class and lab work.

    Day 1 went well. We "gang-programmed", with students helping me to order requirements, write tests, find typos, and think through application logic. The students responded well to the unexpected overhead of JUnit. Coming out of the first course, they aren't used to non-java.* packages or other extra "stuff".

    In our second session, a lab class, students installed JUnit in their own accounts and wrote their first Java code of the semester: a JUnit test against the app we built in Day 1.

    Oh, the app? Ron Jeffries' BowlingGame etude. It had just enough complexity to challenge students but not enough to pull too much of their attention away from the strange and wonderful new way we were building the program.
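    To give a flavor of that first session, here is a minimal reconstruction of the opening micro-iteration. The names are my guesses, not our actual class code, and plain asserts stand in for JUnit so that the sketch is self-contained: write a test for the simplest game imaginable, twenty gutter balls, and then write just enough code to make it pass.

```java
// A sketch of the first test-then-code step from class.
// (We used JUnit in class; bare asserts keep this example self-contained.)
class BowlingGame {
    private int pins = 0;

    public void roll(int pinsDown) { pins += pinsDown; }

    // Summing rolls is "just enough" for a gutter game;
    // strikes and spares come in later iterations.
    public int score() { return pins; }
}

public class BowlingGameTest {
    public static void main(String[] args) {
        BowlingGame game = new BowlingGame();
        for (int roll = 0; roll < 20; roll++)
            game.roll(0);                          // twenty gutter balls
        assert game.score() == 0 : "a gutter game should score 0";
        System.out.println("gutter game scores " + game.score());
    }
}
```

    The point of starting this small is pedagogical as much as technical: the first red-green cycle fits on one screen, so students see the whole rhythm before any bowling arithmetic gets in the way.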

    Next time, we'll add a couple of requirements that lead us to factor out a Frame class, allowing us to see refactoring in action and how tests support changing our code.

    I'm excited even as I'm a little on edge. New things in the classroom do that to me.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    January 06, 2005 3:03 PM

    Looking Under the Hood to Be a Better Programmer

    Brian Marick just blogged on Joel Spolsky's Advice for Computer Science College Students. I had the beginnings of an entry on Joel's stuff sitting in my "to blog" folder, but was planning to put it off until another day. (Hey, this is the last Thursday I'll be able to go home early for sixteen weeks...) But now I feel like saying something.

    Brian's entry points out that Joel's advice reflects a change in the world of programming from back when we old fogies studied computer science in school: C now counts as close enough to the hardware to be where students should learn how to write efficient programs. (Brian has more to say. Read it.)

    I began to think again about Joel's advice when I read the article linked above, but it wasn't the first time I'd thought about it. In fact, I had a strong sense of deja vu. I looked back and found another article by Joel, on leaky abstractions, and then another, called Back to Basics. There is a long-running theme in Joel's articles: programmers must understand both the abstractions they deal in and how those abstractions are implemented. In some ways, his repeated references to C are mostly pragmatic; C is the lingua franca at the lowest level of software development, even -- Brian mentions -- for those of us who prefer to traffic in Ruby, Smalltalk, Perl or Python. But C isn't the key idea Joel is making. I think that this, from the leaky abstractions article, is [emphasis added]:

    The law of leaky abstractions means that whenever somebody comes up with a wizzy new ... tool that is supposed to make us all ever-so-efficient, you hear a lot of people saying "learn how to do it manually first, then use the wizzy tool to save time." [...] tools which pretend to abstract out something, like all abstractions, leak, and the only way to deal with the leaks competently is to learn about how the abstractions work and what they are abstracting. So the abstractions save us time working, but they don't save us time learning.

    And all this means that paradoxically, even as we have higher and higher level programming tools with better and better abstractions, becoming a proficient programmer is getting harder and harder.

    Programming has always been hard, but it gets harder when we move up a level in abstraction, because now we have to worry about the interactions between the levels. Joel's article argues that it's impossible to create an abstraction that doesn't leak. I'm not sure I'm willing to believe it's impossible just yet, but I do believe that it's close enough for us to act as if it is.

    That said, Brian's history lesson offers some hope that programming isn't getting harder at an increasing rate, because sometimes what counts as the lowest level rises. C compilers really are good enough these days that we don't have to learn assembly in order to appreciate what's happening at the machine level. Or do we?

    Oh, and I second Brian's recommendation to read Joel's Advice for Computer Science College Students. He may just be one of those old fogies "apt to say goofy, antediluvian things", but I think he's spot on with his seven suggestions (even though I think a few of his reasons are goofy). And that includes the one about microeconomics!


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    January 04, 2005 9:47 AM

    The Passions of Students and Teachers

    One of my goals this year is to not let paper bury me. So when the latest issue of Communications of the ACM hit my doorstep, I decided to deal with it in real-time. One article grabbed my attention for its protagonists: David Kestenbaum's "The Challenges of IDC: What Have We Learned From Our Past?", which is an excerpt of a panel at IDC 2004 that included Seymour Papert, Marvin Minsky, and Alan Kay.

    Someone in the audience asked these pioneers of interaction design for children to comment on how their ideas could be framed in terms of "people's everyday realities", because people understand things better when you express them in terms of everyday experiences. I think all three responses have something to teach us about teaching, even at the university level. (Emphasis added by me.)

    Papert:

    You learn things better when they are connected with things you are passionate about. Connecting math to things that you don't care about doesn't help even if they belong to everyday life. ... What counts to you is your mental world of interests, dreams, and fantasies, which are often very far removed from everyday life. The key educational task is to make connections between powerful ideas and passionate interests...

    Minsky:

    The most important thing in learning is copying how other people think. I don't think learning by doing really gets one to emulate how other people think. ... We need a cultural situation where every child has an adult friend whom they can emulate. And communicate their ways of thinking to the child. Do something that gets each child to spend a few hours a month with someone worth copying. What we do now is to take a six year old and send him in a room full of six year olds. The result is that every person grows up with the cognitive capability of a five year old.

    Kay:

    I completely agree. I go to a music camp in the summer. What you see there are people with different abilities playing in the presence of master players. The camp doesn't accept coaches that won't play in the concert. Imagine a fifth-grade teacher assigning a composition and actually writing one herself. Shocking! What teachers do is broadcast in every way possible that "I'm not interested in this at all because I don't do it." I think it's unthinkable to teach six year olds to be six year olds. You need to have these models. It's like grad school. You go there to find professors that are more like you'd like to be and try to understand the way they think.

    These answers embody two key ideas, one I'm conscious of most of the time and one that challenges me to do better:

    1. Students prefer to learn from teachers who do the thing they teach. A programming teacher who doesn't program will lose students' attention as soon as he reveals through attitude or action that he doesn't care enough about the stuff to actually write programs himself. Besides, a programming teacher who doesn't program will almost by necessity be teaching programming out of a book, which isn't where the interesting part of programming happens!

    2. Students learn from their passions, not from any old connection to the "real world". I suppose I know this, but it's easy for me as an instructor to make a connection to some real-world scenario -- even a good one -- only to find students nonplussed. Usually, I've made a connection to one of my passions. The key is to connect to *their* passions. But that's much harder! Not to mention the fact that each student's passion is unique to her life and situation. This makes all the more important what Alan Kay said at OOPSLA about the power of 650 examples.

    In at least one respect, getting better at teaching software development is no different from getting better at software development itself: It requires constant attention to practice and a willingness to change practice. And in the case of Alan's 650 examples, it requires teamwork.


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    January 03, 2005 11:56 AM

    Name It

    Happy New Year!

    I've run into the same idea twice in the last couple of weeks, in two different guises. I take that as a hint from the universe that I should be thinking about this as I prepare for the upcoming semester. The idea is: Name it.

    First, at a public event during Advent, a friend was talking about how to raise strong, self-aware children. She said, "You know when your children do something good. Name it for them." Children should know that what they are doing has a name. Names are powerful for thinking and for communication. When an act has a name, the act *means* something.

    Then I ran across a column by Ron Rolheiser that comes at the issue from the other side, where the feeling being named doesn't seem so good. The article is called Naming Our Restlessness and talks about how he felt after entering his religious order as a young man of 17. He was restless and wondered if that was a sign that he should be doing something else. But then someone told him that he was merely restless.

    His simple, honest naming of what we were feeling introduced us to ourselves. We were still restless, but now we felt better, normal, and healthy again. A symptom suffers less when it knows where it belongs.

    Often, people can better accept their condition when it has a name. Knowing that what they feel is normal, normal enough to have a name and be a part of the normal conversation, frees a person from the fear of not knowing what's wrong with them. Sometimes, there's nothing wrong with you! And even when there is, when the name tells us that something horrible is wrong, even then a name carries great power. Now we know what is wrong. If there is something we can do to fix the problem, we have to know what the problem is. And even when there is nothing we can do to fix the problem, yes, even then, a name can bring peace. "Now I know."

    What does this mean for teaching? Name it when students do something good. If they discover a pattern, tell them its name. When they adopt a practice that makes them better programmers, tell them the name of the practice. If they do something that isn't so good, or when they are uneasy about something in the course, name that, too, so that they can go about the business of figuring out what to do next. And help them figure it out!

    I think this is a sign for me to write some patterns this semester...


    Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

    December 27, 2004 11:19 AM

    Dispatches from the Programmer Liberation Front

    Owen Astrachan pointed me in the direction of Jonathan Edwards's new blog. I first encountered Edwards's work at OOPSLA 2004, where he gave a talk in the Onward! track on his example-driven programming tool.

    In his initial blog entries, Edwards introduces readers to the programming revolution in which he would like to participate, away from ever more powerful and more elegant languages and toward programming for the Everyman, where we build tools and models that fit the way the real programmer's mind works. His Onward! talk demoed an initial attempt at such a tool, in which the programmer gives examples of the desired computation and the examples become the program.

    Edwards's current position stands in stark contrast to his earlier views as a more traditional researcher in programming languages. As many of us are coming to see, though, "programming is about learning to work effectively in the face of overwhelming complexity" more than it is about ever more clever programming languages and compiler tricks. When it comes to taming the complexity inherent in large software, "simplicity, flexibility, and usability are more effective than cleverness and elegance."

    The recent trend toward agile development methodologies and programming support tools such as JUnit and Eclipse also draws its inspiration from a desire for simpler and more flexible programs. Most programmers -- and for Edwards, this includes even the brightest -- don't work very well with abstractions. We have to spend a lot of brain power managing the models we have of our software, models that range from execution on a standard von Neumann architecture up to the most abstract designs our languages will allow. Agile methods such as XP aim to keep programmers' minds on the concrete embodiment of a program, with a focus on building supple code that adapts to changes in our understanding of the problem as we code. Edwards even uses one of Kent Beck's old metaphors that is now fundamental to the agile mindset: Listen carefully to what our code is telling us.

    But agile methods don't go quite as far as Edwards seems to encourage. They don't preclude the use of abstract language mechanisms such as closures or higher-order procedures, or the use of a language such as Haskell, with its "excessive mathematical abstraction". I can certainly use agile methods when programming in Lisp or Smalltalk or even Haskell, and in those languages closures and higher-order procedures and type inference would be natural linguistic constructs to use. I don't think that Edwards is saying such things are in and of themselves bad, but only that they are a symptom of a mindset prone to drowning programmers in the sort of abstractions that distract them from what they really need in order to address complexity. Abstraction is a siren to programmers, especially to us academic types, and one that is ultimately ill-suited as a universal tool for tackling complexity. Richard Gabriel told us that years ago in Patterns of Software (pdf).

    I am sympathetic to Edwards's goals and rationale. And, while I may well be the sort of person he could recruit into the revolution, I'm still in the midst of my own evolution from language maven to tool maven. Oliver Steele coined those terms, as near as I can tell, in his article The IDE Divide. Like many academics, I've always been prone to learn yet another cool language rather than "go deep" with a tool like emacs or Eclipse. But then it's been a long time since slogging code was my full-time job, when using a relatively fixed base of language to construct a large body of software was my primary concern. I still love to learn a Scheme or a Haskell or a Ruby or a Groovy (or maybe Steele's own Laszlo) to see what new elegant ideas I can find there. Usually I then look to see how those ideas can inform my programming in the language where I do most of my work, these days Java, or in the courses where I do most of my work.

    I don't know where I'll ultimately end up on the continuum between language and tool mavens, though I think the shift I've been undergoing for the last few years has taken me to an interesting place and I don't think I'm done yet. A year spent in the trenches might have a big effect on me.

    As I read Edwards's stuff, and re-read Steele's, a few other thoughts struck me:

    • In his piece on the future of programming, Edwards says,

      I retain a romantic belief in the potential of scientific revolution ... that there is a "Calculus of Programming" waiting to be discovered, which will ... revolutionize the way we program....

      (The analogy is to the invention of the calculus, which revolutionized the discipline of physics.) I share this romantic view, though my thoughts have been with the idea of a pattern language of programs. This is a different sort of 'language' than Edwards means when he speaks of a calculus of programs, but both types of language would provide a new vocabulary for talking about -- and building -- software.

    • Later in the same piece, Edwards says,

      Copy & paste is ubiquitous, despite universal condemnation. ... I propose to decriminalize copy & paste, and even to elevate it into the central mechanism of programming.

      Contrary to standard pedagogy, I tell my students that it's okay to copy and paste. Indeed, I encourage it -- so long as they take the time after "making it work" to make it right. This means refactoring to eliminate duplication, among other things. Some students find this to be heresy, or nearly so, which speaks to how well some of their previous programming instructors have drilled this wonderful little practice out of them. Others take to the notion quite nicely but, under the time pressures that school creates for them and that their own programming practices exacerbate, have a hard time devoting sufficient energy to the refactoring part of the process. The result is just what makes copy and paste so dangerous: a big ball of mud with all sorts of duplicated code.

      Certainly, copy and paste is a central mechanism of doing the simplest thing that could possibly work. The agile methods generally suggest that we then look for ways to eliminate duplication. Perhaps Edwards would suggest that we look for ways to leave the new code as our next example.

    • At the end of the same piece, Edwards proposes an idea I've never seen before: the establishment of "something like a teaching hospital" in which to develop this new way of programming. What a marvelous analogy!

      Back when I was a freshman architecture major, I saw more advanced students go out on charrette. This exercise had the class go on site, say, a road trip to a small town, to work as a group to design a solution to a problem facing the folks there, say, a new public activity center, under the supervision of their instructors, who were themselves licensed architects. Charrette was a way for students to gain experience working on a real problem for real clients, who might then use the solution in lieu of paying a professional firm for a solution that wasn't likely to be a whole lot better.

      Software engineering courses often play a similar role in undergraduate computer science programs. But they usually miss out on a couple of features of a charrette, not the least of which is the context provided by going to the client site and immersing the team in the problem.

      A software institute that worked like a teaching hospital could provide a more authentic experience for students and researchers exploring new ways to build software. Clients would come to the institute, rather than instructors drumming up projects that are often created (or simplified) for students. Clients would pay for the software and use it, meaning that the product would actually have to work and be usable by real people. Students would work with researchers and teachers -- who should be the same people! -- in a model more like apprenticeship than anything our typical courses can offer.

      The Software Engineering Institute at Carnegie Mellon may have some programs that work like this, but it's an idea that is broader than the SEI's view of software engineering, one that could put our CS programs in much closer touch with the world of software than many are right now.

    There seems to be a wealth of revolutionary projects swirling in the computer science world these days: test-driven development, agile software methods, Croquet and eToys .... That's not all that unusual, but perhaps unique to our time is the confluence of so many of these movements in the notion of simplicity, of pulling back from abstraction toward the more concrete expression of computational ideas. This more general trend is perhaps a clue for us all, and especially for educators. One irony of this "back to simplicity" trend is that it is predicated on increasingly complex tools such as Eclipse and Croquet, tools that manage complexity for us so that we can focus our limited powers on the things that matter most to us.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    December 22, 2004 10:19 AM

    Why We Choke Under Pressure

    In two recent articles (here and here), I participated in a conversation with John Mitchell and Erik Meade about the role of speed in developing software development skills. While the three of us agreed and disagreed in some measure, we all seemed to agree that "getting faster" is a valuable part of the early learning process.

    Now comes an article from cognitive psychologists that may help us understand better the role of pressure. My old cognitive psychology professor, Tom Carr, and one of his students, Sian Beilock, have written an article, "Why High Powered People Fail: Working Memory and 'Choking Under Pressure' in Math", to appear in the journal Psychological Science. This article reports that strong students may respond worse under the pressure of hard exams than less gifted students. This form of choking seems to result from pressure-induced impairment of working memory, which is one of the primary advantages that stronger students have over others. You can read a short article from the NY Times on the study, or a pre-print of the journal article for full details.

    The new study is a continuation of Beilock's doctoral research, which seems to have drawn a lot of interest in the sports psychology world. An earlier study by Beilock and Carr, On the Fragility of Skilled Performance: What Governs Choking Under Pressure? from the Journal of Experimental Psychology: General supports the popular notion that conscious attention to internalized skills hurts performance. Apparently, putting pressure on a strong student increases their self-consciousness about their performance, thus leading to what we armchair quarterbacks call choking.

    I'm going to study these articles to see what they might teach me about how not to evaluate students. I am somewhat notorious among my students for having difficult tests, in both content and length. I've always thought that the challenge of a tough exam was a good way for students to find out how far they've advanced their understanding -- especially the stronger students. But I see that I need to be careful that my exams not become tests primarily of handling pressure well and only secondarily of understanding course content.

    By the way, Tom Carr was one of my all-time favorite profs. I enjoyed his graduate course in cognitive psychology as much as any course I ever took. He taught me a lot about the human mind, but more importantly about how a scientist goes about the business of trying to understand it better. I still have the short papers I wrote for that course on disk (the virtues of plaintext and nroff!). I was glad to see some of his new work.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    December 20, 2004 4:14 PM

    Computational Complexity and Academic Awareness

    From Tall, Dark, and Mysterious' professional self-assessment, here's a wonderful nugget:

    My Master's thesis was in many ways the serendipitous culmination of three years of near-paralyzing apathy on my part for the academic path I had chosen for myself: a Maple program that may have the worst runtime of any program ever written (O(4^n*n^8n) - it crashed at n=5), thirty-five pages of painstakingly-formatted LaTeX, and a competent but tepid distillation of a subject that fully twenty people in the world give half a crap about.

    For you students of algorithms: O(4^n * n^(8n)). Of course, that simplifies to O(n^n), but still -- ouch! That's some serious computational complexity, my friends.
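    To see just how fast a bound like that blows up, here is a quick back-of-the-envelope computation (my own illustration, in Python for brevity; the function names are invented):

```python
# Rough growth of 4^n * n^(8n), the bound quoted above,
# compared to plain n^n, for the first few values of n.

def monster(n):
    """The bound from the thesis: 4^n * n^(8n)."""
    return 4 ** n * n ** (8 * n)

def n_to_n(n):
    """Plain n^n, for comparison."""
    return n ** n

for n in range(1, 6):
    print(n, monster(n), n_to_n(n))
```

    By n = 3 the first function is already in the tens of trillions; no wonder the program crashed at n = 5.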

    For you graduate students: Don't be too disturbed. I think that many people feel this way about their research when they get done, especially those in the more abstract disciplines. Research in mathematics and theoretical CS is especially prone to such sentiments. This feeling says as much about how academia works and the state of knowledge these days as it does about any particular research. The key is to sustain your passion for your research area in the face of the external demands your advisor and institutions place on you. Tall, Dark, and Mysterious seems to have lost hers long before completing.

    I know the intellectual fraud feeling she describes, too. During the semester, classes and conference travel and conference committees and administrative duties combine to leave me much less time to do computer science than I want. But the feeling passes when I remind myself that I'm doing CS in the small all the time; it's just the sustained periods of doing CS that I miss out on. That's what makes summers and those too-infrequent research leaves so valuable.

    If you like to think about teaching, especially at the university level, you should read more of Tall, Dark, and Mysterious. She's a reflective teacher and an engaging writer. Her recent post I like most of my students tickled my fancy. I only wish I'd written such a post first myself. That may be a great topic for grid blogging among university profs...


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    December 20, 2004 10:47 AM

    Improving the Vocabulary of Educators through Patterns

    Education blogger Jenny D. calls for a language of the practice of educators:

    We have to start naming the things we do, with real names that mean the same thing to all practitioners.

    ...

    My intent in what I do is to move the practice of teaching into an environment that resembles medicine. Where practices and procedures are shared, and the language to discuss them is also shared. Where all practices and procedures can and will be tailored to meet specific situations, even though practitioners work hard to maintain high standards and transparency in what they do.

    This sounds kind of silly, I know. But what we've got know is a hodgepodge of ideas and practices, teachers working in isolation in individual classrooms, and very little way to begin to straighten out and share the best practices and language to describe it.

    This doesn't sound silly to me. When I first started as a university professor, I was surprised at the extent to which educators tend to work solo. There's plenty of technical vocabulary for the content of my teaching but little or no vocabulary for the teaching itself. The education establishment seems quite content that educators, their classrooms, and their students are all so different that we cannot develop a standard way to describe or develop what we do in a classroom. I think that much of this reluctance stems from the history of teacher education, but I also suspect that now there is an element of political agenda involved.

    Part of what I've done in my years as a faculty member is to work with others to develop the vocabulary we need to talk about teaching computer science. On the technical side, I have long been interested in documenting the elementary patterns that novices must learn to become effective programmers. Elementary patterns give students a vocabulary for talking about the programs they are writing, and a process for attacking problems. But they also give instructors a vocabulary for talking about the content of their courses.

    On the teaching side, I have been involved in the Pedagogical Patterns community, which aims to draw on the patterns ideas of architect Christopher Alexander to document best practices in the practice of teaching and learning. Patterns such as Active Student, Spiral, and Differentiated Feedback capture those nuggets of practice that good teachers use all the time, the approaches that separate effective instruction from hit-or-miss time in the classroom. Relating these patterns in a pattern language builds a rich context around the patterns and gives instructors a way to think about designing their instruction and implementing their ideas.

    Pedagogical patterns are the sort of thing that Jenny D. is looking for in educational practice more generally.

    In a follow-up post, she extends her analysis of how professionals in other disciplines practice by applying skills to implement protocols. You'll notice in these blog entries the good use of metaphor I blogged about last time. Jenny uses the practices in other professions to raise the question, "Why don't educators have a vocabulary for discussing what they do?" She hasn't suggested that the vocabulary of educators should be like medicine's vocabulary or any other, at least not yet.

    I wish her luck in convincing other educators of the critical value of her research program. Every teacher and every student can benefit from laying a more scientific foundation for the everyday practice of education.


    Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

    December 17, 2004 1:12 PM

    When Blogs Do More Than Steal Time

    I often read about how blogging will change the world in some significant way. For example, some folks claim that blogs will revolutionize journalism, creating an alternative medium that empowers the masses and devalues the money-driven media through which most of the world sees the world. Certainly, the blogosphere offers a remarkable level of distribution and immediacy of feedback; see the Chronicle of Higher Education's Scholars Who Blog, which chronicles this phenomenon.

    As I mentioned last time, I'm not often a great judge of the effect a new idea like blogging will have on the future. I'm skeptical of claims of revolutionary effect, if only because I respect the Power Law. But occasionally I get a glimpse of how blogging is changing the world in small ways, and I have a sense that something non-trivial is going on.

    I had one such glimpse this morning, when I took to reading a blog written by one of my students. First of all, that seems like a big change in the academic order: a student publishes his thoughts on a regular basis, and his professors can read them. Chuck's blog is a mostly personal take on life, but he is the kind of guy who experiences his academic life deeply, too, so academics show up on occasion. It's good for teachers to be reminded that students can and sometimes do think deeply about what they do in class.

    Second change: apparently he reads my blog, too. Students read plenty of formal material that their instructors write, but blogs open a new door on the instructor's mind. My blog isn't one of those confessional, LiveJournal-style diaries, but I do blog less formally and about less formal thoughts than I ordinarily write in academic material. Besides, a student reading my blog gets to see that I have interests beyond computer science, and even a little whimsy. It's good for students to be reminded occasionally that teachers are people, too.

    Third, and this is what struck me most forcefully while reading this morning, these blogs make possible a new channel of learning for both students and teachers. Chuck blogged at some length about a program that he wrote for a data structures assignment. In the course of thinking through the merits of his implementation relative to another student's, he had an epiphany about how to write more efficient multi-way selection statements -- and "noticed that no one is trying particularly hard to teach me" about writing efficient code.

    This sort of discovery happens rarely enough for students, and when it does happen it's likely to evanesce for lack of opportunity to take root in a conversation. Yet here I am, privy to this discovery six weeks after it happened. It would have been nice to talk about it when it happened, but I wasn't there. But through the blog I was able to respond to some of the points in the entry by e-mail. That I can have this peek into a student's mind (in this case, a student of my own) and maybe carry on a conversation about an idea of importance to both of us -- that is a remarkable consequence of the blogosphere.
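    Chuck's entry doesn't spell out his rewrite, so what follows is only my guess at the flavor of the idea, with invented names and cutoffs -- a Python sketch of one classic way to make multi-way selection more efficient, by moving the cases out of code and into a table:

```python
# A long multi-way selection: every case is tested in turn,
# and adding a case means adding more code.
def grade_chain(score):
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    elif score >= 70:
        return "C"
    else:
        return "F"

# A table-driven version: the cutoffs live in data, and one loop
# (or a binary search, when there are many cases) does the selecting.
# Assumes score >= 0, so the final (0, "F") entry always matches.
GRADE_TABLE = [(90, "A"), (80, "B"), (70, "C"), (0, "F")]

def grade_table(score):
    for cutoff, letter in GRADE_TABLE:
        if score >= cutoff:
            return letter
```

    The two functions behave the same; the difference is that the table version separates the selection mechanism from the cases themselves.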

    I'm old enough to remember when Usenet newsgroups were the place to be. Indeed, I have a token prize from our Linux users group commemorating my claim to the oldest Google-archived Usenet post among our local crew. New communities, academic and personal, grew up in the Usenet news culture. (I still participate in an e-mail community spun off from rec.sport.basketball, and we gather once a year in person to watch NCAA tournament games.) So the ability of the Internet to support community building long predates the blog. But the culture of blogging -- personal, frequent posts sharing ideas on any topic; comments and trackbacks; the weaving of individual writers into a complex network of publication -- adds something new. And those personal reflections sometimes evolve into something more over the course of an entry, as in Chuck's programming reflection example.

    I do hope that there isn't some sort of Heisenberg thing going on here, though. I'd hate to think that students would be less willing to write honestly if they know their professors might be reading. (Feeling some pressure to write fairly and thoughtfully is okay. The world doesn't need any more whiny ranting blogs.) I know that, when I blog, at the back of my mind is the thought that my students might read what I'm about to say. So far, I haven't exercised any prior restraint on myself, at least any more than any thoughtful writer must exercise. But students are in a different position in the power structure than I am, so they may feel differently.

    Some people may worry about the fact that blogs lower or erase barriers of formality between students and professors, but I think they can help us get back to the sort of education that a university should offer -- a Church of Reason, to quote Robert Pirsig:

    [The real University is] a state of mind which is regenerated throughout the centuries by a body of people who traditionally carry the title of professor, but even that title is not part of the real University. The real University is nothing less than the continuing body of reason itself.

    The university is to be a place where ideas are created, evaluated, and applied, a place of dialogue among students and teachers. If the blogosphere becomes a place where such dialogue can occur with less friction -- and where others outside the walls of the church building itself can also join in the conversation, then the blogosphere may become a very powerful medium in our world after all. Maybe even revolutionary.


    Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

    December 15, 2004 7:29 AM

    The Role of Risk in Learning

    Brian Marick's latest post discusses recent developments in his ongoing exploration of agile methods and the philosophy of science. As always, there's plenty of food for thought there. In particular, he links to an article on behavioral economics that reminds us that people aren't rational in the classical sense.

    Most people are more strongly affected in their decision-making by vivid examples than by abstract information, no matter how much more accurate the abstract information is.

    This reminds me of my previous post. Most often, a good example will help students grok an idea better than any abstractions I give them. The abstractions will work better for them after they have a strong foundation of experiences and examples.

    For most people, the possibility of a loss greatly outweighs the chance of a win. "People really discriminate sharply between gaining and losing and they don't like losing." ....

    I think this principle accounts for why students will work from first principles to solve a problem rather than use a more abstract idea they aren't yet comfortable with. They perceive that the risk of failure is smaller by working from small ideas they understand than by working from a bigger idea they don't. I need to think more about risk when I teach.

    For most people, first impressions play a remarkably strong role in shaping subsequent judgments.

    This reminds me not only of my most recent post but even more so of something Alan Kay said in his OOPSLA Educators Symposium keynote:

    Kay reminded us that what students learn first will have a huge effect on what they think, on how they think about computing. He likened the effect to that of duck imprinting, the process in which ducklings latch onto whatever object interacts with them in the first two days of their lives -- even if the object is a human. Teaching university students as we do today, we imprint in them that computing is about arcane syntax, data types, tricky little algorithms, and endless hours spent in front of a text editor and compiler. It's a wonder that anyone wants to learn computing.

    The first three quotes above are drawn from the work of psychologist Daniel Kahneman, whose work I first encountered in one of my favorite grad school courses, Cognitive Psychology. Kahneman won a Nobel Prize in Economics for his work with colleague Amos Tversky that showed how humans reason in the face of uncertainty and risk. This work has tremendous implications for the field of artificial intelligence, where my first computing passions resided, but also for how we teach.

    Busy, busy, busy.


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    December 14, 2004 9:16 AM

    Programming à la Hoffman or Olivier

    As I write final exams and begin to think ahead to next semester, I've been thinking about how I teach programming and software development. Sometimes, I get so busy with all of the extras that can make programming interesting and challenging and more "real-world" -- objects and design and refactoring and GUIs and unit tests and frameworks and ... -- that I lose sight of the fact that my students are just trying to learn to write programs. When the abstractions and extra practices get in the way of learning, they have become counterproductive.

    I'd like to streamline my approach to programming courses a bit. First, I'll make some choices about which extras are more distraction than contribution, and eliminate them. Second, I'll make a conscious effort to introduce abstractions when students can best appreciate them: after having concrete experience with problems and techniques for solving them.

    My colleagues and students need not fear that I am going back to the Dark Ages with my teaching style. My friend Stuart Reges (more information at his old web page) isn't going quite that far, but he is in the process of redesigning his introductory courses on the model of his mid-1980s approach. He seems to be motivated by similar feelings, that many of the ideas we've added to our intro courses in the last 20 years have gotten in the way of teaching programming. Where Stuart and I differ is that I don't think there is anything more "fundamental" about what we did in Pascal in the '80s than what we should do with objects and messages in, say, Java. The vocabulary and mindset are simply different. We just haven't quite figured out the best way to teach programming in the new mindset.

    I wish Stuart well in his course design and hope to learn again from what he does. But I want to find the right balance between the old stuff -- what my colleagues call "just writing programs" -- and the important new ideas that can make us better programmers. For me, the first step is a renewed commitment to having students write programs to solve problems before having them talk about writing programs.

    This train of thought was set off by a quote I read over at 43 Folders. The article is about "hacking your way out of writer's block" but, as with much advice about writing, it applies at some level to programming. After offering a few gimmicks, the writer says:

    On the other hand, remember Laurence Olivier.

    One day on the set of Marathon Man, Dustin Hoffman showed up looking like shit. Totally exhausted and practically delirious. Asked what the problem was, Hoffman said that at this point in the movie, his character will have been awake for 24 hours, so he wanted to make sure that he had been, too. Laurence Olivier shook his head and said, "Oh, Dusty, why don't you just try acting?"

    I don't want all the extras that I care about -- and that I want students to care about -- to be affectations. I don't want to be Dustin Hoffman to Stuart's Olivier. Actually that's not quite true... I admire Hoffman's work, have always thought him to be among the handful of best actors in Hollywood. I just don't want my ideas to become merely affectations, to be distractions to students as they learn the challenging and wonderful skill of programming. If I am to be a method teacher, I want the method to contribute to the goal. Ultimately, I want to be as comfortable in what I'm doing in the classroom as Hoffman is with his approach to acting. (Though who could blame him if he felt a little less cocksure when chided by the great Olivier that day?)

    The bottom line for now is that there are times when my students and I should just write programs. Programs that solve real problems. Programs that work. We can worry about the abstract stuff after.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    December 09, 2004 5:05 PM

    Learning from Failure

    I have a couple of things on my blogging radar, but this quote from Nat Pryce just launched a new train of thought. Maybe that's because I gave my last lecture of the semester an hour and a half ago (hurray!) and am in an introspective mood about teaching now that the pressure is off for a while.

    One of the many interesting topics that Dave [Snowden] touched on [in his talks at XP Day 4] was how people naturally turn to raw, personal stories rather than collated research when they want to learn how to solve a problem. Furthermore, people prefer negative stories that show what to avoid, rather than positive stories that describe best or good practices.

    This "struck a chord" with Nat, whose blog is called Mistaeks I Hav Made [sic]. It also struck a chord with me because I spend a lot of my time trying to help students learn to design programs and algorithms.

    When I teach patterns, whether of design or code, I often try to motivate the pattern with an example that helps students see why the pattern matters. The decorator pattern is cool and all, but until you feel what not using it is like in some situations, it's sometimes hard to see the point. I think of this teaching approach as "failure driven", as the state of mind for learning the new idea arises out of the failure of using the tools already available. It's especially useful for teaching patterns, whose descriptions typically include the forces that make the pattern a Good Thing in some context. Even when the pattern description doesn't include a motivating example, the forces give you clues on how to design one.
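    As a concrete (and entirely hypothetical) instance of the kind of motivating failure I mean: have students support bordered text panes, scrolled text panes, and bordered-and-scrolled text panes by subclassing, and watch the combinations explode; then show how a decorator dissolves the problem. A minimal sketch, in Python for brevity, with invented class names:

```python
# Without Decorator, every combination of features needs its own
# subclass: BorderedPane, ScrolledPane, BorderedScrolledPane, ...
# With Decorator, each feature wraps any pane, so features compose.

class Pane:
    def render(self):
        return "text"

class PaneDecorator:
    """Wraps any pane and delegates to it by default."""
    def __init__(self, pane):
        self.pane = pane
    def render(self):
        return self.pane.render()

class Bordered(PaneDecorator):
    def render(self):
        return "[" + self.pane.render() + "]"

class Scrolled(PaneDecorator):
    def render(self):
        return self.pane.render() + " + scrollbar"

# Any stacking works, with no new class per combination:
fancy = Bordered(Scrolled(Pane()))
print(fancy.render())  # [text + scrollbar]
```

    Until students have felt the subclass explosion themselves, the wrapper classes look like pointless indirection; afterward, they look like relief.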

    I taught the standard junior/senior-level algorithms course for the first time in a long, long time this semester, and I tried to bring a patterns perspective to algorithm design. This was helped in no small part by discussions with David Ginat, who is interested in documenting algorithm patterns. (One of my favorites of his is the Sliding Delta pattern.) But I think some of the most effective work in the algorithms class this semester -- and also some of the most unstructured -- was done by giving the students a game to play or a puzzle to solve. After they tried to figure out the nugget at the heart of the winning strategy, we'd discuss it. We had lots of failures and ultimately, I think, some success at seeing invariants and finding efficient solutions. The students were sometimes bummed by their inability to see solutions immediately, but I assured them that the experience of trying and failing was what would give rise to the ability to solve problems.

    (I still have more work to do finding the patterns in the algorithms we designed this semester, and then writing them up. Busy, busy!)

    Negative stories -- especially the student's own stories -- do seem to be the primary catalyst for learning. But I have to be careful to throw in some opportunities to create positive stories, too, or students can become discouraged. The "after" picture of the pattern isn't enough; they need to face a new problem and feel good because they can solve it. I suppose that one of the big challenges we teachers face is striking the right balance between the failures that drive learners forward and the successes that keep them wanting to be on the road.


    Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

    December 08, 2004 10:12 AM

    You Can't Cram for Your Running Final

    We are in the last week of classes here. For the last couple of weeks, I've noticed a group of students who have started working out in the mornings. If they are working out as a part of a lifestyle change, I applaud them! However, from the timing of the new workouts, I suspect that these folks are trying to get ready for an upcoming final exam in their physical fitness classes.

    Folks, here's a small hint from someone who's been there: You can't cram for physical fitness. Your bodies don't work that way. Slow and steady wins this race.

    Our brains don't work that way, either, though in this season of cramming for final exams you wouldn't know it. Sadly, for many academic courses, cramming seems to work in the short term. If a final exam emphasizes facts and definitions, then you may be able to study them all in a massive all-nighter and remember them long enough to get through the exam. But the best-case scenario is that you do well enough on the exam, only to find a year or so later that you have not mastered any of the ideas or skills from the course. For a CS professor, there are few things sadder than encountering a senior, about to graduate, who did well enough in all his courses but who can't seem to program or work at a command line.

    Learning, like training, is an enterprise best done over time. Slow and steady wins the race.


    Posted by Eugene Wallingford | Permalink | Categories: Running, Teaching and Learning

    December 07, 2004 3:57 PM

    An Agile Lesson from David Parnas

    Yesterday, someone reminded me of my friend Rich Pattis's Quotations for Learning Programming. I jumped, randomly as near as I can tell, to the Os and had not scanned too far down the page when I saw this quote:

    As a rule, software systems do not work well until they have been used, and have failed repeatedly, in real applications.

    - D. Parnas

    Of course, David Parnas is a famous computer scientist, well known for his work on modularity and software design. Many give him credit for best explaining encapsulation as a design technique. He is revered as an icon of traditional software engineering.

    Yet, when I read this quote, I couldn't help but think, "Maybe Parnas is a closet agile developer." :-) Frequent readers may recall that I do this occasionally... See my discussion of Dijkstra as a test-driven developer.

    Whether Parnas is sympathetic to the methodological side of agile software development or not, this quote points out a truth of software development that is central to agile approaches: There is benefit in getting to working code sooner. The motive usually given for this is so that developers can learn about the problem and the domain from writing the code. But perhaps a secondary value is that it can begin to fail sooner -- but in the lab, under test-driven conditions, where developers can learn from those *failures* and begin to fix them sooner.
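    A tiny illustration of what "failing in the lab" can look like -- my own sketch, not Parnas's, with an invented word-wrap example: the test is written first, fails immediately and cheaply, and then the minimal code below makes it pass.

```python
import unittest

def wrap(text, width):
    """Minimal word-wrap, written just to make the tests below pass.
    Breaks text into lines no longer than width, at word boundaries."""
    lines, line = [], ""
    for word in text.split():
        candidate = (line + " " + word).strip()
        if len(candidate) <= width:
            line = candidate
        else:
            lines.append(line)
            line = word
    if line:
        lines.append(line)
    return "\n".join(lines)

class WrapTest(unittest.TestCase):
    # These tests existed before wrap() did; their first run was a
    # failure in the lab, not in a deployed application.
    def test_short_text_is_untouched(self):
        self.assertEqual(wrap("hello", 10), "hello")

    def test_long_text_breaks_at_width(self):
        self.assertEqual(wrap("one two three", 7), "one two\nthree")
```

    Run with `python -m unittest` -- the point being that the failures Parnas says every system must suffer happen here before anyone depends on the code.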

    I have to be careful not to misinterpret a person's position by projecting my own views onto a single quote taken out of context. For example, I can surely find a Parnas quote that could easily be interpreted as condemning agile software methodologies (in particular, I'm guessing, their emphasis on not doing big design up front).

    I don't worry about such contradictions; I prefer to use them as learning opportunities. When I see a quote like the one above, I like to take a moment to think... Under what conditions can this be right? When does it fail, or not hold? What can I learn from this idea that will make me a better programmer or developer? I don't think I'd ever consciously considered the idea that continuous feedback in XP might consist of the same sort of failure that occurs when we deploy a live system. Parnas's quote has me thinking about it now, and I think that I may learn something as a result.

    More subconsciously agile quotes?

    As I scanned Rich's list in my agile state of mind, a few others caught my eye...

    • Blaise Pascal, proponent of refactoring?

      I have made this letter longer than usual, only because I have not had the time to make it shorter.

    • An anonymous cautionary tale about agile methods -- but also about traditional software engineering?

      The fast approach to software development: Ready, fire, aim.
      The slow approach to software development: Ready, aim, aim, aim, aim ...

    • Maybe those guys in Redmond know something about agility after all...

      Microsoft, where quality is job 1.1.

    • Finally, this quote has long been a favorite of mine. I have it hanging on the wall in my home office:

      We are what we repeatedly do.
      Excellence, then, is not an act, but a habit.

      - Aristotle

    This quote isn't a statement about agile practice, or even more generally about pragmatic practice. It is a hallmark of the reflective professional. It's also a pretty good guide for living life.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    December 07, 2004 6:24 AM

    More on Software Speed Training

    Last evening, I commented on the idea of speed training for software developers, raised in Erik Meade's blog. John Mitchell also commented on this idea. Check out what John has to say. I think he makes a useful distinction between pace and rhythm. You'll hear lots of folks these days talk about rhythm in software development; much of the value of test-driven development and refactoring lie in the productive rhythm they support. John points out that speed work isn't a good idea for developers, because that sort of stress doesn't work in the same way that physical stress works on the muscles. He leaves open the value of intensity in learning situations, more like speed play, which I think is where the idea of software fartleks can be most valuable.

    Be sure to check out the comments to John's article, too. There, he and Erik hash out the differences between their positions. Seeing that comment thread makes me want to add comments to my blog again!


    Posted by Eugene Wallingford | Permalink | Categories: Running, Software Development, Teaching and Learning

    December 06, 2004 6:57 PM

    Speed Training for Software Developers

    In making analogies between software development and running, I've occasionally commented on sustainable pace, the notion from XP that teams should work at a pace they can sustain over time rather than at breakneck paces that lead to bad software and burn-out. In one entry, I discuss the value of continuous feedback in monitoring pace. In another, I describe what can happen when one doesn't maintain a sustainable pace, in the short term and over the longer term.

    Not unexpectedly, I'm not alone in this analogy. Erik Meade recently blogged on sustainable pace and business practice. I was initially drawn to his article by its title reference to increasing your sustainable pace via the fartlek. While I liked his use of the analogy to comment on standard business practice, I was surprised that he didn't delve deeper into the title's idea.

    Fartlek is Swedish for "speed play" and refers to an unstructured way for increasing one's speed while running: occasionally speed up and run faster for a while, then slow down and recover. We can contrast this approach to more structured speed work-outs such as Yasso 800s, which specify speeds and durations for fast intervals and recoveries. In a fartlek, one simply has fun speeding up and slowing down. This works especially well when working out with a friend or friends, because partners can take turns choosing distances and speeds and rewards. In the case of both fartleks and structured interval training, though, the idea is the same: By running faster, you can train your body to run faster better.

    Can this work for software development? Can we train ourselves to develop software faster better?

    It is certainly the case that we can learn to work faster with practice when at the stage of internalizing knowledge. I encourage students to work on their speed in in-class exercises, as a way to prepare for the time constraints of exams. If you are in the habit of working leisurely on every programming task you face, then an exam of ten problems in seventy-five minutes can seem like a marathon. By practicing -- solving lots of problems, and trying to solve them quickly -- students can improve their speed. This works because the practice helps them to internalize facts and skills. You don't want forever to be in the position of having to look up in the Java documentation whether Vectors respond to length() or size().

    I sometimes wonder whether working faster actually helps students get faster or not, but even if it doesn't I am certain that it helps them assess how well they've internalized basic facts and standard tasks.

    But fartleks for a software development team? Again, working on speed may well help teams that are at the beginning of their learning curves: learning to pair program, learning to program test-first, learning to use JUnit, ... All benefit from lots of practice, and I do believe that trying to work efficiently, rather than lollygagging as if time were free, is a great way to internalize knowledge and practice. I see the results in the teams that make up my agile software development course this semester. The teams that worked with the intention of getting better, of attempting to master agile practices in good faith, became more skilled developers. The teams that treated project assignments as mostly a hurdle to surmount still struggle with tools and practices. But how much could speed work have helped them?

    The bigger question in my mind involves mature development teams. Will occasional speed workouts, whether from deadline pressure on live jobs or on contrived exercises in the studio, help a team perform faster the next time they face time pressure? I'm unsure. I'd love to hear what you think.

    If it does work, we agile developers have a not-so-secret advantage... Pair programming is like always training with a friend!

    When a runner prefers to run alone rather than with others, she can still do a structured work-out (your stopwatch is your friend) or even run fartleks. Running alone leaves the whole motivational burden on the solo runner's shoulders, but the self-motivated can succeed. I have run nearly every training run the last two years alone, more out of circumstance than preference. (I run early in the morning and, at least when beginning, was unsuitable as a training partner for just about any runner I knew.) I can remember only two group runs: a 7-miler with two faster friends about two weeks before my first marathon last fall, and an 8-mile track workout the day before Thanksgiving two weeks ago. As for the latter, I *know* that I trained better and harder with a friend at my side, because I lacked the mental drive to go all out alone that morning.

    Now I'm wondering about how pair programming might play this motivational role sometimes when writing software. But that blog entry will have to wait until another day.


    Posted by Eugene Wallingford | Permalink | Categories: Running, Software Development, Teaching and Learning

    November 30, 2004 5:48 PM

    Milestones and Nostalgia

    November 2004 will enter the books as my least-blogged month since starting Knowing and Doing back in July. I'm not all that surprised, given that:

    • it is the last full month of my semester,
    • it follows a month in which I was on the road every weekend and for a full week at OOPSLA, and
    • it contains a 5-day week for Thanksgiving.
    Each of these would cut into my time for blogging on its own, and together they are a death knell to any free moments.

    When I started Knowing and Doing, I knew that regular blogging would require strict discipline and a little luck. Nothing much has changed since then. I still find the act of writing these articles of great value to me personally, whether anyone reads them or not. That some folks do find them useful has been both gratifying and educational for me. I learn a lot from the e-mails I receive from Faithful Readers.

    I reached the 100-post plateau ten days ago with this puff piece. I hope that most of my posts are more valuable than that! In any case, reaching 1000 entries will be a real accomplishment. At my current pace, that will take me three more years...

    While on this nostalgic kick, I offer these links as some obligatory content for you. Both harken back to yesteryear:

    • Brian Foote waxes poetic in his inimitable way about watching undergrads partake in the rite of initiation that is a major operating systems project. He points out that one thing has changed this experience significantly since he and I went through it thirty and (can it be?) twenty years ago, respectively: unit tests.

    • John O'Conner has a nice little piece on generalizing the lesson we all learned -- some of us long ago, it seems -- about magic numbers in our code. Now if only we turned all those constants into methods à la the Default Value Method pattern...


    Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

    November 29, 2004 4:46 PM

    Accepting the Blame

    On Saturday, Jason Yip started a conversation about the danger of accepting blame reflexively. He fears that accepting the blame preemptively "reinforc[es] the mistaken belief that having someone take blame is somehow important." Fixing the problem is what matters.

    This is, of course, true. As Jerry Weinberg says in The Secrets of Consulting:

    The chances of solving a problem decline the closer you get to finding out who was the cause of the problem.

    (You may recall that some of my earliest blog entries here and here referred to this gem of a book.)

    When people focus on whose fault it is, they tend to become defensive, to guard their egos, even at the expense of fixing the problem. This is a very human response, and one that is hard to control intellectually. The result is both that the problem remains and its systemic cause remains, pretty much guaranteeing more problems in the future. If we can depersonalize the problem, focusing on what is wrong and how we can fix it, then there is hope of both solving the problem and learning how not to cause similar problems in the future.

    I think Jason is right on that people can use "It's my fault" as a way to avoid discussion and thus be as much of an obstacle to growth and improvement as focusing on placing blame. And someone who *always* says this is either trying to avoid confrontation or making way too many mistakes. :-)

    But as one commenter posted on his site, saying "I made a mistake. Let's fix it, and let's find a way to avoid such problems in the future" is a welcome behavior, one that can help individuals and teams grow. The real problem is saying "I made a mistake" when you didn't make a mistake or don't want to discuss what you did.

    I have been fortunate never to have worked at a place where people played The Blame Game, at least not too destructively. Walking on eggshells all day is no way to live one's life, and not the sort of environment in which a person can learn.

    These days, I don't have much trouble in professional settings acknowledging my mistakes, though I probably wasn't always as easy-going. I give a lot of credit for my growth in this regard to having papers I've written workshopped at PLoP conferences. At PLoP, I learned how better to separate my self from my product. The practices of writers workshops aim to create a safe environment, where authors can learn about how well their papers work. As much as the practices of the patterns community help, it probably takes going through the workshop process a few times to wear away the protective shell that most of us use to protect ourselves from criticism. Being with the right sort of people helps.

    All that said, I had to chuckle at It's Chet's Fault, a fun little community practice of the famed C3 project that gave rise to XP as a phenomenon. On a team of folks working together to get better, I think that this sort of levity can help people remember to keep their eyes on what really matters. On the other hand, lots of things can work with a team of folks striving to get better. Being with the right sort of people helps.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    November 19, 2004 10:10 AM

    Of Roosters and Running

    From Art and Fear:

    In talking about how hard artists work, I am reminded of the story about the man who asked a Chinese artist to draw a rooster for him and was told to come back in a month. When he returned, the artist drew a fabulous rooster in a few minutes. The man objected to paying a large sum of money for something that took so little time, whereupon the artist opened the door to the next room in which there were hundreds of drawings of roosters.

    Sometimes folks, non-runners and runners alike, comment on how they can't run as far or as fast (hah!) as I do. There are days when I wish that I could open a door to let the person see a room full of runs like the one I had this morning: hard work, pushing at my limits, finishing with nothing left, but still short of the goal I've been working toward.

    Folks who make this sort of comment almost always mean well, intending a compliment. Often, though, I think that the unstated implication is that they couldn't do what I do even if they tried, that runners have some innate ability that sets them apart from everyone else. Now, I don't doubt at all that some people have innate physical abilities that give them some advantage at running. The very best -- the guys who run 9.9s in the 100m, or 2:10 marathons -- almost certainly have gifts. But I am not a gifted runner, other than having the good fortune to not injure easily or get sick very often.

    And let's not forget how hard those 9.9s sprinters and 2:10 marathoners have to work in order to reach and maintain their level of excellence.

    Richard Gabriel often says that "talent determines only how fast you get good, not how good you get". Good poems, good art, and good runs are made by ordinary people. Art and Fear says this: "Even talent is rarely distinguishable, over the long run, from perseverance and lots of hard work."

    That is good news for runners like me. Maybe, if I keep working, I'll reach my speed goal next week, or the week after that.

    It's also good news for programmers like me.

    Computer science students should take this idea to heart, especially when they are struggling. Just keep working at it. Write programs every day. Learn a new something every day. You'll get there.


    Posted by Eugene Wallingford | Permalink | Categories: Running, Software Development, Teaching and Learning

    November 17, 2004 6:55 PM

    Balancing Confidence and Challenge, Anxiety and Boredom

    Just something rolling around my mind today...

    During my daughter's violin lesson earlier this afternoon, I was browsing a pamphlet on the Suzuki method for learning music titled "The Power of Simplicity". I've forgotten the author's name already... The purpose of the pamphlet is to support the ideas that make up the Suzuki method with references to research in psychology and education, as well as writings from philosophers and master musicians.

    I didn't get very far in thirty minutes, but the first chapter jogged my mind with its discussion of the Suzuki repertoire's design. For those of you who are unfamiliar, the technical idea of the Suzuki approach is a step-by-step mastery of specific skills via the mastery of a carefully arranged progression of musical pieces, some written specifically for the curriculum. As faithful readers know, practice and mastery have been on my mind lately. But the thing that grabbed my attention today was a reference to "the problem of the match", a phrase from educational psychology that refers to matching the elements of a curriculum to the needs of the learner. The author wasn't so much concerned with the technical details of the match (say, fingerings or chords) as with the learner's experience -- the balance between the learner's confidence and the challenge of the current piece.

    This brought to mind something Alan Kay talked about at OOPSLA, in reference to designing learning environments for children. Kay said that we should consciously seek to widen the path of flow for learners, between anxiety and boredom. My experience with the Suzuki piano literature is that it does a remarkable job of balancing confidence with challenge, of navigating between anxiety and boredom. It does so in many ways: by introducing only one or two new skill elements at a time, by repeating skill elements in subsequent pieces for mastery through repetition, by occasionally dropping an "easy" piece into the curriculum to let the learner bask in confidence for a while, and so on.

    Many people, including Suzuki, Montessori, and Kay, have pointed out that this idea is essential when supporting children as learners. But it is just as important when working with more mature learners, including college students. Some computer science educators have written about Bloom's taxonomy as it applies to CS 1, and I've heard colleagues make arguments about what is and isn't appropriate for first-year courses based on the abstraction capabilities of typical 18- and 19-year-olds. Too often, though, our courses follow a path accreted over many years of programming language changes, textbook revisions, and little additions (and few deletions!) to our lecture notes.

    I've grown increasingly dissatisfied with my Computer Science II curriculum over the last few years. I can see now that one source of my dissatisfaction is the mismatch between what I ask students to do and what they know when I ask them to do it. Balancing anxiety against challenge is hard. My tendency is toward challenge, which often results in high anxiety and low confidence.

    I've made small changes to the course every semester, and a year or so ago I refactored the course a bit more substantially. But course offerings last for a semester, and with iterations that long refactoring works at a glacial rate. Besides, several Big Ideas have been taking root in my mind lately, and I want to redesign the course from the ground up to incorporate them. Given that opportunity, I want to design the course -- the concepts we cover, their order, my examples, my programming assignments, the whole bit -- taking my students' "flow" into account.

    Talk about a task that gives rise to anxiety. One way that I hope to alleviate my own fear is to develop the course in an agile way. But I want to begin the course with plenty of raw material that will allow me to respond to what I learn nimbly.

    I am glad to have stumbled across that humble little Suzuki pamphlet today, and to have been reminded of Alan Kay's discussion of flow. You never know where a little reading might lead your mind...


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    November 13, 2004 9:03 AM

    Agile Courses

    Continuing with our theme of trying new things in the classroom, I smiled when I read Kevin Rutherford's story about giving a lecture on agile software development as an agile presentation project. I had considered a similar idea while planning for my agile software development course: periodically stop the course and have the students help me set the direction for the next 'iteration'. Indeed, a couple of years ago, I outlined a paper that I wanted to write about applying XP practices to the teaching of a course: I could imagine a spike solution, and using the planning game to steer the course. What would it be like to teach 'test-first', with merciless refactoring? How would the other XP disciplines map onto how I teach, and how my students learn?

    As with many of my wilder ideas, I have not yet followed through. It's heartening to know that someone like Kevin has tried this idea out and found it workable on a smaller scale.

    Running a whole course in this way promises some interesting benefits and raises some potential problems. By allowing students to help steer, it may help to keep them more engaged with the material they are learning. The most concrete benefit might be the periodic feedback the instructor receives. Even if only the professor drives the course, at least he would be able to do so with some knowledge of what the students think about what they have learned.

    One of the potential problems lies in the fact that students are exactly like the clients in an agile software project. First, students typically don't know all that much about the content of the courses they take. I've occasionally asked students questions at the beginning of a semester to elicit ideas for course direction, and I have found them relatively unaware. But why should they be? At the beginning of a course on algorithms, they have no reason to know all that much about what an algorithms course should or could be like, or even what kinds of algorithms there are.

    But this problem can be addressed in a reasonable way, and Kevin's article shows how: Start the course with a few pre-written story cards that form the basis of the first iteration. That way, the instructor can lay out some foundation material, present a broad view of the course material, and give the students some ideas about where the course can go. The key to doing the course in an agile way is to keep the number of pre-written cards to a minimum, so that you can get into client interaction as soon as possible. Selecting good starting stories would become an important teaching skill.

    The second problem is that students are not the only clients of the course. Many different people hold a stake in your courses, including:

    • other courses in the curriculum, especially those for which your course is a prerequisite
    • other professors, who will work with your students
    • the profession that will employ them
    • society at large, to which your students will contribute
    (The parents who pay for their children's education may want to be on the list, too!) I teach at a state-supported university and so feel a strong obligation to uphold the public trust for the state of Iowa.

    This problem, too, can be addressed in a manner similar to the first problem. The instructor can inject a few essential stories into the course, spread across the several iterations that make up the semester. These stories can ensure that the course covers material essential for courses downstream and for other stakeholder expectations. Again, it is essential to keep the number of such stories to a minimum. Academics are notorious for thinking that everything is required material, fundamental to the students' futures. (See the state of CS 1 courses around the US...) One side effect of this style of course planning might be to force the instructor to make some tough choices about what really is essential, and then get out of the way and let students help drive. We all might be surprised by the results.

    Even with the instructor seeding each iteration with a story or two, the students would play a role in ordering material and choosing the direction of the course. If nothing else, this would give the students a sense of ownership and let them take more responsibility for their own learning.

    Of course, a lower-risk experiment with this idea would be to do more what Kevin did, using it to run a single lecture period or a week-long unit.

    This sounds like a lot of fun, and maybe a better way to run a course. Spring semester isn't too far away...


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    November 12, 2004 10:26 AM

    Fixin' What's Broke

    On Wednesday I blogged about helping students succeed more by cutting their projects in half. I haven't tried that yet, but I did take a small step in that spirit, toward helping students learn in a different way than usual.

    In my agile development course, we've been reading The Pragmatic Programmer, and recently discussed the idea that professionals are masters of their tools. I've been encouraging students in the course to broaden their skill sets, including both tools specific to the agile arena, such as JUnit, and tools more general to their professional needs, such as a command shell, an editor, and a couple of programming languages. But I find that most students (have to) focus so much on the content of their programming assignments that they don't attend much to their practices.

    So, I made their latest assignment be specifically to focus on skills. They will use a new tool (a testing framework other than JUnit) to work with a program they've already written and to write a very simple program. At the same time, they are to choose a particular skill they want to develop -- say, to learn emacs or to finally learn how to use a Linux command line -- and do it. With almost no content to the assignment, at least not new content, I hope that students will feel they have the time and permission to focus on developing their skills.

    I am reminded of one of my favorite piano books, Playing the Piano for Pleasure, by Charles Cooke (Simon and Schuster, 1941). Cooke was a writer by trade but an ardent amateur pianist, and he used his writing skills to share some of his secrets for studying and practicing piano. Among other things, Cooke suggested a technique in which he would choose his weakest part of a piece, what he calls a 'fracture', and then practice it with such intensity and repetition that it becomes one of his strongest. He likened this to a bone healing after a fracture, when the newly knitted bone exceeds the strength of the bone around it.

    I try to have this mindset when working on my professional skill set. And I'd certainly like my students to grow toward such a mentality as they develop into software professionals and happy adults.

    I hope that some of the students working on my new assignment will attack their own weakest areas and turn them into strengths or, at the least, grow to the point that they no longer have to fear the weakness. Overcoming even a little fear can help a student move a long way toward being a confident and successful programmer.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    November 10, 2004 10:46 AM

    An Instant Recipe for Success -- for Students and Professors?

    I liked Incipient Thought's recipe for project success yesterday.

    It starts with T. J. Watson's well-known:

    If you want to increase your success rate, double your failure rate.

    ... to which he adds a second part:

    If you want to double your failure rate, all you have to do is halve your project length.

    Then, winking, he points out that the second step may fail, because shorter projects may actually succeed more often!

    I think this is wonderful software development advice.

    As a teacher, I have been thinking about how to bring this idea to my courses and to my students.

    I always encourage my students to approach their course projects in this way: take lots of small steps doing the simplest thing possible, writing test code to verify success at each step, and refactoring frequently to improve their design. By taking small steps, they should feel confident that whatever they've done so far actually works. And, when it doesn't, they haven't gone too far astray, because they only worked on a little bit of specification. Debugging can often be localized to the most recently added code.

    Breaking a complex but unitary spec ("implement a program to play MasterMind") down into smaller stories is hard, especially for freshmen and sophomores. Even my upper division students sometimes have difficulty breaking requirements that are smaller but still too big into even smaller steps.
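    To make the small-steps idea concrete, here is a minimal sketch of one such step, using the MasterMind spec as the running example. The story is tiny -- "count the exact-position matches in a guess" -- and it is verified immediately before moving on. The class and method names here are hypothetical, purely for illustration.

    ```java
    public class SmallStepDemo {
        // One small story: how many pegs in the guess match the secret
        // code in both color and position?
        static int exactMatches(String secret, String guess) {
            int count = 0;
            for (int i = 0; i < secret.length(); i++) {
                if (secret.charAt(i) == guess.charAt(i)) count++;
            }
            return count;
        }

        public static void main(String[] args) {
            // The "test code" for this step: verify it works before
            // taking the next step (say, counting color-only matches).
            if (exactMatches("RGBY", "RGYB") != 2) {
                throw new AssertionError("step failed");
            }
            System.out.println(exactMatches("RGBY", "RGYB"));
        }
    }
    ```

    If this check fails, the bug almost certainly lives in the few lines just written, which is the payoff of working in small steps.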

    In recent semesters, I've tried to help by writing my assignments as a list of small requirements or user stories. My second-semester students are currently implementing a simple text-based voice mail system, which I specified as a sequence of individual requirements. Sometimes, I even specify that we will grade assignments story-by-story and stop as soon as the program "fails" a story.

    This approach sounds great, but it is out of step with the student culture these days. Taking small steps, and especially refactoring, requires having a certain amount of time to go through the process, with some reflection at each step. Most students are inclined to start every project a few days or less before it's due, at which point they feel a need to "just do it". I've tried to encourage them not to postpone starting -- don't all professors face this problem? -- mostly to no avail.

    In my Agile Software Development course, we've been doing 2- and 3-week releases, and I think we've had some success with doing three week-long iterations, with full submission of the project, within each release. Even still, word in the class is that many folks start each iteration a day or two or three before it's due.

    Maybe I should take the "recipe for project success" advice for them... I could assign two half-week assignments instead of a one-week assignment! Students would have twice as many opportunities to succeed, or to fail and learn.

    One of the downsides of this idea for me is grading. I don't like to grade and, while I am usually (but not always!) timely in getting assignments back to students, I use enough mental energy grading for three courses as it is. I could automate more of the grading, using JUnit, say, to run tests against the code. But to the extent that I need to look at their code, this approach requires some new thinking about grading.
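    The story-by-story grading scheme mentioned above could be automated along these lines. This is only a sketch, not real JUnit: it runs the checks for each story in the order they were assigned and stops at the first failure, mirroring the "grade until the program fails a story" policy. The story names and checks are hypothetical placeholders.

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.function.BooleanSupplier;

    public class StoryGrader {
        public static void main(String[] args) {
            // Stories in assignment order; each maps to a pass/fail check.
            Map<String, BooleanSupplier> stories = new LinkedHashMap<>();
            stories.put("Story 1: record a message", () -> 1 + 1 == 2);
            stories.put("Story 2: play back messages", () -> "vm".length() == 2);
            stories.put("Story 3: delete a message", () -> false); // fails here

            int passed = 0;
            for (Map.Entry<String, BooleanSupplier> story : stories.entrySet()) {
                if (!story.getValue().getAsBoolean()) {
                    System.out.println("FAILED at " + story.getKey());
                    break; // stop grading at the first failed story
                }
                passed++;
            }
            System.out.println("Stories passed: " + passed);
        }
    }
    ```

    In practice each BooleanSupplier would wrap a real test against the student's submission, but the stop-at-first-failure structure is the point.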

    One of the downsides of this idea for my students is the old time issue. With 15 hours of class and 30 hours at work and whatever time they spend in the rest of their lives, students have a tough time working steadily on each course throughout each week. Throw in exams and projects and presentations during special times in the semester, and the problem gets worse.

    But I can't shake the feeling that there is something deeply right about halving the size of every project. I may have to "just do it" myself and face whatever difficulties arise as they do.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    November 06, 2004 9:03 PM

    Alan Kay's Talks at OOPSLA

    Alan Kay gave two talks at OOPSLA last week, the keynote address at the Educators Symposium and, of course, his Turing Award lecture. The former was longer and included most of the material from the Turing lecture, especially when you consider his generous question-and-answer session afterwards, so I'll just comment on the talks as a single presentation. That works because they argued for a common thesis: introductions to computing should use simple yet powerful computational ideas that empower the learner to explore the larger world of ideas.

    Kay opened by decrying the premature academization of computing. He pointed out that Alan Perlis coined the term "computer science" as a goal to pursue, not as a label for existing practice, and that the term "software engineering" came with a similar motivation. But CS quickly ossified into a discipline with practices and standards that limit intellectual and technical discovery.

    Computing as science is still an infant. Mathematicians work in a well-defined space, creating small proofs about infinite things. Computing doesn't work that way: our proofs are programs, and they are large, sometimes exceedingly so. Kay gave a delightful quote from Donald Knuth, circa 1977:

    Beware of bugs in the above code; I have only proved it correct, not tried it.

    The proof of our programs lies in what they do.

    As engineering, computing is similarly young. Kay contrasted the construction of the Great Pyramid by 200,000 workers over 20 years with the construction of the Empire State Building by fewer than 3,000 people in less than 11 months. He asserts that we are somewhere in the early Middle Ages on that metaphorical timeline. What should we be doing to advance? Well, consider that the builders of the Pyramid used hand tools and relatively weak ideas about building, whereas the engineers who made the Empire State Building used power tools and even more powerful ideas. So we should be creating better tools and ideas. (He then made a thinly veiled reference to VisualStudio.NET 2005, announced at OOPSLA, as just another set of hand tools.)

    So, as a science and professional discipline in its youth, what should computing be to people who learn it? The worst thing we can do is to teach computer science as if it already exists. We can afford to be humble. Teach students that much remains to be done, that they have to help us invent CS, that we expect them to do it!

    Kay reminded us that what students learn first will have a huge effect on what they think, on how they think about computing. He likened the effect to that of duck imprinting, the process in which ducklings latch onto whatever object interacts with them in the first two days of their lives -- even if the object is a human. Teaching university students as we do today, we imprint in them that computing is about arcane syntax, data types, tricky little algorithms, and endless hours spent in front of a text editor and compiler. It's a wonder that anyone wants to learn computing.

    So what can we do instead? I ran into some interesting ideas on this topic at OOPSLA even before the Educators' Symposium and had a few of my own during the week. I'll be blogging on this soon.

    Alan has an idea of the general direction in which we should aim, too. This direction requires new tools and curricula designed specifically to introduce novices to the beauty of computing.

    He held up as a model for us Frank Oppenheimer's Exploratorium, "a collage of over 650 science, art, and human perception exhibits." These "exhibits" aren't the dead sort we find in most museums, where passive patrons merely view an artifact, though. They are stations with projects and activities where children can come face to face with the first simple idea of science: The world is not always as it seems. Why are there so many different exhibits? Kay quoted Oppenheimer to the effect that, if only we can bring each student into contact with that one project or idea that speaks directly to her heart, then we will have succeeded.

    Notice that, with that many projects available, a teacher does not have to "assign" particular projects to students at a particular point in time. Students can choose to do something that motivates them. Kay likened this to reading the classics. He acknowledged that he has read most of the classics now, but he didn't read them in school when they were assigned to him. Back then, he read other stuff, if only because he had chosen it for himself. One advantage of students reading what they want is that a classroom will be filled with people who have read different things, which allows you to have an interesting conversation about ideas, rather than about "the book".

    What are the 650 examples or projects that we need to light a fire in every college student's heart? Every high schooler? Every elementary school child?

    Kay went on to say that we should not teach dumbed-down versions of what experts know. That material is unnecessarily abstract, refined, and distant from human experience. Our goal shouldn't be to train a future professional computer scientist (whatever that is!) anyway. Those folks will follow naturally from a population that has a deep literacy in the ideas of science and math, computing and communication.

    Here, he pointed to the typical first course in the English department at most universities. Such courses do not set out to create professional writers or even professional readers. Instead, they focus on "big ideas" and how we can represent and think about them using language. Alan thinks introductory computer science should be like this, too, about big ideas and how to represent and think about them in language. Instead, our courses are "driver's ed", with no big ideas and no excitement. They are simply a bastardization of academic computer science.

    What are the big ideas we should introduce to students? What should we teach them about language in order that they might represent ideas, think about them, and even have ideas of their own?

    Alan spent quite a few minutes talking about his first big inspiration in the world of computing: Ivan Sutherland's Sketchpad. It was in Sketchpad that Kay first realized that computing was fundamentally a dynamic medium for expressing new ideas. He hailed Sutherland's work as "the greatest Ph.D. dissertation in computer science of all time", and delighted in pointing out Sketchpad's two-handed user interface ("the way all UIs should be"). In an oft-told but deservedly repeated anecdote, Kay related how he once asked Sutherland how he could have created so many new things -- the first raster graphics system, the first object-oriented system, the first drawing program, and more -- in a single year, working alone, in machine language. Sutherland replied, "... because I didn't know it was hard".

    One lesson I take from this example is that we should take care in what we show students while they are learning. If they see examples that are so complex that they can't conceive of building them, then they lose interest -- and we lose a powerful motivator.

    Sutherland's dissertation includes the line, "It is to be hoped that future work will far surpass this effort". Alan says we haven't.

    Eventually, Kay's talk got around to showing off some of the work he and his crew have been doing at Squeakland, a science, math, and computing curriculum project built on top of the modern, open source version of Smalltalk, Squeak. One of the key messages running through all of this work can be found in a story he told about how, in his youth, he used to take apart an old Model T Ford on the weekend so that he could figure out how it worked. By the end of the weekend, he could put it back together in running condition. We should strive to give our students the same experience in the computing environments they use: Even if there's a Ferrari down there, the first thing you see when you open the hood is the Model T version -- first-order theories that, even if they throw some advanced ideas away, expose the central ideas that students need to know.

    Alan demoed a sequence of increasingly sophisticated examples implemented and extended by the 5th- and 6th-grade students in B.J. Conn's charter school classes. The demos in the Educators' Symposium keynote were incredible in their depth. I can't do them justice here. The best you can do is to check out the film Squeakers, and even that has only a small subset of what we saw in Vancouver. We were truly blessed that day!

    The theme running through the demos was how students can explore the world in their own experience, and learn powerful ideas at the "hot spot" where math and science intersect. Students can get the idea behind tensor calculus long before they can appreciate the abstractions we usually think of as tensor calculus. In the course of writing increasingly complex scripts to drive simulations of things they see in the world, students come to understand the ideas of the variable, feedback, choice, repetition, .... They do so by exposing them in action, not in the abstract.

    The key is that students learn because they are having fun exploring questions that matter to them. Sometime along in here, Kay uttered what was for me the Quote of the Day, no, the Quote of OOPSLA 2004:

    If you don't read for fun, you won't be fluent enough to read for purpose.

    I experience this every day when interacting with university students. Substitute 'compute' or 'program' for 'read', and you will have stated a central truth of undergraduate CS education.

    As noted above, Kay has a strong preference for simple, powerful ideas over complex ideas. He devoted a part of his talk to the Hubris of Complexity, which he believes long ago seduced most folks in computing. Software people tend to favor the joy of complexity, yet we should strive for the joy of simplicity.

    Kay gave several examples of small "kernels" that have changed the world, which all people should know and appreciate. Maxwell's equations were one. Perhaps in honor of the upcoming U.S. elections, he spent some time talking about the U.S. Constitution as one such kernel. You can hold it in the palm of your hand, yet it thrives still after more than two centuries. It is an example of great system design -- it's not law-based or case-based, but principle-based. The Founding Fathers created a kernel of ideas that remains not only relevant but also effective.

    I learned a lot hearing Alan tell some of the history of objects and OOP. In the 1960s, an "object" was simply a data structure, especially one containing pointers. This usage predates object-oriented programming. Alan said that his key insight was that an object could act as a miniature software computer -- not just a data structure, not just a procedure -- and that software scales to any level of expression.

    He also reminded us of something he has said repeatedly in recent years: Object-oriented programming is about messages, not the objects. We worry about the objects, but it's the messages that matter.

    How do we make messages the centerpiece of our introductory courses in computing?

    Periodically throughout the talk, Alan dropped in small hints about programming languages and features. He said that programming language design has a large UI component that we technical folks sometimes forget. A particular example he mentioned was inheritance. While inheritance is an essential part of most OOP, Alan said that students should not encounter it very soon in their education, because it "doesn't pay for the complexity it creates".

    As we design languages and environments for beginners, we can apply lessons from Mihaly Csikszentmihalyi's idea of "flow". Our goal should be to widen the path of flow for learners. One way to do that is to add safety to the language so that learners do not become anxious. Another is to create attention centers to push away potential boredom.

    Kay's talks were full of little nuggets that I jotted down and wanted to share:

    • He thanked many people for participating in the creation of his body of work, sharing with them his honors. Most notably, he thanked Dan Ingalls "for creating Smalltalk; I only did the math". I have often heard Alan single out Ingalls for his seminal contributions. That makes a powerful statement about the power of teams.

    • Throughout the talk, Alan acknowledged many books and ideas that affected him. I couldn't get down all of the recommended reading, but I did record a few:
      • Bruce Alberts, "Molecular Biology of the Cell"
      • Lewis Carroll Epstein, "Relativity Visualized"
      • James D. Watson, "Molecular Biology of the Gene"

    • Clifford Shaw's JOSS was the most beautiful programming language ever created. I've heard of JOSS but now have to go learn more.

    • Whatever you do, ask yourself, "What's the science of it?" You can replace 'science' with many other words to create other important questions: art, math, computing, civilization.

    • "In every class of thirty 11-year-olds, there's usually one Galileo kid." What can we do to give him or her the tools needed to redefine the world?

    Alan closed both talks on an inspirational note, to wrap up the inspiration of what he had already said and shown us. He told us that Xerox PARC was so successful not because the people were smart, but because they had big ideas and had the inclination to pursue them. They pursued their ideas simple-mindedly. Each time they built something new, they asked themselves, "What does the complexity in our system buy us?" If it didn't buy enough, they strove to make the thing simpler.

    People love to quote Alan's most famous line, "The best way to predict the future is to invent it." I leave you today with the lines that follow this adage in Alan's paper "Inventing the Future" (which appears in The AI Business: The Commercial Uses of Artificial Intelligence, edited by Patrick Henry Winston and Karen Prendergast). They tell us what Alan wants us all to remember: that the future is in our hands.

    The future is not laid out on a track. It is something that we can decide, and to the extent that we do not violate any known laws of the universe, we can probably make it work the way that we want to.

    -- Alan Kay, 1984

    Guys like Alan set a high bar for us. But as I noted last time, we have something of a responsibility to set high goals when it comes to computing. We are creating the medium that people of the future -- and today? -- will use to create the next Renaissance.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    November 03, 2004 3:47 PM

    Other Folks Comment on OOPSLA

    Some other folks have blogged on their experiences at OOPSLA last week. These two caught my attention, for different reasons.

    The trade-off between power and complexity in language

    Nat Pryce writes about stopping in at the SeaSide BoF. SeaSide is a web app framework written in Smalltalk. Nat marvels at its elegance and points out:

    SeaSide is able to present such a simple API because it takes advantage of "esoteric" features of its implementation language/platform, Smalltalk, such as continuations and closures. Java, in comparison to Smalltalk, is designed on the assumption that programmers using the Java language are not skilled enough to use such language features without making a mess of things, and so the language should only contain simple features to avoid confusing our poor little heads. Paradoxically, the result is that Java APIs are overly complex, and our poor little heads get confused anyway. SeaSide is a good demonstration that powerful, if complex, language features make the job of everyday programming easier, not harder, by letting API designers create elegant abstractions that hide the complexity of the problem domain and technical solution.

    There is indeed great irony in how choosing the wrong kind of simplicity in a language leads to unnecessary complexity in the APIs and systems written in the language. I don't have an opportunity to teach students Smalltalk much these days, but I always hope that they will experience a similar epiphany when programming in Scheme.
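    The flavor of what continuations buy SeaSide can be suggested with a hypothetical sketch -- in Python here, for readers who don't know Smalltalk; this is not SeaSide's API. A generator plays the role of a continuation: each yield suspends a multi-step web flow at a page, so the whole interaction reads as straight-line code rather than scattered handler callbacks.

    ```python
    def checkout_flow():
        # Each 'yield' suspends the flow at a page, the way SeaSide suspends
        # a continuation; the framework resumes it with the user's response.
        name = yield "What is your name?"
        qty = yield f"How many items, {name}?"
        yield f"Order confirmed: {qty} item(s) for {name}."

    def run(flow, responses):
        """A minimal stand-in 'framework' that drives a flow with canned responses."""
        pages = [flow.send(None)]           # start the flow; get the first page
        for r in responses:
            try:
                pages.append(flow.send(r))  # resume the suspended flow
            except StopIteration:
                break
        return pages

    pages = run(checkout_flow(), ["Ada", 3])
    ```

    The point is the shape of checkout_flow: the control logic of a three-page interaction lives in one readable function, which is roughly the elegance Nat saw in SeaSide.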

    Not surprisingly, Alan Kay has a lot to say on this topic of simplicity, complexity, and thinking computationally, too. I hope to post my take on Alan's two OOPSLA talks later this week.

    Making Software in a Joyous World

    You gotta love a blog posting subtitled with a line from a John Mellencamp song.

    Brian Marick writes about three talks that he heard on the first day at OOPSLA. He concludes wistfully:

    I wish the genial humanists like Ward [Cunningham] and the obsessive visionaries like Alan Kay had more influence [in the computing world]. I worry that the adolescence of computers is almost over, and that we're settling into that stagnant adulthood where you just plod on in the world as others made it, occasionally wistfully remembering the time when you thought endless possibility was all around you.

    In the world where Ward and Alan live, people use computing to make lives better. People don't just consume ideas; they also produce them. Ward's talk described how programmers can change their world and the world of their colleagues by looking for opportunities to learn from experience and creating tools that empower programmers and users. Alan's talk urged us to take that vision out into everyone's world, where computers can serve as a new kind of medium for expressing new kinds of ideas -- for everyone, not just software folks.

    These are high goals to which we can aspire. And if we don't, then who else will?


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

    October 30, 2004 11:44 AM

    FrameGames!

    Steve Metsker and Bill Wake led a fun games session at the Educators Symposium last week. Not only did we play four games, but Steve and Bill taught us a way to generate more games of the same sort. The games we played were instantiations of what they call frames, which work a lot like frameworks in object-oriented programming. Each frame is a partially-defined game, with the primary control rules built into the frame. To create a new game, you simply fill in the missing content on which the rules act. You can, of course, specialize or override one or more of the rules if that best suits your context.

    As an example, consider the first frame we learned, Best Fit, which is adapted from the game Apples to Apples. Apparently, Apples to Apples has been a popular party game in recent years, though I hadn't heard of it. In Best Fit, the game master sets up the game by creating a set of five adjectives, say, "confusing", "underrated", "specialized", "inspiring", and "weird". Players break off into groups of five or so. They create twenty noun cards, say, the names of twenty famous computer scientists. (Can you name 20? That seemed to be a challenge for some folks at the symposium!) The name cards are shuffled and dealt to the players.

    Play proceeds as a sequence of 'tricks' in which one of the adjective cards is revealed and each player plays the card from his or her hand that best fits the adjective. Players take turns sitting out of tricks and acting as judge -- which player has played the noun that best fits the adjective in play? Variations include allowing lobbying or not and disqualifying the last card played from judging (in order to encourage rapid recognition and a little excitement, I presume). Winning and losing the game is less important than the thinking that goes into playing nouns and judging fit. Though we didn't discuss debriefing the game, I imagine that doing so would give the class as a whole a chance to explore the content of the game at a deeper level.

    From the teacher's perspective, the idea behind the frame is that the adjectives and nouns can be just about anything. We played a second instance in which the nouns were algorithms and the adjectives were big-oh classes and other characterizations of algorithms. Creating a new game is as easy as choosing suitable nouns and adjectives from any area of study.
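    Since a frame behaves like an instantiable class, the analogy can be made literal with a small sketch (hypothetical Python, not from Steve and Bill's materials): the frame fixes the control rules once, and a new game supplies only the content.

    ```python
    import random

    class BestFitFrame:
        """A 'frame': the control rules live here; instantiating it with
        new adjectives and nouns yields a new game."""
        def __init__(self, adjectives, nouns):
            self.adjectives = list(adjectives)
            self.nouns = list(nouns)

        def play_trick(self, judge):
            # One trick: reveal an adjective, then let the judge pick the
            # best-fitting noun. 'judge' is any function (adjective, nouns) -> noun.
            adjective = random.choice(self.adjectives)
            return adjective, judge(adjective, self.nouns)

    # A new game is just new content poured into the same frame.
    game = BestFitFrame(
        adjectives=["confusing", "underrated", "inspiring"],
        nouns=["Dijkstra", "Kay", "Sutherland", "Hopper"],
    )
    adjective, winner = game.play_trick(judge=lambda adj, nouns: nouns[0])
    ```

    Overriding play_trick in a subclass corresponds to specializing one of the frame's rules for a particular context.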

    The second frame we learned is Envelopes, attributed to a fellow named Thiagi. In this game, three groups of five players or so each compete to produce the best answer to a question written on the outside of an envelope. The game master prepares three envelopes, each with a different question on the outside. In round one, each team is given an envelope, drafts an answer to its question, writes it on an index card or sheet of paper, and stuffs it into the envelope. For the second round, each team passes its envelope to another group, which does the same thing the first team did, without looking inside the envelope. In the third round, the envelopes are again passed to the group that has not yet seen the question. Instead of answering the question, this time the group takes the two answers from the envelope and judges which one is better.

    Teams score points for having their answers selected as the winner. The game can consist of any number of rounds. Again, debriefing may well add learning value, as players explore why one answer was judged better than another or what the nuances in the question are. And, again from the teacher's perspective, creating new instances of Envelopes is as easy as drafting three questions that require some thought and that can lead to useful discussion of course material. Heck, three good midterm questions might do the job just fine.

    Maybe I will break the tension of one of my classes later this semester with an instance of one of these games.

    Bill said that he will post the slides of their presentation to us on his XPlorations web site soon.


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    October 30, 2004 11:26 AM

    Knowing and Seeing

    Brian Marick's talk on methodology work as ontology work sounds more forbidding than it is. His central point is this: whenever we adopt a particular software development approach, or create a new one (that's the methodology part), we implicitly take on, or promote, a particular way of seeing the world (that's the ontology part). One implication of this is that communication between folks in different methodology camps -- say, XPers and more traditional software engineers -- is often difficult precisely because these folks see the world differently. They can look at the same project or team or testing scenario and see different things. To help such folks communicate better, you must often help them to build a common vocabulary, or at least recognize the name collisions in their individual vocabularies.

    Coming to know is to change how we perceive the world.

    Those of us in the AI community came to learn this up close and personal through the 1970s, '80s, and early '90s. While many of us built useful and usable knowledge-based systems using techniques that codify knowledge in the form of rules or other explicit structures, we soon learned the limits of these approaches. Experts don't have rules for doing their job; they just do it. The same is true for everyday knowledge. You don't have an algorithm for determining whether you are tired. You merely know it.

    Merely. In learning, you lose the rules and learn to see something new.

    Another implication of this view is that we can often use perception to drive behavior better than rule-based prescriptions. Brian gave two examples:

    • Big Visible Charts are a better way for a team to know the bugs that need to be fixed than annotations in code like "TODO" comments.
    • JUnit's red bar/green bar metaphor is better than just a numeric output on a list.

    As always, I liked Brian's talk for his broad scholarship and his skill at making connections. For example, I learned that scientists like research programs:

    • that have a "hard core", a small, tightly connected set of assumptions. This point echoed Alan Kay's love of "kernels" such as Maxwell's equations.
    • that work out consequences of their theories and 'ignore' counterexamples as potential refutation. That sounds counter to what we often learn about the falsifiability of scientific theories, but it is really a sign of a deep commitment to the simple idea that underlies a theory.
    • that make novel or surprising predictions which are subsequently confirmed. One such confirmation can often trump dozens of little counterexamples based on the details of a theory's explication.

    We have a lot to learn from the philosophy of science.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    October 26, 2004 10:41 AM

    Educators' Symposium Success

    The Educators' Symposium is over. I'll have to think deeply about Alan Kay's talk and the follow-up panel to say much on the central challenges posed by the day, but I can at least say that the day went pretty well.

    Alan's talk went longer than planned, but no one complained. A few questions then took us another half-hour over schedule. We made some adjustments in the line-up, tightened up the talk slots, and things fell into place.

    The technical papers were stronger than in some past years. We received many strong submissions, and the papers that made the cut held some important ideas for the audience. Some -- like teaching interfaces before inheritance, by Axel Schmolitzky, and teaching collections before arrays, by Adrienne Decker -- are small steps, for sure, but steps that many folks still need encouragement to take. Others, like how teaching event-driven programming first can improve our coverage even of traditional procedural topics such as for loops, by Kim Bruce, are less obvious jumps that some folks are taking. They show that the universe of choices is larger than you might think.

    The FrameGames! activity by Steve Metsker and Bill Wake was intense but fun. It showed how teachers can use simple game shells (like instantiable classes) to create interactive class sessions with relatively little fuss. I don't yet use games of this sort in my classes very often, but I should probably try them out sometime. Students might appreciate the chance to take a break from my intense pace in class. You will be able to find notes on the games at Bill Wake's web site within the week.

    The closing session of "sound bytes", one-minute ideas for improving instruction, started slowly but picked up. Unfortunately, it spun a bit off-topic at the end, with lots of 'me-too's added in. This was the first time Joe Bergin had tried this idea live, and it showed some promise. To be honest, though, the result of this activity is a common phenomenon at Educators' Symposia: open mike time at the end often doesn't live up to its promise, because people are both tired and overstimulated with ideas from the long day. But sometimes people just need a chance to speak, and they miss it if you take it away.

    So, after all the work I and my committee put in this year, the symposium came off a success.

    My strengths showed up in the planning: well organized, ambitious but not too much so, and connected to folks with good ideas and a willingness to serve.

    And my weaknesses showed through, too, though more in the execution. For example, I get so into the moments of the workshop that I don't do a very good job setting up beginnings and ends, such as wrapping things up gracefully and getting the next scheduled item off promptly enough. I have big plans in my mind, and may even prepare to execute them, but then I lose steam at the end.

    I've been asked to chair the Educators Symposium again in 2005. I know now how to do some of the little things better next time, but can I do the big things as well or better? Do I have an idea of where we should take the next Symposium? I'm too tired tonight (Monday night) to say for certain. But maybe -- if I can get the right folks to help me again. San Diego is awfully nice this time of year.


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    October 22, 2004 9:16 AM

    Dijkstra on Academic Conservatism

    A few different threads at work this week have me thinking again of Edsger Dijkstra's Turing Award lecture, "The Humble Programmer". (I first blogged on this back in July.)

    The primary trigger was a discussion yesterday of a grant proposal two of my colleagues were preparing. They proposed to teach our first-year courses using a studio approach inspired by the arts and architecture. The essence of the software studio is the concept of reflective practice. Students and instructors work on projects in a common space, with continuous interaction and continuous questioning. Students build things, critique their work and the work of others, and defend their work. A major goal of this approach is to help students develop the analytic and the artistic skills that are the hallmark of the professions. Ideally, students learn how to build, innovate, and invent.

    That is my own summary of what a software studio might do, not my colleagues'. You see, I've been interested in this approach for my whole career as a CS professor. My interest goes back to my time as an undergrad architecture major in the wonderful B.Arch. program at Ball State University. I began using a studio approach in my senior-level Intelligent Systems course back in 1995. (You can read more of my thoughts in the most recent incarnation of that course.) I've used the idea in other courses, too, though not in my freshman course.

    So I was quite excited that my colleagues were considering a change in how we teach introductory software development. Studio approaches work in other disciplines, and I have seen the results. Students become used to the give-and-take of constant evaluation, and they do become more deliberate creators. By spending time in a lab rather than a dead classroom, students develop better work habits and more facility with their tools.

    But the reaction of other faculty members was predictable. Should we be experimenting with our students' education? What do we do when this adversely affects retention? Or even recruitment? The questions were many and various. Maybe these questions reflected healthy skepticism, but they felt a little like an attempt to discourage a new idea.

    Frankly, given how poorly we seem to be doing educating a broad range of undergraduates to be effective software developers, I have a hard time thinking change is a bad idea.

    What about all this made me think of Dijkstra? His Turing lecture proposed a new way of thinking about and doing computing, too, and he was aware that some people would reject his proposal:

    As [with] each serious revolution, it will provoke violent opposition and one can ask oneself where to expect the conservative forces trying to counteract such a development. I don't expect them primarily in big business, not even in the computer business; I expect them rather in the educational institutions that provide today's training ...

    What Dijkstra saw in the early 1970s is every bit as true today. Whatever their political reputations, academics are remarkably reactionary when it comes to their disciplines. This may seem strange in a field like computer science, where the technical horizon advances steadily. But how we think about programming, programming languages, and our courses has really changed very little in the last 20 years.

    That's one reason I've had Dijkstra's talk on my mind this week. Discussions of classic AI papers with a graduate student are another. And a third is that I'll be heading to OOPSLA tomorrow...


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    October 04, 2004 5:44 PM

    Pleasant Surprises

    My Agile Software Development class is doing a big course project. The students are creating the beginnings of a content management system for me, with an initial eye toward me writing content for a book and their program generating a web site for the content. I don't know how far we will get this semester, but that is part of the fun of the project.

    Most of our students have spent a year learning Java and a semester learning Ada, so those tend to be the languages in which they want to program. I expected most folks to want to work in Java, given all of the tools that support agile development in that language, starting with JUnit but extending to Eclipse, Ant, and others. (Actually, I knew that a few would probably want to program in C, but tool support for agile approaches in C is still weak. Go, Ale, go!)

    Imagine my surprise when a four-person team asked me if they could use Scheme. They had all learned Scheme in my Programming Languages course, so this wasn't an impossible request. But only rarely does Scheme attract a student's fancy in my course... The language is just so different from what they know that, even if they come to appreciate its power, they usually don't grok how they might use it as a development language on a "real" project. These students had all done well in the Scheme part of Programming Languages, but they hadn't expressed any deep love for the language at the time.

    So my initial reaction was, "Are you jerking my chain?" But they insisted they weren't, that Scheme seemed like the right tool for the job at hand: transforming data from one form to another, with flexible parsing underneath. So I let out a little bit more chain and said, "Well, you'll have to get a unit testing framework so that you can write your tests first..." I had mentioned SchemeUnit in passing earlier in the course but hadn't told them just how nice unit testing can be in a dynamically typed and highly flexible language like Scheme. They said, "No problem."

    They did it. The team submitted the build of its first iteration last Friday. They have three dozen or so tests and three of four domain modules. The code looks good. We'll see what happens by the end of the first "official" release of their code -- two more iterations to go -- but I've graduated from cynical skepticism to guarded optimism.

    This is enough to restore my faith in humanity. To quote ranch owner Clay Stone from one of my favorite movies, City Slickers, I'm as happy as a puppy with two peters.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    September 25, 2004 3:58 PM

    Jobs and Beauty at the University

    My last entry ended with the realization that helping folks, especially students, adopt agile development methods comes down to motivation. That's a much different task than presenting course content objectively and clearly. Often students get it, they just don't get around to doing it.

    I don't have any answers to this puzzle yet, but it reminded me of Tall, Dark, and Mysterious's recent blog on universities and job training. TDM is a math professor in Canada, and she confronts the fact that, while professors and universities often prefer to paint academic life as a pure intellectual journey, most students are at the university to earn a job credential. This creates a motivational gap between what students seek and what their courses sometimes want to teach.

    Computer science occupies a different part of the spectrum from majors like math or history or literature. Many of the skills and tools that we teach have direct application in industry. Courses in programming, software development, databases, and the like all can introduce "practical" material. But in many ways this makes our situation more difficult. I'm guessing that most history and literature majors aren't there for job skills, at least not in the superficial way that, say, a CS student may want to study networking. But when our networking courses go theoretical, students' interest can begin to wander, because they don't see the real value.

    Don't get me wrong -- most of the students I've dealt with have conscientiously continued to work through the theoretical stuff. But I know that, in some sense, they are humoring me.

    I agree with TDM that we should be aware of our students' goals and take them into account when we design our courses. Why not let students know that some of the theory we're studying applies to real problems? In my algorithms course last week, I enjoyed being able to point out how we can encounter the Knapsack Problem in maximizing profits in an Internet-based auction. After seeing the naive brute-force solution to this problem, and connecting the problem to a real-life problem, perhaps students will better appreciate the more complex but efficient algorithms we will study later.
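    That naive brute-force solution can be sketched like this (a Python illustration of the standard approach, not the code from class): enumerate every subset of items and keep the most profitable one that fits.

    ```python
    from itertools import combinations

    def knapsack_brute_force(items, capacity):
        """items: list of (weight, profit) pairs.
        Try every subset; keep the most profitable one that fits.
        O(2^n) time -- fine for tiny inputs, hopeless beyond that."""
        best = (0, ())
        for r in range(len(items) + 1):
            for subset in combinations(items, r):
                weight = sum(w for w, _ in subset)
                profit = sum(p for _, p in subset)
                if weight <= capacity and profit > best[0]:
                    best = (profit, subset)
        return best

    # An auction-flavored toy: weights as resource costs, profits as bids.
    profit, chosen = knapsack_brute_force([(2, 3), (3, 4), (4, 5), (5, 8)], 5)
    ```

    Seeing how quickly the number of subsets explodes is exactly what motivates the more complex but efficient algorithms later in the course.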

    That said, there are some beautiful ideas in computing that don't necessarily have a direct application in current practice, and I also love to have students encounter their beauty. I always hope that, if I do a good job, at least some of the students will appreciate some of those neat ideas for their own sake and realize that the university can be about more than just earning a job credential.


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    September 24, 2004 1:56 PM

    No, Really, Why?

    My message earlier today was just a riff on the quote I began with. I was in an especially "why not?" kind of mood. As I walked to lunch, though, I knew that many of my students, including many of the better ones, would be unfazed by my rhapsody. They have plenty of reasons for resisting the switch to TDD. And those reasons seem quite powerful to them. Let's consider two.

    It takes too much time. Students don't always have the luxury of time when designing, implementing, and debugging an assignment. The program is due in a week or two, and so they spend most of their time working just to write a program that works. Evidence such as "TDD takes 15% longer and results in 30% fewer defects" doesn't provide much motivation to do TDD when students don't think they *have* 15% more time. They'll take their chances with working on what really matters, which is the program. Requiring students to submit their tests and then grading them, too, may motivate them, but I'd like to hear from folks who have tried that before deciding that it really works -- or whether students just view it as an extra burden, an 'unfunded mandate' from the instructor.

    Old habits die hard, if at all. Even if convinced of the value of TDD, many people find the change in habit to be a difficult obstacle to surmount. Changing habits takes discipline, support, and time. Instructors aren't usually with students enough at the times they program to help with the discipline, so our ability to provide support is compromised. When the pressure is on, or when faced with a challenging task, people tend to fall back on what they know best, what feels comfortable -- even if they aren't confident that the old ways will work! As an instructor, I find it most frustrating to watch students fall back on practices they know will fail, but I realize that this is simply human nature. Without changing the students' environment more radically, effecting certain changes of habit will be a hit-or-miss affair.

    Maybe this comes down to the fact that we who teach need to change the way we do things. Give assignments over longer periods, allowing more time for reflection. This sounds good, but ... What about the stuff we can't do because we now don't have time? It's a commonplace that students may be better off learning less content better, with more growing of the mind, but making that change is a difficult obstacle for teachers to surmount.

    In the end, I wonder how much effect such a change would have anyway. Would all the students' newly freed time be sucked up by their other courses, their jobs, and their ordinary lives?

    Ultimately, this all comes down to motivation, and the best motivation comes from inside the learner. Drawing that desire out is a task for which instructors aren't usually well-prepared. We can learn to do a better job of motivating students, but that takes work on our end. And wouldn't we rather just lecture?


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    September 07, 2004 5:24 PM

    Aptitude as Controlling Factor in Learning to Program

    I'm busily preparing to leave town for PLoP, and I haven't had time to write lately. I also haven't had time to clarify a bunch of loose ideas in my head on the topics of aptitude for computing and the implications for introductory CS courses. So I'll write down my jumble as is.

    A couple of months ago, I read E.L. Doctorow's memoir, Reporting The Universe. As I've noted before, I like to read writers, especially accomplished ones, writing about writing, especially how and why they write. One of Doctorow's comments stuck in my mind. He claimed that the aptitude for math and physics (and presumably computing) is rare, but all people are within reach of writing good narrative. For some reason, I didn't want to believe that. Not the part about writing, because I do believe that most folks can learn to write well. I didn't want to believe that most folks are predisposed away from doing computing. I know that it's technical, and that it seems difficult to many. But to me, it's about communicating, too, and I've always held out hope that most folks can learn to write programs, too.

    Then on a mailing list last weekend, this topic came up in the form of what intro CS courses should be like.

    • One person quoted Donald Knuth from his Selected Papers on Computer Science as saying that people are drawn to disciplines where they find people who think as they do, and that only 1-2% of all people have 'brains wired for algorithmic thinking'. The poster said that it is inevitable and natural that many folks will take and be turned off by the geeky things that excite computer scientists. He wants to create a first course that gives students a glimpse of what CS is really like, so that they can decide early whether they want to study the discipline more deeply.

    • Another person turned this issue of geekiness inside out and said that all courses should be designed to help students distinguish beauty from ugliness in the discipline.

    • A third person lamented that attempts to perfect CS 1 as we teach it now may lead to a Pyrrhic victory in which we do a really good job appealing to a vanishingly small audience. He argued that we should go to a higher level of abstraction, re-inventing early CS courses so that economics, political science, and biology majors can learn and appreciate the beauty and the relevance of CS.

    These guys are among the best, if not the best, intro CS teachers in the country, and they make their claims from deep understanding and broad experience. The ideas aren't mutually exclusive, of course, as they talk some about what our courses should be like and about the aptitude that people may have for the discipline. I'm still trying to tease the two apart in my mind. But the aptitude issue is stickier for me right now.

    I'm reminded of something Ralph Johnson said at a PLoP a few years ago. He offered piano teachers as motivation. Perhaps not everyone has the aptitude to be a concert pianist, but piano teachers have several pedagogies that enable them to teach nearly anyone to play piano serviceably. Perhaps we can aim for the same: not all of our students will become masters, but all can become competent programmers, if they show some interest and put in the work required to get better.

    Perhaps aptitude is more of a controlling factor than I have thought. Certainly, I know of more scientists and mathematicians who have enjoyed writing (and well) than humanities folks who enjoy doing calculus or computer programming on the side. But I can't help but think that interest can trump aptitude for all but a few, so long as "merely" competence is the goal.


    Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

    September 03, 2004 4:45 PM

    Can You Turn It In Now?

    Conventional wisdom is that many software projects are completed late and over budget. I don't know whether this is true, or to what extent, but few dispute the claim. Within the software engineering community, this assertion is used as motivation to develop more rigorous methods for building software and managing the process. In the popular press, it is viewed with much head shaking and a bit of disdain. Yet people must continue to buy software.

    We face a similar issue in academia. Many professors accept late work for a few days after an assignment is due. Sometimes they assess a penalty, such as 10% of the available points per day. But compassion usually dictates some degree of flexibility with our young charges.

    My grading policy has always been not to accept late work. I tell students to submit their best available work at the time the assignment is due. I also place a lower bound on the acceptable quality of a submission: If a program doesn't compile, or compiles but blows up horribly when run, then the resulting grade will be quite low. I tell students that, all other things being equal, a compilable, runnable, yet incomplete program is more valuable than a program that doesn't compile or run. It's hard for me to have much confidence in what a student knows or has created when even the compiler balks.

    I'm reasonable enough to make exceptions when events warrant them. Sometimes, extenuating circumstances interfere with a student's opportunity to do the assigned work in a timely fashion. Sometimes a reasonably good, nearly complete program causes an exception in an unusual situation that the student doesn't yet understand. But for the most part, the policy stands. Students long ago stopped questioning this rule of mine, perhaps accepting it as one of my personal quirks. But when deadlines approach, someone will usually ask for a break because with just a little more time...

    Of course, I also encourage students to do short iterations and generate many "small releases" as they write programs. If they work systematically, then they can always be in the position of having a compilable, runnable -- if incomplete -- program to submit at every point in a project. I demonstrate this behavior in class, both in project retrospectives and in my own work at the computer. I don't know how many actually program this way themselves.

    These thoughts came to mind earlier this week when I saw a message from Ron Jeffries to the XP mailing list, which appeared in expanded form in his blog as Good Day to Die. Ron considers the problem of late software delivery in industry and wonders,

    What if the rule was this?

    On the day and dollar specified in the plan, the project will terminate. Whatever it has most recently shipped, if it has shipped anything, will be assessed to decide whether the project was successful or not.

    Does that sound familiar? His answer sounds familiar, too. If you program in the way that the so-called agile methods people suggest, this won't be a problem. You will always have a working deliverable to ship. And, because you will have worked with your client to choose which requirements to implement first, the system you ship should be the best you could offer the client on that day. That is fair value for the time spent on the project.

    Maybe my grading policy can help students learn to produce software that achieves this much, whatever its lifespan turns out to be.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    August 30, 2004 10:54 AM

    Serendipity and False Pride

    Yesterday I blogged about a new Rule of Three for the patterns community, taken from Gerald Weinberg's The Secrets of Consulting. Weinberg motivated the rule with a story of how his false pride in being thought smart -- by his students! -- led to ineffective thinking.

    The story of false pride reminded me of one of my favorite scenes in the movie Serendipity, and then of one of my favorite classical quotes.

    In the movie, Jonathan (John Cusack) throws all sensibility to the wind in an effort to find the woman he fell in love with one afternoon many years ago. His search threatens his upcoming wedding with a beautiful woman and makes everyone think he's nuts. But his best friend, Dean (Jeremy Piven), sees the search and its attendant risk as something more.

    Dean is married, but his marriage is in trouble. He and his wife have let their problems go on so long that now both are too proud to be the one to make the first move to fix them. When Jonathan wonders out loud if he has gone nuts and should just go home and marry his lovely fiancee, Dean tells him that his search has been an inspiration to work with his wife to repair their relationship. In support of his admiration for Jonathan, he recites a quote from a college humanities course that they shared: "If you want to improve, be content to be thought foolish and stupid..."

    That scene and quote so struck me that, the next day, I had to track down the source. As is usually the case, Google helped me find just what I wanted:

    If you want to improve, be content to be thought foolish and stupid with regard to external things. Don't wish to be thought to know anything; and even if you appear to be somebody important to others, distrust yourself. For, it is difficult to both keep your faculty of choice in a state conformable to nature, and at the same time acquire external things. But while you are careful about the one, you must of necessity neglect the other.

    ... and led me to the source, The Enchiridion, by Epictetus.

    This quote is a recurring source of encouragement to me. My natural tendency is to want to guard my reputation by appearing to have everything under control, by not asking questions when I have something more to learn, by not venturing to share my ideas. Before I started this blog, I worried that people would find what I said shallow or uninteresting. But then I decided to draw my inspiration from Serendipity's Jonathan and step forward.

    Weinberg's book teaches the same lesson throughout: A consultant will live a better life and help their clients more if only they drop their false pride and admit that they don't know all there is, that they can't answer every question.

    And if you like romantic comedies but haven't seen Serendipity yet, then by all means check it out soon.


    Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

    August 29, 2004 4:29 PM

    A New Rule of Three

    Back in the beginning, the patterns community created the Rule of Three. This rule stated that every pattern should have three known uses. It was a part of the community's conscious effort to promote the creation of literature on well-tested solutions that had grown up in the practicing software world. Without the rule, leaders of the community worried that academics and students might re-write their latest theories into pattern form and drown out the new literature before it took root. These folks were not anti-academic; indeed many were and are academics. But they knew that academics had many outlets for their work, and they recognized the need to nurture a new community of writers among practicing software developers.

    The Rule of Three was a cultural rule, though, and not a natural law. Christopher Alexander, in The Timeless Way of Building, wrote that patterns could be derived from both practice and theory. As the patterns community matured, the rule became less useful as a normative mechanism. In recent years, Richard Gabriel has encouraged pattern writers to look beyond the need for three known uses when creating new pattern languages. The most important thing is the role of each pattern in the language to create a meaningful whole. Good material is worth writing.

    I returned to Gerald Weinberg's The Secrets of Consulting this weekend and ran across a new Rule of Three. I propose that pattern writers consider adopting Weinberg's Rule of Three, which he gives in his chapter on "seeing what's not there":

    If you can't think of three things that might go wrong with your plans, then there's something wrong with your thinking.

    At writers workshops, it's not uncommon to read a pattern which, according to its author, has no negative consequences. Apply the pattern and -- poof! -- all is well with the world. The real world doesn't usually work this way. If I use a Decorator or Mutual Recursion, then I still have plenty to think about. A pattern resolves some forces, but not others; or perhaps it resolves the primary forces under consideration but creates new ones within the system.

    If you are writing a pattern, try to think of three negative consequences. You may not find three, but if you can't find any then either you are not thinking far enough or your pattern isn't one; it's a law of the universe.

    Authors can likewise use this rule as a reminder to develop their forces more completely. If a pattern addresses few forces, then the reader will rightly wonder if the "problem" is really a problem at all. Or, if all the forces point in one direction, then the problem doesn't seem all that hard to solve. The solution is implicit in the problem.

    Weinberg offers this rule as a general check on the breadth and depth of one's thinking, and it's a good one. But I think it also offers pattern writers, new and experienced alike, a much needed reminder that patterns are rarely so overarching that we can't find weaknesses in their armor. And looking for these weaknesses will help authors understand their problems better and write more convincing and more useful pattern languages.

    "Okay, Eugene, I'm game. How exactly do I do that?" Advice about the results of thinking is not helpful when it's the actual thinking that's your problem. Weinberg offers some general advice, and I'll share that in an upcoming entry. I'll also offer some advice drawn from my own experience and from experienced writers who've been teaching us at patterns conferences.


    Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

    August 19, 2004 1:50 PM

    Multiple Iterations and Care for Programs

    Fall semester is in the air. The students are beginning to arrive. As I prepare for my courses, I've been thinking about some issues at the intersection of agile development, programming style, thinking ahead, cognitive styles, and technology.

    Many computer science faculty pine for the good old days when students had to plan their programs with greater care. To this way of thinking, when students had to punch programs on cards and submit them for batch processing at the computer center, they had to really care about the quality of their code -- think through their approach ahead of time, write the code out by hand, and carefully desk-check it for correctness and typos. Then came the era of interactive computing at the university -- when I was an undergrad at Ball State, this era began with the wide availability of DEC Vax terminals -- and students no longer had to be careful. Professors love to tell stories from those Vax days of receiving student submissions that were Version 142 of the program. This meant that the student had saved the program 142 times and presumably compiled and run the program that many times as well, or close to it. What could be a worse sign of student attention to planning ahead, to getting their programs right before typing?

    I've never held this view in the extreme, but I used to lament the general phenomenon. But my view is now mixed, at best, and moving toward a more agile perspective. Under what conditions would I want my students to save, compile, and run a program 142 times?

    In agile development, we encourage taking small steps, getting frequent feedback from the program itself, and letting a program evolve in response to the requirements we implement. In test-driven development, we explicitly call for compiling and running a program even when we expect a failure -- and then adding functionality to the program to make the test pass.

    If my students program this way, then they will necessarily end up with many, many saves and compiles. But that would be a good thing, and at every step along the way they would have a program that deserves partial credit for a correct but incomplete solution.
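    The test-first rhythm looks something like this in miniature. This is a hypothetical example of my own -- the acronym() function and its tests are invented for illustration -- but it shows the cycle: the test is written first, its initial run fails because the function doesn't exist yet (an expected failure), and then just enough code is added to make it pass.

```python
# TDD in miniature. The test function below was written first; running
# it before acronym() existed produced the expected failure, and then
# just enough code was added to make the tests pass.

def acronym(phrase):
    """Return the initial letter of each word, uppercased."""
    return "".join(word[0].upper() for word in phrase.split())

def test_acronym():
    assert acronym("test driven development") == "TDD"
    assert acronym("refactor") == "R"

test_acronym()
print("all tests pass")
```

    Each pass through the cycle produces a save, a compile, and a run -- which is why a disciplined student's version count climbs so quickly.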

    In order for this approach to be desirable, though, students need to do more than just code, compile, and run. They will need to add individual features to their programs in a thoughtful, disciplined way. They will need to do some testing at each step, to ensure that the new feature works and that all older features still work. They will need to continuously re-work the design of their program -- refactor! -- as the program evolves. And all of these take time. Not the frenzied iterations of a student whose program is due tomorrow morning, but the intentional iterations of a programmer in control.

    To me, this is the biggest difficulty in getting students to program in an agile style. Students are so used to procrastinating, to doing triage on their to-do lists in order to get the most urgent project done first. Unfortunately, many also give higher priority to non-school activities all too often. I am always on the look-out for new ways to help students see how important time is in creating good programs, whether by planning carefully ahead or by going through many planned iterations. Please share any ideas you have.

    So, many iterations isn't a problem in itself. The problem is a style in which those iterations result from the seemingly random modify-and-compile approach that many students fall into when a program gets tough. Part of our job as teachers is helping students learn discipline in attacking a problem -- more so than teaching them any particular discipline itself.

    Why the mention of cognitive styles above? A blog entry by Clive Thompson brought this topic to the front of my mind a few weeks ago, and it wasn't about programming at all, but about writing more generally:

    ... Is there any difference between our cognitive styles when we write longhand, versus typing on a keyboard?

    Since I type about 70 words per minute, I can type practically as fast as I can compose sentences in my head. So does the much-slower pace of handwriting actually create a different way not just of writing, but of thinking? Does the buffer buildup between my brain and my arm affect things?

    What I mean is this: When I'm typing, because I can generate text so fast, I'll toss lots of stuff out on the page -- and then quickly edit or change it. But when I'm writing by hand, because it's so much slower I'll try to compose the sentence in my head before trying to write it. With a keyboard, I sort of offload some of my mental-sorting onto the page, where I can look at the words I've written, meditate on them, and manipulate them. With writing, that manipulation happens before the output. Clearly this would lead to some cognitive difference between the two modes ... but I can't quite figure out what it would be.

    Changes in technology have made it easier for us to get ideas out of our heads and onto paper, or into a computer file. That's true of good ideas and bad, well-formed and inchoate. For many writers, this is a *good* thing, because it allows them to get over the fear of writing by getting something down. That's one of the things I like about having my students use more agile techniques to program: Many students in introductory courses are intimidated by the problems they face, are afraid of not being able to get the Right Answer. But if they approach the problem with a series of small steps, perhaps each small step will seem doable. After a few small steps, they will find themselves well on the way to creating a complete program. (This was also one of my early motivations for structuring my courses around patterns -- reducing fear.)

    Used inappropriately, the technology is simply a way to do a poor job faster. For people whose cognitive style is more attuned to up-front planning, newer technologies can be a trap that draws them away from the way they work best.

    In retrospect, a large number of compiles may be a bad sign, if they were done for the wrong reason. Multiple iterations is not the issue; the process that leads to them is. With a disciplined approach, 142 compiles is the agile way!


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    July 27, 2004 1:46 PM

    Teaching Programming Plans Explicitly

    Another article to catch my eye in the latest SIGCSE Bulletin was "Training Strategic Problem Solvers", by de Raadt, Toleman, and Watson. This brief paper discusses the gap between problem and solution that students face when learning to program. Most programming instruction relies on students to learn problem-solving strategies implicitly through repeated practice, much as they would in mathematics.

    This approach faces a couple of difficulties. First, as I discuss in an earlier entry, CS students don't generally get enough practice for this approach to be effective. Second, and more troublesome, is that this search for strategies occurs in a large state space and thus is prone to dead ends and failures. Most students will end up constructing the wrong strategies or no strategies at all.

    The authors report the results of an informal experiment in which they found that their students, after a semester's worth of programming instruction, had not mastered several simple plans associated with writing a loop to average a sequence of numbers -- a basic CS 1 problem. I've heard many other instructors tell a similar story about the students at their schools. I've seen many proposals for new approaches in CS1 offered to address just this problem.
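    For reference, the averaging problem really is a bundle of small plans. Here is my own sketch of the classic solution (an illustration, not code from the paper), with the constituent plans called out:

```python
def average(numbers):
    """Average a sequence of numbers -- a CS 1 task built from small plans."""
    total = 0          # running-total plan: initialize before the loop
    count = 0          # counter plan: initialize before the loop
    for n in numbers:
        total += n     # accumulate each value inside the loop
        count += 1     # count each value inside the loop
    if count == 0:     # guard plan: an empty sequence has no average
        return 0.0
    return total / count

print(average([70, 80, 90]))  # -> 80.0
```

    A student who hasn't internalized these plans as separate pieces tends to mangle one of them -- initializing inside the loop, or forgetting the empty-sequence guard -- even after a semester of practice.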

    So, what's the solution?

    The authors suggest that explicit instruction about goals and plans will lead to better problem solvers. They harken back to Soloway's ground-breaking work on goals and plans as a source for the material that we should be teaching instead.

    I agree wholeheartedly. The elementary patterns community has devoted much of its energy to identifying the plans that novice programmers should learn and the problems that they solve. By making explicit the forces that lead to or away from a particular pattern, we hope that students can better learn to make systematic design decisions in the face of multiple goals.

    Notice the subtle shift in vocabulary. The pattern-oriented approach focuses on the thing to be built, rather than the plan for building it. This allows one to recognize that there are often different ways to build the same structure, depending upon the forces of the problem at hand. The plan for building the pattern is also a part of the pattern description. Soloway's plans were, by and large, code structures to be implemented in a program, too. The name "plan" can be thought of in the same way as a pattern when one considers it in the sense of an architectural plan, or blueprint for a structure.

    Teaching patterns or plans explicitly isn't easy or a silver bullet, either. Mike Clancy discusses some of the key issues in his paper Patterns and Pedagogy. But I think that it offers the best hope for us to get it right. I hope that de Raadt, Toleman, and Watson will continue with their idea -- and then tell us the results of their new teaching approach.


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    July 24, 2004 4:26 PM

    Writing a Novel in the Agile Way

    This weekend, I re-read Jon Hassler's My Staggerford Journal. Hassler is a novelist of small-town Minnesota life, and My Staggerford Journal is the diary-like story of the writing of his first novel, Staggerford, on a sabbatical from Brainerd Community College. I first read it in the months before my own sabbatical of Fall 2002, in hopes of some energy and inspiration. The results of my sabbatical disappointed me, but this journal did not. I heartily recommend his other novels to readers who like the stories of small-town Midwesterners coming to grips with the changes of life.

    One paragraph jumped out at me from Hassler's description of what it was like to give birth to the novel he'd wanted -- needed -- to write for twenty years:

    I enjoy working on a second draft better than a first. If I had my choice I would write nothing but second (or later) drafts. But to get to that sometimes pedantic, sometimes exhilarating stage of perfection, polishing, filling in holes, rechanneling streams, etc., one has to struggle through the frightening first draft, create the damn thing through to the end, live it night and day and not know where it's going, or if you do know where it's going, then you don't know if you have the skill or stamina to get it there. It won't get there on its own.

    Those feelings sound familiar to this programmer.

    Hassler's discussions of rewriting brought to mind redesign and refactoring. Of course, Hassler wasn't just refactoring his novel. In the second and third drafts, he made substantive changes to the story being told and to the effect it elicits from his readers. But much of his rewriting sounded like refactoring: changing sentences here and there, even rewriting whole chapters to bring out the real story that the earlier version meant to tell. Hassler certainly writes of the experience as one who was "listening to the code".

    The pain of writing the first draft sounds like a rather un-agile way to develop a novel: creating a whole without knowing where he or the story are going, living in a constant state of uncertainty and discomfort. I have known that feeling as a programmer, and I try to teach my students how to avoid it -- indeed, that it is okay to avoid it.

    I wonder what it would be like to write a novel using an "agile" method? Can we create art in quite that way? I'm not an artist, but somehow I think painting may work that way more than writing a novel.

    Or maybe novelists already move in an agile way, with the first draft being reworked in bits and pieces as they go, and later revisions just continuing with the evolution of the work. Maybe what distinguishes Hassler's first draft and his later drafts is more in his mind than in the work?


    Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

    July 20, 2004 2:04 PM

    A Scientist Dressed in Artist's Clothing?

    Over the last couple of months, I've been following the discussion on the Extravagaria wiki. Extravagaria is a workshop organized by Richard Gabriel and Janet Holmes at OOPSLA 2004. It aims to explore how software people can use techniques of the arts "to explore research questions and explain results".

    I am a "Suzuki dad" to my daughter as she learns to play the piano. For the last few weeks, I've been picking up "Zen and the Art of Motorcycle Maintenance" during her lesson and reading a section while she works with her teacher. Yesterday, something I read brought Extravagaria to mind.

    I was reading Chapter 25, in which Pirsig talks about the synthesis of classical and romantic thought. He argues that adding the romantic as a veneer over the classical almost always results in "stylish" but unsatisfying -- even ugly -- results, both the product itself and the experience of users and designers. Instead, the classical and romantic must be united at a more fundamental level, in his notion of Quality. Pirsig then says:

    At present we're snowed under with an irrational expansion of blind data-gathering in the sciences because there's no rational format for any understanding of scientific creativity. At present we are also snowed under with lots of stylishness in the arts -- thin art -- because there's very little assimilation or extension into underlying form. We have artists with no scientific knowledge and scientists with no artistic knowledge and both with no spiritual sense of gravity at all, and the result is not just bad, it is ghastly. The time for real reunification of art and technology is long overdue.

    How much artistic knowledge do scientists require in order to avoid producing ghastly results? Can we just put a "stylish" veneer on our work, or must we study art -- do art -- so that the process is a part of us?

    I sometimes feel as though I am affecting an artistic stance when the substance of my work is little different.

    That isn't to say that I have not benefited from adopting practices from the arts. I learned a lot from Natalie Goldberg's Writing Down the Bones. Since reading it, I have always tried to write a little every day (code and text) as a way to keep my ideas, and my ability to write them down, flowing. One of the reasons that I started this blog was, in part, as an external encouragement to write something of value every day, and not just the surface of an interesting but inchoate thought. Gabriel has been something of an inspiration in this regard, with his "one poem a day" habit.

    I have also certainly benefited from learning to play the piano (well, beginning to learn) as an adult. The acts of learning an obviously artistic skill, talking about it with my teacher, and reading about it have all changed my brain in subtle but useful ways. The change affects how I teach computer science and how I read its literature; I suspect that it has begun to change how I do computer science, too.

    How easily can scientists adopt practices from the arts without 'grokking' them in the artistic sense? I suppose that this is one of the points of Extravagaria.

    If you are interested in this topic, be sure to check out the Extravagaria wiki.


    Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

    July 19, 2004 10:52 AM

    Can I Have Your Autograph?

    I am preparing to teach our undergraduate algorithms course this fall for the first time. My last involvement with this side of an algorithms course was as a graduate assistant at Michigan State. This course is much different from my usual focus on programming (object-oriented and functional, mostly) and programming languages, so preparing for it is fun.

    As I've been reading, I've come across Robert Floyd's name many times, and whenever I do I am sure to track down the reference and read yet another paper. I always enjoy them.

    It occurs to me that I am a big fan of Robert Floyd. To be accurate and objective and scientific, I suppose that I should say that I am a big fan of Floyd's work, but that's not what it feels like. It feels more personal than that.

    My attraction to Floyd dates to my discovery of his Turing Award lecture. At the time I was still flush with the idea of elementary patterns, and Floyd's lecture seemed to advocate patterns as a teaching and learning mechanism--not in so many words, of course. The same lecture also encouraged programmers to rewrite their working programs from scratch once they understood the solution well, so that they could isolate the key concepts of the solution. That sounded like refactoring to me. Floyd's goal wasn't just a better program, though, but also a better programmer.

    Lately I've been admiring his papers on sorting networks and random sampling. I've also stumbled across some of his early papers on programming languages, only to discover that, according to Knuth, Floyd developed "the first syntax-directed algorithm of practical importance" and wrote "probably the best paper ever written" on the syntax of programming languages. Simply amazing.

    Now, Floyd is not the only superstar whose work I admire in this way. Alan Kay and Ward Cunningham are two others. I read everything they write and try to grow in the ideas they share.

    When I was growing up, I had posters of my favorite sports stars hanging in my bedroom -- Pete Rose, Johnny Bench, Walt Frazier, George McGinnis. These days, I find myself wanting to post quotes from my favorite computing stars on my office door. In part, I do this so that students can learn from them. But I think I also do it for me, because I am a fan of these guys and enjoy being a fan.

    I hope that there isn't anything wrong with that.


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    July 16, 2004 10:46 AM

    Practice for Practice's Sake

    A few days ago, I wrote about the difference between practice for piano students and practice for CS students. As I noted then, this has been a popular topic for software bloggers in recent years. Here are some links:

    As Brian mentions, the value of practice has been a consistent theme of Richard Gabriel's over the last few years, and Dick has inspired a lot of software people to think more about practice.

    One of the things I've been thinking about in this regard is what Chad Fowler called valueless software.

    As a piano student, I often practice things that are not ever intended for performance. I play scales. I work through a set of Finger Power books to improve my dexterity and hand position. I play pieces that were written solely to help students learn to read notes and intervals. Indeed, some days much of my practice consists of pieces that are just for practice. The goal is that I'll be a better player of performance pieces (and I use 'performance' loosely here -- I'll never be a concert pianist!).

    Chad is more of a musician than I, and he says that this sort of practice is essential. The italics are mine.

    Something I learned as a saxophonist is that the less valuable the direct output of that which you practice, the more emphasis you will place on the act of practicing. I can learn more, for instance, from making horrid noises for 30 minutes than I can from learning a Charlie Parker solo and playing it from memory. Why is this? I'm so focused on the output of the Charlie Parker solo--making it sound good, feeling good as a result--that I forget to focus on the process of learning it, and important bits of information are liable to fall through the proverbial cracks.

    Computer science students don't usually get this sort of practice: many, many repetitions of a low-level skill that strengthens the muscles and mind for the "real" task, but which themselves are not useful as products. Can you imagine asking your CS1 student to write 100 for-loops for tomorrow's class? Our students tend to do fewer repetitions, and for the most part we try to make those few "real", in an effort to motivate. (Well, if you count assignments like the Fahrenheit-to-Celsius conversion program as real tasks.)

    I'm not sure if there is any reason to seek a direct analog to the music scenario for learning to program. We do have an analog, though not quite so low level. It is a venerable practice in computing to reimplement classic programs as learning exercises: a calendar manager, Tetris, a web server -- all have been done to death, and these days we all have ready access to source code for these programs. Yet students write their own for many reasons. Sometimes it's to learn a new language or OS API. But the value lies, in part, in the fact that the student understands the ideas in the app so well that he can focus on the learning task!

    My good friend Joe Bergin has created an object-oriented programming exercise inspired by the idea of practice for practice's sake, which he likens to the musical etude: Write a particular program with a budget of n if-statements or fewer, for some small value of n. Forcing oneself not to use an if statement wherever it feels comfortable forces the programmer to confront how choices can be made at run-time, and how polymorphism in the program can do the job. The goal isn't necessarily to create an application to keep and use -- indeed, if n is small enough and the task challenging enough, the resulting program may well be stilted beyond all maintainability. But in writing it the programmer may learn something about polymorphism and when it should be used.
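    One hypothetical way the etude might play out, with n = 0 (the shapes and names here are my own invention, not Joe's): a first draft of an area calculator might branch on a type tag with an if-statement; spending the entire if budget pushes that run-time choice into dynamic dispatch instead.

    ```java
    // Each class embodies one branch of the eliminated if-statement:
    // the object itself knows its own area formula.
    interface Shape {
        double area();
    }

    class Circle implements Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }

    class Square implements Shape {
        private final double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
    }

    public class Etude {
        public static void main(String[] args) {
            // The choice an if-statement would have made is made instead
            // by dispatching on the receiver -- no conditionals anywhere.
            Shape[] shapes = { new Circle(1.0), new Square(2.0) };
            double total = 0.0;
            for (Shape s : shapes) total += s.area();
            System.out.println(total);
        }
    }
    ```

    The stilted feel Joe warns about shows up quickly: for a program this small, a single if would be simpler. The point of the constraint is not the resulting program but what the programmer notices while writing it.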

    This reminds me of Kent Beck's "Three Bears" pattern, which I revised as a part of some patterns of built-in failure. (These patterns appeared as part of Patterns for Experiential Learning, which was workshopped at EuroPLoP 2001.)


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    July 13, 2004 3:38 PM

    Instructor as Consultant

    I've been reading Gerald Weinberg's classic The Secrets of Consulting: A Guide to Giving and Getting Advice Successfully. I heartily recommend that you read just about everything Weinberg's written, whether you are a software person or not. His books are really about being a better person, whatever you do.

    In The Secrets of Consulting, Weinberg says that he was often upset with clients for being so reluctant to adopt his advice. But then one of his clients enlightened him about the risk differential between the consultant and the client:

    • From the consultant's point of view: If the client ignores his advice, eventually he will lose his contract. If his advice is implemented, then he might be a hero. If it fails, then he will lose the contract -- which would have happened even if his advice had been ignored.

    • From the client's point of view: If he ignores the consultant's advice, then he is no worse off than he currently is. If he implements the advice, then he might be better off, *but he might be worse off*.

    Of course, as psychologists like Kahneman and Tversky have shown us, most people will bear opportunity costs in order to avoid risk, even in the face of potentially big payoffs. In this case, the client needs a pretty good reason to choose to implement the consultant's advice.

    I wonder if this dynamic is at play when students are learning new practices, such as object-oriented design or agile methods? In some ways, the student is in the role of Weinberg's client. If the student is comfortable with how he goes about building software, then he may be averse to change. Adopting a new practice may make his life better -- but it may make it worse! And even if he is not all that happy with how he programs now, he may be happy to maintain the status quo in the face of risk.

    So the new practice either needs to offer the promise of a *big* payoff or guarantee little risk of failure. And human psychology may get in the way of the student risking a change even in that case.

    The instructor is, of course, in the role of Weinberg's consultant, but with a twist: if students don't adopt the new practice, the instructor may not even face the loss of his 'contract'! Instructors don't always lose their courses when students don't learn; sometimes they just muddle along, perhaps trying something new next semester. And as a tenured member of our faculty, I face little chance of losing my job, no matter how poorly my students do.

    Where is the risk? Students may see the instructor as having no stake in the risk of failure.

    Whether this idea applies directly to the student-teacher relationship or not, I think that it does explain in some part the history of elementary patterns in computer science education. A few years ago, several colleagues and I began our work on so-called elementary patterns, those patterns that relative beginners need and use when writing programs. After writing some papers, we gave several workshops at conferences like SIGCSE, aimed at helping other faculty reorganize their courses using elementary patterns and write elementary patterns of their own.

    The reaction we received surprised me at the time. Our workshops received high ratings at each conference, and participants almost always came up to tell us how much they enjoyed the workshops and to encourage us to do more. But almost no one began to write elementary patterns, and almost no one redesigned their courses around the idea. In retrospect, I realize that we were asking them to make a pretty big change in how they did their jobs. They were in the same situation as Weinberg's client above. The trade-off between success and failure had to be balanced against the option of doing nothing and being happy with their current courses.

    In retrospect, I can say that we didn't do a very good job 'selling' the idea of elementary patterns. My good friend Robert Duvall always prodded us to create more and better support materials to ease adoption of our ideas by others, and a few of us are trying to do just that. But I wonder how successful we will be.

    Maybe a more evolutionary approach would work? What would that look like?


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

    July 12, 2004 6:58 PM

    Practice, Practice, Practice

    I began taking piano lessons a couple of years ago. I never learned to play an instrument as a child, and after several years of being the "Suzuki dad" to my two daughters, I decided I'd like to learn myself. Being a student again, especially in an area that doesn't come easily to me, has helped me to develop greater empathy for my computer science students.

    The Suzuki approach itself has been a source of many interesting thoughts about how I might teach programming more effectively. I'll blog on that topic in the future.

    One thing about learning piano that seems obviously different from how my students learn programming is the element of practice. I've read elsewhere on the web how others have talked about the role of practice in learning to write programs and build software. The Pragmatic Programmers come to mind. I've lost my reference to their page on this subject; if you know it, please pass it on.

    As a piano student, I am expected to practice every day for 20 to 30 minutes. My teacher would like more, but she assures me that a few minutes everyday is better than a long session one day followed by a day off. For a skill so dependent upon muscle memory and strength, daily practice is essential. And I see the damage that lack of practice can do: my progress as a player has been halting, with conference trips and busy times at school and home interrupting my practice routine.

    Computer science students don't tend to get this sort of practice: daily repetitions of the same skills until they become second nature, and then regular brushing up to keep the skill intact. Many of my programming students plan to do their work for my course on only one or two or three days a week. The other days are scheduled for other courses' work, for the work that pays their tuition bills, and for play. Like my piano teacher, I encourage them to try to work on a project a little each day, so that they can keep in touch with the material and have more opportunities to come to understand it. Their reasons for not doing so sound an awful lot like my excuses for missing daily piano practice.

    One group of students seems the exception to this: all those kids who hack Linux. I lurk on the local Linux users group mailing list, for occasional pointers to interesting new ideas I'd like to learn about. I am amazed by how many hours students put into installing Linux, reinstalling Linux, patching the system, adding drivers for new devices, setting up home networks, upgrading to a new version of the system ... the list goes on. These students get to practice their sysadmin and usage skills over and over, in a relatively artificial setting -- but one that is so important to them at least in part because they see their own skills grow as a result.

    Now, if I could only get these same students to spend so much time playing with design patterns and algorithms and functional programming and all the stuff I'm trying to teach them. Maybe I need to figure out what sort of "killer app" would draw their attention as strongly to my coursework.


    Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning