Jon Udell recently wrote about the real problem with automating work: most of us don't know how. The problem isn't with using particular tools, which come and go, but with recognizing the power of information networks and putting data into a form from which it can be leveraged.
I want to apply his advice more often than I do. I have found that for me it is not enough simply to learn the principles.
The first challenge is fighting an uphill battle against institutional inertia. The university provides too much of its data in dead-tree form, and what data comes to us in digital form comes unstructured. Despite a desire to increase efficiency and decrease costs, a university is a big, slow organization. It takes a long time for its many bureaucracies to re-make themselves. It also takes a persistent, pervasive effort to change on the part of many people. Too many administrators and faculty thrive in a papered society, which makes change even harder. This is the broad base of people who need to learn Udell's core principles of information networks.
The second challenge is my own habits. I may not be of my students' generation, but I've been in computer science for a long time, and I think I get the value of data. Even still, it's easy -- especially as department head -- to be sucked into institutional habits. My secretary and I are combating this by trying to convert as much data entering our office as possible into live, structured data. In the process, I am trying to teach her, a non-computer scientist, a bit about the principles of data and structured representation. We aren't worrying yet about networks and pub/sub, simply getting data into a named, structured form that supports computational processing.
Yet I need to change some of my own habits, too. When under time pressure, it's easy for me to, say, whip up assignments of graduate assistants to tasks and lab hours on a legal pad. Once the assignments are made, I can communicate a subset of the information in a couple of e-mail messages. The result is a lot of information and not a byte of structured data. Oh, and a lot of lost opportunities for using code to check consistency, make changes, publish the assignments in multiple forms, or reuse the data in adapted form next semester.
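To make the contrast concrete, here is a minimal sketch, in Python and with entirely hypothetical names, tasks, and hours, of what those legal-pad assignments might look like as structured data, along with one consistency check and one "publishable form" that a legal pad cannot provide:

```python
# Hypothetical graduate-assistant assignments as structured data.
# The names, tasks, and hours are invented for illustration.
assignments = [
    {"assistant": "GA-1", "task": "grading",
     "lab_hours": ["Mon 10-12", "Wed 10-12"]},
    {"assistant": "GA-2", "task": "lab supervision",
     "lab_hours": ["Mon 10-12", "Thu 1-3"]},
]

def double_booked(assignments):
    """Return (hour, first assistant, second assistant) for any lab hour
    that appears in more than one assistant's schedule."""
    seen = {}
    clashes = []
    for a in assignments:
        for hour in a["lab_hours"]:
            if hour in seen:
                clashes.append((hour, seen[hour], a["assistant"]))
            seen[hour] = a["assistant"]
    return clashes

def roster_lines(assignments):
    """One human-readable line per assistant -- one of many forms the
    same data can be published in."""
    return ["%s: %s (%s)" % (a["assistant"], a["task"],
                             ", ".join(a["lab_hours"]))
            for a in assignments]
```

None of this is sophisticated, which is rather the point: once the data has names and structure, the consistency checks and alternate publications come almost for free.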
My next big opportunity to practice better what I preach is scheduling courses for spring semester. Instead of using spreadsheets as we have in the past, perhaps I should open up DrRacket and use it to record all the data we collect and create about the schedule. Scheme's reliance on the simple list as its primary data structure usually puts me in the mindset of grammars and syntax-driven programming. Sometimes, the best way to break a bad old habit is to create a good new one.
So, yes, we need to teach the principles of data networks in a systematic way to information technologists and everyone else. We also need to practice applying them and look for ways to help individuals and institutions alike change their habits.
One of the students in my just-started Programming Languages course recently mentioned that he has started a company, Glass Cannon Games, to write games for the Xbox and Android platforms. He is working out of my university's student incubator.
Going a bit further back, I mentioned an alumnus, Wade Arnold, winning a statewide award for his company, T8 Webware. Readers of this blog most recently encountered Wade in my entry on the power of intersections.
Over the last decade, Wade has taken a few big ideas and worked hard to make them real. That's what Nick and, presumably, Ian are doing, too.
Most entrepreneurs start with big thoughts. I try to encourage students to think big thoughts, to consider an entrepreneurial career. The more ideas they have, the more options they have in careers and in life. Going to work for a big company is the right path for some, but some want more and can do their own thing -- if only they have the courage to start.
This idea matters for more than just starting start-ups. We can "think big and write small" even for the more ordinary programs we write. Sometimes we need a big idea to get us started writing code. Sometimes, we even need hubris. Every problem a novice faces can appear bigger than it is. Students who are able to think big often have more confidence. That is the confidence they need to start, and to persevere.
It is fun as a teacher to be able to encourage students to think big. As writer Roger Rosenblatt says,
One of the pleasures of teaching writing courses is that you can encourage extravagant thoughts like this in your students. These are the thoughts that will be concealed in plain and modest sentences when they write. But before that artistic reduction occurs, you want your students to think big and write small.
Many students come into our programming courses unsure, even a little afraid. Helping them free themselves to have extravagant ideas is one of the best things a teacher can do for them. Then they will be motivated to do the work they need to master syntax and idioms, patterns and styles.
A select few of them will go a step further and believe something even more audacious, that
... there's no purpose to writing programs unless you believe in significant ideas.
Those will be the students who start the Glass Cannons, the Book Hatcheries, and the T8s. We are all better off when they do.
We have survived Week 1. This semester, I again get to teach Programming Languages, a course I love and about which I blog for a while every eighteen months or so.
I had thought I might blog as I prepped for the course, but between my knee and department duties, time was tight. I've also been slow to settle on new ideas for the course. In my blog ideas folder, I found notes for an entry debriefing the last offering of the course, from Spring 2010, and thought that might crystallize some ideas for me. Alas, the notes held nothing useful. They were just a reminder to write, which went unheeded during a May term teaching agile software development.
Yesterday I started reading a new book -- not a book related to my course, but Roger Rosenblatt's Unless It Moves the Human Heart: The Craft and Art of Writing. I love to read writers talking about writing, and this book has an even better premise: it is a writer talking about writing as he teaches writing to novices! So there is plenty of inspiration in it for me, even though it contains not a single line of Scheme or Ruby.
Rosenblatt recounts teaching a course called "Writing Everything". Most of the students in the course want to learn how to write fiction, especially short stories. Rosenblatt has them also write poems, in which they can concentrate on language and sounds, and essays, in which they can learn to develop ideas.
This is not the sort of course you find in CS departments. The first analogy that came to mind was a course in which students wrote, say, a process scheduler for an OS, a CRUD database app for a business, and an AI program. The breadth and diversity of apps might get the students to think about commonalities and differences in their programming practice. But a more parallel course would ask students to write a few procedural programs, object-oriented programs, and functional programs. Each programming style would let the student focus on different programming concepts and distinct elements of their craft.
I'd have a great time teaching such a "writing" course. Writing programs is fun and hard to learn, and we don't have many opportunities in a CS program to talk about the process of writing and revising code. Software engineering courses have a lot of their own content, and even courses on software design and development often deal more with new content than with programming practice. In most people's minds, there is not room for a new course like this one in the curriculum. In CS programs, we have theory and applications courses to teach. In Software Engineering programs, they seem far too serious about imitating other engineering disciplines to have room for something this soft. If only more schools would implement Richard Gabriel's idea of an MFA in software...
Despite all these impediments, I think a course in which students simply practiced programming in the large(r) and focused on their craft could be of great value to most CS grads.
I will let Rosenblatt's book inspire me and leak into my Programming Languages course where helpful. But I will keep our focus on the content and skills that our curriculum specifies for the course. By learning the functional style of programming and a lot about how programming languages work, students will get a chance to develop a few practical skills, which we hope will pay off in helping them to be better programmers all around, whether in Java, Python, Ruby, Scala, Clojure, or Ada.
One meta-message I hope to communicate both explicitly and implicitly is that programmers never stop learning, including their professor. Rosenblatt has the same goal in his writing course:
I never fail to say "we" to my students, because I do not want them to get the idea that you ever learn how to write, no matter how long you've done it.
Beyond that, perhaps the best I can do is let my students see that I am still mesmerized by the cool things we are learning. As Rosenblatt says,
Observing a teacher who is lost in the mystery of the material can be oddly seductive.
Once students are seduced, they will take care of their own motivation and their own learning. They won't be able to help themselves.
Before reading interviews with Hemingway and Jobs, I read a report of Ansel Adams's last interview. Adams was one of America's greatest photographers of the 20th century, of course, and several of his experiences seem to me to apply to important issues in software development.
It turns out that both photography and software development share a disconnect between teaching and doing:
One of the problems is the teaching of photography. In England, I was told, there's an institute in which nobody can teach photography until they've had five years' experience in the field, until they've had to make a go of it professionally.
Would you recommend that?
I think that teachers should certainly have far more experience than most of the ones I know of have had. I think very few of them have had practical experience in the world. Maybe it's an impossibility. But most of the teachers work pretty much the same way. The students differ more from each other than the teachers do.
Academics often teach without having experience making a living from the material they teach. In computer science, that may make sense for topics like discrete structures. There is a bigger burden in most of the topics we teach, which are done in industry and which evolve at a more rapid rate. New CS profs usually come out of grad school on the cutting edge of their specialties, though not necessarily on top of all the trends in industry. Those who take research-oriented positions stay on the cutting edge of their areas, but the academic pressure is often to become narrower in focus and thus farther from contemporary practice. Those who take positions at teaching schools have to work really hard to stay on top of changes out in the world. Teaching a broad variety of courses makes it difficult to stay on top of everything.
Adams's comment does not address the long-term issue, but it takes a position on the beginning of careers. If every new faculty member had five years of professional programming experience, I dare say most undergrad CS courses would be different. Some of the changes might be tied too closely to those experiences (someone who spent five years at Rockwell Collins writing SRSs and coding in Ada would learn different things from someone who spent five years writing e-commerce sites in Rails), but I think there would usually be some common experiences that would improve their courses.
When I first read Adams's comment, I was thinking about how the practitioner would learn and hone elements of craft that the inexperienced teacher didn't know. But the most important thing that most practitioners would learn is humility. It's easy to lecture rhapsodically about some abstract approach to software development when you haven't felt the pain it causes, or faced the challenges left even when it succeeds. Humility can be a useful personal characteristic in a teacher. It helps us see the student's experience more accurately and to respond by changing how and what we teach.
Short of having five years of professional experience, teachers of programming and software development need to read and study all the time -- and not just theoretical tomes, but also the work of professional developers. Our industry is blessed with great books by accomplished developers and writers, such as Design Patterns and Refactoring. The web and practitioners' conferences such as StrangeLoop are an incredible resource, too. As Fogus tweeted recently, "We've reached an exciting time in our industry: college professors influenced by Steve Yegge are holding lectures."
Other passages in the Adams interview stood out to me. When he shared his intention to become a professional photographer, instead of a concert pianist:
Some friends said, "Oh, don't give up music. ... A camera cannot express the human soul." The only argument I had for that was that maybe the camera couldn't, but I might try through the camera.
What a wonderful response. Many programmers feel this way about their code. CS attracts a lot of music students, either during their undergrad studies or after they have spent a few years in the music world. I think this is one reason: they see another way to create beauty. Good news for them: their music experience often gives them an advantage over those who don't have it. Adams believed that studying music was valuable to him as a photographer:
How has music affected your life?
Well, in music you have this absolutely necessary discipline from the very beginning. And you are constructing various shapes and controlling values. Your notes have to be accurate or else there's no use playing. There's no casual approximation.
Discipline. Creation and control. Accuracy and precision. Being close isn't good enough. That sounds a lot like programming to me!
Fogus recently wrote a blog entry, Perlis Languages, that has traveled quickly through parts of software world. He bases his title on one of Alan Perlis's epigrams: "A language that doesn't affect the way you think about programming is not worth knowing." Long-time Knowing and Doing readers may remember this quote from my entry, Keeping Up Versus Settling Down. If you are a programmer, you should read Fogus's article, which lists a few languages he thinks might change how you think about programming.
There can be no single list of Perlis languages that works for everyone. Perlis says that a language is worth knowing if it affects how you think about programming. That depends on you: your background, your current stage of development as a programmer, and the kind of problems you work on every day. As an example, in the Java world, the rise of Scala and Clojure offered great opportunities for programmers to expand their thinking about programming. To Haskell and Scheme programmers, the opportunity was much smaller, perhaps non-existent.
The key to this epigram is that each programmer should be thinking about her knowledge and on the lookout for languages that can expand her mind. For most of us, there is plenty of room for growth. We tend to work in one or two styles on a daily basis. Languages that go deep in a different style or make a new idea their basic building block can change us.
That said, some languages will show up on lots of people's Perlis lists, if only because they are so different from the languages most people know and use on a daily basis. Lisp is one of the languages that used to be a near universal in this regard. It has a strangely small and consistent syntax, with symbols as first-order objects, multiple ways to work with functions, and macros for extending the language in a seamless way. With the appearance of Clojure, more and more people are being exposed to the wonders of Lisp, so perhaps it won't be on everyone's Perlis list in 10 years. Fogus mentions Clojure only in passing; he has written one of the better early Clojure books, and he doesn't want to make a self-serving suggestion.
I won't offer my own Perlis list here. This blog often talks about languages that interest me, so readers have plenty of chances to hear my thoughts. I will add my thoughts about two of the languages Fogus mentions in his article.
Joy. *Love* it! It's one of my favorite little languages, and one that remains very different from what most programmers know. Scripting languages have put a lot of OOP and functional programming concepts before mainstream programmers across the board, but the idea of concatenative programming is still "out there" for most.
Fogus suggests the Forth programming language in this space. I cannot argue too strongly against this and have explained my own fascination with it in a previous entry. Forth is very cool. Still, I prefer Joy as a first step into the world of concatenative programming. It is clean, simple, and easy to learn. It is also easy to write a Joy interpreter in your favorite language, which I think is one of the best ways to grok a language in a deep way. As I mentioned in the Forth entry linked above, I spent a few months playing with Joy and writing an interpreter for it while on sabbatical a decade ago.
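To give a flavor of why writing such an interpreter is so instructive, here is a toy sketch in Python. This is not real Joy -- it handles only integers and four words, and ignores quotations and combinators entirely -- but it shows how little machinery the core of a concatenative evaluator needs:

```python
# A toy Joy-style evaluator: a program is a string of whitespace-separated
# tokens, and every word is a function from stack to stack.
def joy_eval(program, stack=None):
    stack = [] if stack is None else stack
    words = {
        "+":    lambda s: s.append(s.pop() + s.pop()),
        "*":    lambda s: s.append(s.pop() * s.pop()),
        "dup":  lambda s: s.append(s[-1]),                 # copy top of stack
        "swap": lambda s: s.extend([s.pop(), s.pop()]),    # exchange top two
    }
    for token in program.split():
        if token in words:
            words[token](stack)
        else:
            stack.append(int(token))   # anything else is a numeric literal
    return stack
```

A program like `2 3 + dup *` squares the sum of 2 and 3, leaving 25 on the stack. Extending the sketch with quotations and combinators is where the real learning starts.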
If you play with Joy and like it, you may find yourself wanting more than Joy offers. Then pick up Forth. It will not disappoint you.
APL. Fogus says, "I will be honest. I have never used APL and as a result find it impenetrable." Many things are incomprehensible before we try them. (A student or two will be telling me that Scheme is incomprehensible in the next few weeks...) I was fortunate to write a few programs in APL back in my undergrad programming languages course. I'm sure if I wrote a lot of APL it would become more understandable, but every time I return to the language, it is incomprehensible again to me for a while.
David Ungar told one of my favorite APL stories at OOPSLA 2003, which I mentioned in my report on his keynote address. The punchline of that story fits very well with the theme of so-called Perlis languages: "They could have done the same thing [I] did in APL -- but they didn't think of it!"
There are modern descendants of APL, but I still think there is something special about the language's unique character set. I miss the one-liners consisting of five or twenty Greek symbols, punctuation, and numbers, which accomplished unfathomable tasks such as implementing a set of accounting books.
I do second Fogus's reason for recommending APL despite never having programmed in it: creator Kenneth Iverson's classic text, A Programming Language. It is an unusually lucid account of the design of a programming language -- a new language, not an adaptation of a language we already know. Read it. I had the wonderful opportunity to meet Iverson when he spoke at Michigan State in the 1980s, as described in my entry on Iverson's passing.
... So, I encourage you to follow the spirit of Fogus's article, if not its letter. Find the languages that can change how you think, and learn them. I begin helping a new set of students on this path next week, when we begin our study of Scheme, functional programming, and the key ideas of programming languages and styles.
This post is a mild and perhaps petulant rant about shackling free software. Feel free to skip it if you like.
I've been setting up a new iMac over the last couple of days. I ran into some difficulties installing Remind, a powerful text-based Unix calendar program, that made me sad.
First of all, I need to say "thank you" to the creator of Remind. He wrote the first version of the program back in the early 1990s and has maintained and updated it in the two decades since. It has always been free, both as in beer and as in speech. Like many Unix folks, I became a devoted user of the program almost as soon as I discovered it.
Why am I sad? When I went to download the latest version, the server detected that I was connecting via a Mac browser and took me to a page that said only not to use Remind on an Apple product. I managed to download the source but found its compressed format incompatible with the tools, both apps and command-line programs, that I use to unstuff archives on my Mac. I finally managed to extract the source, build it, and install it. When Remind runs on my new machine, the program displays this message:
You appear to be running Remind on an Apple product. I'd rather that you didn't. Remind execution will continue momentarily.
... and delays for 30 seconds.
Wow, he really is serious about discouraging people from running his program on an Apple machine.
This is, of course, well within his rights. Like many people, he feels strongly about Apple's approach to software and apps these days. On the Remind home page, he writes:
Remind can be made to run under Mac OS X, but I prefer you not to do that. Apple is even more hostile than Microsoft to openness, using both technical and legal means to hobble what its customers and developers are allowed to do. If you are thinking of buying an Apple product, please don't. If you're unfortunate enough to already own Apple products, please consider switching to an open platform like Linux or FreeBSD that doesn't impose "1984"-like restrictions on your freedom.
I appreciate his desire to support the spirit of free software, to the point of turning long-time users away from his work. When I have downloaded earlier versions of Remind, I have noticed and taken seriously the author's remarks about Apple's closed approach. This version goes farther than offering admonition; it makes life difficult for users. I have always wondered about the stridency of some people in the free software community. I understand that they feel the only way to make a stand on their principles is to damage the freedom and openness of their own software. And they may be right. Companies like Microsoft and Apple are not going to change just because an independent software developer asks them to.
Then again, neither am I. I do take seriously the concerns expressed by Remind's author and others like him. The simple fact, though, is that I'm not likely to switch from my Mac because I find one of my command-line Unix tools no longer available. I have concerns of my own with Apple's approach to software these days, but at this point I still choose to use its products.
If it becomes too difficult to install the new versions of Remind, what will I do? Well, I could install the older version I have cached on my machine. Or perhaps I'll run a script such as rem2ics to free my data from Remind's idiosyncratic representation into the RFC2445 standard format. Then I would look for or write a new tool. Remind's author might be pleased that I wouldn't likely adopt Apple's iCal program and that I would likely make any tool I wrote for myself available to the rest of the world. I would not, however, tell users of any particular platform not to use my code. That's not my style.
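For the curious, the heart of the transformation a tool like rem2ics performs can be sketched in a few lines of Python. The Remind entry format handled here is drastically simplified (real Remind syntax is far richer), and the output is only a bare VEVENT fragment rather than a complete RFC 2445 calendar:

```python
import re

MONTHS = {"Jan": 1, "Feb": 2, "Mar": 3, "Apr": 4, "May": 5, "Jun": 6,
          "Jul": 7, "Aug": 8, "Sep": 9, "Oct": 10, "Nov": 11, "Dec": 12}

def remind_to_vevent(line):
    """Convert a simplified Remind entry, e.g. 'REM 25 Dec 2011 MSG Christmas',
    into a minimal iCalendar VEVENT. Returns None if the line doesn't match."""
    m = re.match(r"REM (\d{1,2}) (\w{3}) (\d{4}) MSG (.*)", line)
    if m is None:
        return None
    day, mon, year, msg = m.groups()
    date = "%04d%02d%02d" % (int(year), MONTHS[mon], int(day))
    return "\n".join(["BEGIN:VEVENT",
                      "DTSTART;VALUE=DATE:" + date,
                      "SUMMARY:" + msg,
                      "END:VEVENT"])
```

Once the data lives in the standard format, any calendar tool -- open or otherwise -- can consume it, which is exactly the point of freeing it.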
I may yet choose to go that route anyway. As I continue to think about the issue, I may decide to respect the author's wishes and not use his program on my machine. If I do so, it will be because I want to show him that respect or because I am persuaded by his argument, not because I have to look at a two-line admonition or wait 30 seconds every time I run the program.
Note: I could perhaps have avoided all the problems by using a package manager for Macs such as homebrew to download and install Remind. But I have always installed Remind by hand in the past and started down that path again this time. I don't know if homebrew's version of Remind includes the 30-second delay at execution. Maybe next time I'll give this approach a try and find out.
Another of the interviews I've read recently was The Rolling Stone's 1994 interview with Steve Jobs, when he was still at NeXT. This interview starts slowly but gets better as it goes on. The best parts are about people, not about technology. Consider this, on the source of Jobs's optimism:
Do you still have as much faith in technology today as you did when you started out 20 years ago?
Oh, sure. It's not a faith in technology. It's faith in people.
Technology is nothing. What's important is that you have a faith in people, that they're basically good and smart, and if you give them tools, they'll do wonderful things with them. It's not the tools that you have faith in -- tools are just tools. They work, or they don't work. It's people you have faith in or not.
I think this is a basic attitude held by many CS teachers, about both technology and about the most important set of people we work with: students. Give them tools, and they will do wonderful things with them. Expose them to ideas -- intellectual tools -- and they will do wonderful things. This mentality drives me forward in much the same way that Jobs's optimism about the people who use Apple's tools drove him.
I also think that this is an essential attitude when you work as part of a software development team. You can have all the cool build, test, development, and debugging tools money can buy, but in the end you are trusting people, not technology.
Then, on people from a different angle:
Are you uncomfortable with your status as a celebrity in Silicon Valley?
I think of it as my well-known twin brother. It's not me. Because otherwise, you go crazy. You read some negative article some idiot writes about you -- you just can't take it too personally. But then that teaches you not to take the really great ones too personally either. People like symbols, and they write about symbols.
I don't have to deal with celebrity status in Silicon Valley or anywhere else. I do get to read reviews of my work, though. Every three years, the faculty of my department evaluate my performance as part of the dean's review of my work and his decision to consider me for another term. I went through my second such review last winter. And, of course, frequent readers here have seen my comments on student assessments, which we do at the end of each semester. I wrote about assessments of my spring Intelligent Systems course back in May. Despite my twice annual therapy sessions in the form of blog entries, I have a pretty good handle on these reviews, both intellectually and emotionally. Yet there is something visceral about reading even one negative comment that never quite goes away. Guys like Jobs probably do their best not to read newspaper articles and unsolicited third-party evals.
I'll have to try the twin brother gambit next semester. My favorite lesson from Jobs's answer, though, is the second part: While you learn to steel yourself against bad reviews, you learn not to take the really great ones too personally, either. Outliers is outliers. As Kipling said, all people should count with you, but none too much. The key in these evaluations is to gather information and use it to improve your performance. And that almost always comes out of the middle of the curve. Treating raves and rants alike with equanimity keeps you humble and sane.
Ultimately, I think one's stance toward what others say comes back to the critical element in the first passage from Jobs: trust. If you trust people, then you can train yourself to accept reviews as a source of valuable information. If you don't, then the best you can do is ignore the feedback you receive; the worst is that you'll damage your psyche every time you read them. I'm fortunate to work in a department where I can trust. And, like Jobs, I have a surprising faith in my students' fairness and honesty. It took a few years to develop that trust and, once I did, teaching came to feel much safer.
A few weeks ago, I ran across an article that quoted from published interviews with nine creative people. Over the last couple of days, I have been reading the original interviews that most interested me. Three in particular jogged my mind about creativity, art, and the making of things -- all of which are a part of how I view craft as a programmer and teacher.
Who would have thought that an interview with author Ernest Hemingway in The Paris Review's "The Art of Fiction No. 21" would make me think about computer programming and test-driven development? But it did. When asked about his writing schedule, Hemingway described a morning habit that I myself enjoy:
When I am working on a book or a story I write every morning as soon after first light as possible. There is no one to disturb you and it is cool or cold and you come to your work and warm as you write. You read what you have written and, as you always stop when you know what is going to happen next, you go on from there. You write until you come to a place where you still have your juice and know what will happen next and you stop and try to live through until the next day when you hit it again. You have started at six in the morning, say, and may go on until noon or be through before that. When you stop you are as empty, and at the same time never empty but filling, as when you have made love to someone you love. Nothing can hurt you, nothing can happen, nothing means anything until the next day when you do it again. It is the wait until the next day that is hard to get through.
I love this paragraph. While discussing mundane details of how he starts his writing day, Hemingway seamlessly shifts into a simile comparing writing -- and stopping -- to making love. Writers and other artists can say such things and simply be viewed for what they are: artists. I dare say many computer programmers feel exactly the same about writing programs. Many times I have experienced the strange coincident feelings of emptiness and fullness after a long day or night coding, and the longing to begin again tomorrow. Yet I knew, as Hemingway did, that the breaks were a necessary part of the discipline one needed to write well and consistently over the long haul. (Think sustainable pace, my friends.)
Yet, if one of us programmers were to say what Hemingway said above, to compare the feeling we have when we stop programming to the feeling we have after making love to a person we love, most people would have to fight back a smirk and suppress an urge to joke about nerds never having sex and not being able to get girls (or guys). The impolite among them would say it out loud. If you are careful in choosing your friends and perhaps a bit lucky, you will surround yourself with friends who react to you saying this with a sympathetic nod, because they know that you, too, are a writer, and something of an artist.
On a more practical note, the writing habit Hemingway describes resembles a habit many of us programmers have. In the world of TDD, you will often hear people say, "Stop at the end of the day with a failing test." My friend, poet and programmer Richard Gabriel, has spoken of ending the day in the middle of a line of code. Both ideas echo Hemingway's advice, because they leave us in the same great position the next morning: ready to start the day by doing something obvious, something concrete.
But why is that so important?
But are there times when the inspiration isn't there at all?
Naturally. But if you stopped when you knew what would happen next, you can go on. As long as you can start, you are all right. The juice will come.
Writing is hard. Starting is hard. But if you are a writer, you must write, you must start. Likewise a programmer. As many people will tell you, inspiration is a fickle and overrated gift. Hemingway speaks elsewhere in the interview of days filled with inspiration, but they are rare. The writer writes regardless of inspiration. In writing, one often creates the very inspiration he seeks.
As long as you can start, you are all right.
Later in the piece, Hemingway has something interesting to say about a different sort of starting: starting a career. When asked if financial security can be a detriment to good writing, he says:
If it came early enough and you loved life as much as you loved your work it would take much character to resist the temptations. Once writing has become your major vice and greatest pleasure only death can stop it. Financial security then is a great help as it keeps you from worrying. Worry destroys the ability to write. [Worry] attacks your subconscious and destroys your reserves.
This made me think of young CS grads in start-up companies, working to get by on a minimal budget while fulfilling a passion to make something. Being poor may not be as good for our souls as some would have us think, but it does inoculate us against temptations available to us only if we have resources. Once programming is your habit -- "your major vice and greatest pleasure" -- then you are on the path for a productive life as a programmer. If financial success comes too early, or if you are born with resources, you can still become a programmer, but you may have to battle the attraction of things that will get in the way of the work necessary to develop your craft.
This is one of the motives behind the grueling 6-year trial to which we subject new profs in our universities: to instill habits of work and thought before they receive the temptation-heavy mantle of tenure. Unlike Hemingway's prescription, though, at most research schools the tenure-track phase usually includes an unhealthy dose of uncertainty and worry. But then again, maybe being a poor, struggling young writer or artist does, too.
That's one reason I like Hemingway's answer so much. He does not romanticize being poor. He acknowledges that, once one has the habit of writing, financial security can be a great benefit, because it relieves the writer of the stress that can kill her productivity.
Despite enjoying these insightful passages so much, I cannot say that this is a great interview. Hemingway is too often unwilling to talk about elements of the craft of writing, and he expends too many words telling interviewer George Plimpton -- an intelligent man and an accomplished journalist himself -- that his questions are clichéd, worn, or stupid. Still, I was driven to read through to the end, and I enjoyed it.
Since the mid-1990s, there has been a healthy conversation around refactoring, the restructuring of code to improve its internal structure without changing its external behavior. Thanks to Martin Fowler, we have a catalog of techniques for refactoring that help us restructure code safely and reliably. It is a wonderful tool for learners and practitioners alike.
When it comes to writing new code, we are not so lucky. Most of us learn to program by learning to write new code, yet we rarely learn techniques for adding code to a program in a way that is as safe and reliable as the refactorings we know and love.
You might think that adding code would be relatively simple, at least compared to restructuring a large, interconnected web of components. But how can we move with the same confidence when adding code as we do when we follow a meticulous refactoring recipe under the protection of good unit tests? Test-driven design is a help, but I have never felt like I had the same sort of support writing new code as when I refactor.
So I was quite happy to run across J.B. Rainsberger's Adding Behavior with Confidence. Very, very nice! I only wish I had read it a couple of months ago, when I first saw the link. Don't make the same mistake; read it now.
Rainsberger gives a four-step process that works well for him:
- Identify an assumption that the new behavior needs to break.
- Find the code that implements that assumption.
- Extract that code into a method whose name represents the generalisation you're about to make.
- Enhance the extracted method to include the generalisation.
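The four steps might look something like this in practice. This is a minimal sketch of my own; the order-total scenario and all the names in it are hypothetical, not from Rainsberger's article:

```python
# Step 1: identify the assumption the new behavior needs to break --
#   here, that shipping is always a $5 flat rate.
# Step 2: find the code that implements it -- the literal 5.00 in:
#
#   def total(items):
#       return sum(items) + 5.00

# Step 3: extract that code into a method whose name represents
# the generalisation we are about to make.
def shipping_cost(items):
    # Step 4: enhance the extracted method to include the
    # generalisation -- large orders now pay a surcharge.
    base = 5.00
    if len(items) > 10:
        base += 2.50
    return base

def total(items):
    return sum(items) + shipping_cost(items)
```

Note that after Step 3 but before Step 4, the program's behavior is unchanged, so the existing tests still protect us; only Step 4 introduces anything new.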
I was first drawn to the idea that a key step in adding new behavior is to make a new method, procedure, or function. This is one of the basic skills of computer programming. It is one of the earliest topics covered in many CS1 courses, and it should be taught sooner in many others.
Even still, most beginners seem to fear creating new methods. Even more advanced students will regress a bit when learning a new language, especially one that works differently than the languages they know well. A function call introduces a new point of failure: parameter passing. When worried about succeeding, students generally try to minimize the number of potential points of failure.
Notice, though, that Rainsberger does not start with a brand new method, empty except for the new code to be written. His technique asks us first to factor existing code out into a new method. This breaks the job of writing the new code into two smaller steps: first refactor, relying on a well-known technique and the existing tests to provide safety; second, add the new code. (These are Steps 3 and 4 in Rainsberger's technique.)
That isn't what really grabbed my attention first, however. The real beauty for me is that extracting a method forces us to give it a name. I think that naming gives us great power, and not just in programming. A lot of times, CS textbooks make a big deal about procedures as a form of abstraction, and they are. But that often feels so abstract... For programmers, especially beginners, we might better focus on the fact that procedures help us name things in our programs. Names, we get.
By naming a procedure that contains a few lines of code, we get to say what the code does. Even the best-factored code with good variable names tends to say how something is done, not what it is doing. Creating and calling a method separates the two: the call site says what is being done, and the method implements how it is done. This separation gives us new power: to refactor the code in other ways, certainly. Rainsberger reminds us that it also gives us power to add code more reliably!
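A tiny example of the what/how split, my own and purely illustrative: the call site reads as what is happening, while the method body carries the how.

```python
def is_leap_year(year):
    # How: the divisibility rules of the Gregorian calendar.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The client says what it means, not how the answer is computed.
def days_in_february(year):
    return 29 if is_leap_year(year) else 28
```

A reader of `days_in_february` never has to think about century years; the name hides the how behind a what.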
"How can I add code to a program? Write a new function." This is an unsurprising, unhelpful answer most of the time, especially for novices who just see this as begging the question. "Okay, but what do I do then?" Rainsberger makes it a helpful answer, if a bit surprising. But he also puts it in a context with more support, what to do before we start writing the new code.
Creating and naming procedures was the strongest take-home point for me when I first read this article. As the ideas steeped in my mind for a few days, I began to have a greater appreciation for Rainsberger's focus on assumptions. Novice thinkers have trouble with assumptions. This is true whether they are learning to program, learning to read and analyze literature, or learning to understand and argue public policy issues. They have a hard time seeing assumptions, both the ones they make and the ones made by other writers. When the assumptions are pointed out, they are often unsure what to do with them, and are tempted to skip right over them. Assumptions are easy to ignore sometimes, because they are implicit and thus easy to lose track of when deep in an argument.
Learning to understand and reason about assumptions is another important step on the way to mature thinking. In CS courses, we often introduce the idea of preconditions and postconditions in Data Structures. (Students also see them in a discrete structures course, but there they tend to be presented as mathematical tools. Many students dismiss their value out of hand.) Writing pre- and postconditions for a method is a way to make the assumptions in your program explicit. Unfortunately, most beginners don't yet see the value in writing them. They feel like an extra, unnecessary step in a process dominated by the uncertainty they feel about their code. Assuring them that these invariants help is usually like pushing a rock up a hill. Tomorrow, you get to do it again.
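Making assumptions explicit can be as lightweight as a pair of asserts. This binary-search sketch is my own invention, meant only to illustrate pre- and postconditions in code, not any particular textbook's notation:

```python
def index_of(sorted_items, target):
    # Precondition: the input really is sorted in non-decreasing order.
    assert all(sorted_items[i] <= sorted_items[i + 1]
               for i in range(len(sorted_items) - 1)), \
        "precondition violated: input must be sorted"

    lo, hi = 0, len(sorted_items)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    result = lo if lo < len(sorted_items) and sorted_items[lo] == target else -1

    # Postcondition: result is -1 or a valid index holding target.
    assert result == -1 or sorted_items[result] == target
    return result
```

The asserts say out loud what the loop silently assumes; when a caller passes an unsorted list, the program fails at the assumption rather than somewhere downstream.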
One thing I like about Rainsberger's article is that it puts assumptions into the context of a larger process aimed at helping us write code more safely. Mathematical reasoning about code does that, too, but again, students often see it as something apart from the task of programming. Rainsberger's approach is undeniably about code. This technique may encourage programmers to begin thinking about assumptions sooner, more often, and more seriously.
As I said, I haven't seen many articles or books talk about adding code to a program in quite this way. Back in January, "Uncle Bob" Martin wrote an article in the same spirit as this, called The Transformation Priority Premise. It offers a grander vision, a speculative framework for all additions to code. If you know Uncle Bob's teachings about TDD, this article will seem familiar; it fits quite nicely with the mentality he encourages when using tests to drive the growth of a program. While his article is more speculative, it seems worthy of more thought. It encourages the tiniest of steps as each new test provokes new code in our program. Unfortunately, it takes such small steps that I fear I'd have trouble getting my students, especially the novices, to give it a fair try. I have a hard enough time getting most students to grok the value of TDD, even my seniors!
I have similar concerns about Rainsberger's technique, but his pragmatism and unabashed focus on code give me hope that it may be useful in teaching students how to add functionality to their programs.
I ran across an ad today for a part-time position to help promote a local private school. The ad ended, "Marketing degree required."
As surprising as it may seem for a university professor, my first thought was, "Why require a university degree? What you need is a person with interesting ideas and a plan."
I quickly realized that I was being a little rash. There is certainly value in holding a degree, for knowledge and practices learned while completing it. Maybe I was being unfair to marketing in particular? Having studied accounting as well as CS as an undergrad, I took a little marketing, too, and wasn't all that impressed. But that is surely the assessment of someone who is mostly ignorant about the deep content of that discipline. (I do, however, know enough to think that the biggest part of a marketing degree ought to be based in psychology, both of individuals and groups.)
Would I have reacted the same way if the ad had said, "Computer Science degree required"? Actually, I think so. Over the last few years, I have grown increasingly cynical about the almost unthinking use of university degrees as credentials.
As department head and as teacher, I frequently run into non-traditional students who are coming back from the tech industry to earn the CS degrees they don't have but need to advance in their careers, or even to switch jobs or companies in lateral moves. They are far more experienced than our new graduates, and often more ready for any job than our traditional students. A degree can, in fact, be useful for most of these students. They are missing the CS theory that lies at the foundation of the discipline, and often they lack the overarching perspective that ties it all together. An undergrad degree in CS can give them that and help them appreciate the knowledge they already have even more.
But should they really need an undergraduate degree to get a programming job after 10, 15, even 25 years in the field? In the end, it's about what you can do, not what you know. Some of these folks can do plenty -- and know plenty, to boot. They just don't have a degree.
I figure that, all other things being equal, our grads ought to have a big advantage when applying for a job in our field, whether the job requires a degree or not. Unless a job demands a skill we can't or won't give them, say, x years experience in a particular industry language, then our grads should be ready to tackle almost any job in the industry with a solid background and an ability to learn new languages, skills, and practices quickly and well.
If not, then we in the university are doing something wrong.
Or, The Best Foresight Comes from a Good Model
In my previous entry, I mentioned re-reading Asimov's Foundation Trilogy and made a passing joke about psychohistory being a great computational challenge. I've never heard a computer scientist mention psychohistory as a primary reason for getting involved with computers and programming. Most of us were lucky to see so many wonderful and more approachable problems to solve with a program that we didn't need to be motivated by fiction, however motivating it might be.
I have, though, heard and read several economists mention that they were inspired to study economics by the ideas of psychohistory. The usual reason for the connection is that econ is the closest thing to psychohistory in modern academia. Trying to model the behavior of large groups of people, and reaping the advantages of grouping for predictability, is a big part of what macroeconomics does. (Asimov himself was most likely motivated in creating psychohistory by physics, which excels at predicting the behavior of masses of atoms over predicting the behavior of individual atoms.)
As you can tell from recent history, economists are nowhere near the ability to do what Hari Seldon did in Foundation, but then Seldon did his work more than 10,000 years in the future. Maybe 10,000 years from now economists will succeed as much and as well. Like my economist friends, I too am intrigued by economics, which shares some important features with computer science, in particular a concern with the trade-offs among limited resources and the limits of rational behavior.
The preface to the third book in Asimov's trilogy, Second Foundation, includes a passage that caught my eye on this reading:
He foresaw (or he solved his [system's] equations and interpreted its symbols, which amounts to the same thing)...
I could not help but be struck by how this one sentence captured so well the way science empowers us and changes the intellectual world in which we live. Before the rapid growth of science and broadening of science education, the notion of foresight was limited to personal experience and human beings' limited ability to process that experience and generalize accurately. When someone had an insight, the primary way to convince others was to tell a good story. Foresight could be feigned and sold through stories that sounded good. With science, we have a more reliable way to assess the stories we are told, and a higher standard to which we can hold them.
(We don't always do well enough in using science to make us better listeners, or better judges of purported foresights. Almost all of us can do better, both in professional settings and personal life.)
As a young student, I was drawn to artificial intelligence as the big problem to solve. Like economics, it runs directly into problems of limited resources and limited rationality. Like Asimov's quote above, it runs directly into the relationship between foresight and accurate models of the world. During my first few years teaching AI, I was often surprised by how fiercely my students defended the idea of "intuition", a seemingly magical attribute of men and women forever unattainable by computer programs. It did me little good to try to persuade them that their belief in intuition and "gut instinct" was outside the province of scientific study. Not only didn't they care; that was an integral part of their belief. The best thing I could do was introduce them to some of the techniques used to write AI programs and to show them such programs behaving in a seemingly intelligent manner in a situation that piqued my students' interest -- and maybe opened their minds a bit.
Over the course of teaching those early AI courses, I was eventually able to see one of the fundamental attractions I had to the field. When I wrote an AI program, I was building a model of intelligent behavior, much as Seldon's psychohistory involved building a model of collective human behavior. My inspiration did not come from Asimov, but it was similar in spirit to the inspiration my economist friends drew from Asimov. I have never been discouraged or deterred by any arguments against the prospect of artificial intelligence, whether my students' faith-based reasons or by purportedly rational arguments such as John Searle's Chinese room argument. I call Searle's argument "purportedly rational" because, as it is usually presented, ultimately it rests on the notion that human wetware -- as a physical medium -- is capable of representing symbols in a way that silicon or other digital means cannot.
I always believed that, given enough time and enough computational power, we could build a model that approximated human intelligence as closely as we desired. I still believe this and enjoy watching (and occasionally participating in) efforts that create more and more intelligent programs. Unlike many, I am undeterred by the slow progress of AI. We are only sixty years into an enterprise that may take a few thousand years. Asimov taught me that much.