Kurt Vonnegut never hesitated to tell his readers the morals of his stories. The frontispiece of his novel Mother Night states its moral upfront:
We are what we pretend to be, so we must be careful about what we pretend to be.
Pretending is a core thread that runs through all of Vonnegut's work. I recognized this as a teenager, and perhaps it is what drew me to his books and stories. As a junior in high school, I wrote my major research paper in English class on the role fantasy played in the lives of Vonnegut's characters. (My teachers usually resisted my efforts to write about authors such as Kafka, Vonnegut, and Asimov, because they weren't part of "the canon". They always relented, eventually, and I got to spend more time thinking about works I loved.)
I first used this sentence about pretending in my eulogy for Vonnegut, which includes a couple of other passages on similar themes. Several of those are from Bokononism, the religion created in his novel Cat's Cradle as a way to help the natives of the island of San Lorenzo endure their otherwise unbearable lives. Bokononism had such an effect on me that I spent part of one summer many years ago transcribing The Books of Bokonon onto the web. (In these more modern times, I share Bokonon's wisdom via Twitter.)
Pretending is not just a way to overcome pain and suffering. Even for Vonnegut, play and pretense are the ways we construct the sane, moral, kind world in which we want to live. Pretending is, at its root, a necessary component in how we train our minds and bodies to think and act as we want them to. Over the years, I've written many times on this blog about the formation of habits of mind and body, whether as a computer scientist, a student, or a distance runner.
Many people quote Aristotle as the paradigm of this truth:
We are what we repeatedly do. Excellence, then, is not an act, but a habit.
I like this passage but prefer another of his, which I once quoted in a short piece, What Remains Is What Will Matter:
Excellence is an art won by training and habituation. We do not act rightly because we have virtue or excellence, but rather we have those because we have acted rightly.
This idea came charging into my mind this morning as I read an interview with Seth Godin. He and his interviewers are discussing the steady stream of rejection that most entrepreneurs face, and how some people seem able to fight through it to succeed. What if a person's natural "thermostat" predisposes them to fold in the face of rejection? Godin says:
I think we can reset our inclinations. I'm certain that pretending we can is better than admitting we can't.
Vonnegut and Aristotle would be proud. We are what we pretend to be. If we wish to be virtuous, then we must act rightly. If we wish to be the sort of person who responds to rejection by working harder and succeeding, then we must work harder. We become the person we pretend to be.
As children, we think pretending is about being someone we aren't. And there is great fun in that. As teenagers, sometimes we feel a need to pretend, because we have so little control over our world and even over our changing selves. As adults, we tend to want to put pretending away as child's play. But this obscures a truth that Vonnegut and Aristotle are trying to teach us:
Pretending is just as much about being who we are as about being who we aren't.
As you and I consider the coming of a new year and what we might resolve to do better or differently in the coming twelve months that will make a real difference in our lives, I suggest we take a page out of Vonnegut's playbook.
Think about the kind of world you want to live in, then live as if it exists.
Think about the kind of person you want to be, then live as if you are that person.
I occasionally read and hear people give advice about how to find a career, vocation, or avocation that someone will enjoy and succeed in. There is a lot of talk about passion, which is understandable. Surely, we will enjoy things we are passionate about, and perhaps then we want to put in the hours required to succeed. Still, "finding your passion" seems a little abstract, especially for someone who is struggling to find one.
This weekend, I read A Man, A Ball, A Hoop, A Bench (and an Alleged Thread)... Teller!. It's a story about the magician Teller, one half of the wonderful team Penn & Teller, and his years-long pursuit of a particular illusion. While discussing his work habits, Teller said something deceptively simple:
I love the stuff you never see.
I knew immediately just what he meant.
I can say this about teaching. I love the hours spent creating examples, writing sample code, improving it, writing and rewriting lecture notes, and creating and solving homework assignments. When a course doesn't go as I had planned, I like figuring out why and trying to fix it. Students see the finished product, not the hours spent creating it. I enjoy both.
I don't necessarily enjoy all of the behind-the-scenes work. I don't really enjoy grading. But my enjoyment of the preparation and my enjoyment of the class itself -- the teaching equivalent of "the performance" -- carries me through.
I can also say the same thing about programming. I love to fiddle with source code, organizing and rewriting it until it's all just so. I love to factor out repetition and discover abstractions. I enjoy tweaking interfaces, both the interfaces inside my code and the interfaces my code's users see. I love that sudden moment of pleasure when a program runs for the first time. Users see the finished product, not the hours spent creating it. I enjoy both.
Again, I don't necessarily enjoy everything that I have to do behind the scenes. I don't enjoy twiddling with configuration files, especially at the interface to the OS. Unlike many of my friends, I don't always enjoy installing and uninstalling all the libraries I need to make everything work in the current version of the OS and interpreter. But that time seems small compared to the time I spend living inside the code, and that carries me through.
In many ways, I think that Teller's simple declaration is a much better predictor of what you will enjoy in a career or avocation than other, fancier advice you'll receive. If you love the stuff other folks never see, you are probably doing the right thing for you.
Stanley Fish wrote this week about the end of a course he taught this semester, on "law, liberalism and religion". In this course, his students read a number of essays and articles outside the usual legal literature, including works by Locke, Rawls, Hobbes, Kant, and Rorty. Fish uses this essay to respond to recent criticisms that law schools teach too many courses like this, which are not helpful to most students, who will, by and large, graduate to practice law.
Most anyone who teaches in a university hears criticisms of this sort now and then. When you teach computer science, you hear them frequently. Most of our students graduate and enter the practice of software development. How useful are the theory of computation and the principles of programming languages? Teach 'em Java Enterprise Edition and Eclipse and XSLT and Rails.
My recent entry Impractical Programming, With Benefits starts from the same basic premise that Fish starts from: There is more to know about the tools and methodologies we use in practice than meets the eye. Understanding why something is as it is, and knowing that something could be better, are valuable parts of a professional's preparation for the world.
Fish talks about these values in terms of the "purposive" nature of the enterprise in which we practice. You want to be able to think about the bigger picture, because that determines where you are going and why you are going there. I like his connection to Searle's speech acts and how they help us to see how the story we tell gives rise to the meaning of the details in the story. He uses football as his example, but he could have used computer science.
He sums up his argument in this way:
That understanding is what law schools offer (among other things). Law schools ask and answer the question, "What's the game here?"; the ins and outs of the game you learn later, as in any profession. The complaint ... is that law firms must teach their new hires tricks of the trade they never learned in their contracts, torts and (God forbid) jurisprudence classes. But learning the tricks would not amount to much and might well be impossible for someone who did not know -- in a deep sense of know -- what the trade is and why it is important to practice it.
Such a deep understanding is even more important in a discipline like computing, because our practices evolve at a much faster rate than legal practices. Our tools change even more frequently. When we taught functional programming ten or fifteen years ago, many of our students simply humored us. This wasn't going to help them with Windows programming, but, hey, they'd learn it for our sake. Now they live in a world where Scala, Clojure, and F# are in the vanguard. I hope what they learned in our Programming Languages course has helped them cope with the change. Some of them are even leading the charge.
The practical test of whether my Programming Languages students learned anything useful this semester will come not next year, but ten or fifteen years down the road. And, as I said in the Impractical Programming piece, a little whimsy can be fun in its own right, even while it stretches your brain.
A couple of weeks ago, I read this article about the syllabi that the late author David Foster Wallace wrote for his intro lit courses. This weekend, I finally got to read the syllabi themselves. I've generally found Wallace's long fiction ponderous, but I enjoyed reading him at the scale of a syllabus.
He sounds like a good teacher. This passage, from pages 3-4 of the syllabus, is an awesome encouragement and defense of asking questions in class:
Anybody gets to ask any question about any fiction-related issues she wants. No question about literature is stupid. You are forbidden to keep yourself from asking a question or making a comment because you fear it will sound obvious or unsophisticated or lame or stupid. Because critical reading and prose fiction are such hard, weird things to try to study, a stupid-seeming comment or question can end up being valuable or even profound. I am deadly serious about creating a classroom environment where everyone feels free to ask or speak about anything she wishes. So any student who groans, smirks, mimes machine-gunning or onanism, eye-rolls, or in any way ridicules some other student's in-class question/comment will be warned once in private and on the second offense will be kicked out of class and flunked, no matter what week it is. If the offender is male, I am also apt to find him off-campus and beat him up.
Perhaps this stands out in greater relief to me as this semester's Programming Languages course winds down. We did not have a problem with students shutting down other students' desire to comment and inquire, at least not that I noticed. My problem was getting students to ask questions at all. Some groups of students take care of themselves in this regard; others need encouragement.
I didn't react quickly enough this semester to recognize this as a too-quiet group and to do more to get them to open up. The real problem only became apparent to me at about the 75% mark of the course. It has been driven home further over the last couple of weeks, as a few students have begun to ask questions in preparation for the final. Some of their misconceptions run deep, and we would have all been better off to uncover and address them long ago. I'll be more careful next time.
The above paragraph sets a high standard, one I'm not sure I have the energy or acumen to deliver. Encouragement and policies like this create a huge burden on the instructor, who must walk a very tough walk. Promises made and unkept are usually worse than promises never made at all. This is especially true when the trust they seek to develop involves the fears and personal integrity of students. With promises like these, the professor's personal integrity is on the line, too.
Still, I can aspire to do more. Even if I don't reach Wallace's level, perhaps I can make my course enough better that students will achieve more.
(And I love reading a syllabus that makes me look up the definition of a word. Those literature professors...)
I've been reading a lot about Daniel Kahneman's new book, Thinking, Fast and Slow. One of the themes of the book is how our brains include two independent systems for organizing and accessing knowledge. System One is incredibly fast but occasionally (perhaps often) wrong. It likely developed early in our biological history and provided humans with an adaptive advantage in a dangerous world. System Two developed later, after humans had survived to create more protective surroundings. It is slow -- conscious, deliberative -- and more often right.
One reviewer summarized the adaptive value of System One in this way:
In the world of the jungle, it is safer to be wrong and quick than to be right and slow.
This phrase reminded me of an old post by Allan Kelly, on the topic of gathering requirements for software. The entry's title is also its punch line:
You are better off being generally right than precisely wrong.
These two quotes are quite different in important ways. Yet they are related in some interesting ways, too.
It is easier to be fast and generally right than to be fast and precisely right. The pattern-matching mechanism in our brains and the heuristics we use consciously are fast, but they are often imprecise. If generally right is good enough, then fast is possible.
Attempts to be slow and precisely right often end up being slow and precisely wrong. Sometimes, the world changes while we are thinking. Other times, we end up solving the wrong problem because we didn't understand our goals or the conditions of the world as well we thought we did at the outset.
Evolution has given us two mechanisms with radically different trade-offs and, it turns out, a biological bias toward quick and wrong.
When I talk with friends who dislike or don't understand agile approaches, I find that they often think that agile folks overemphasize the use of System One in software development. Why react, be wrong, and learn from the mistake, when we could just think ahead and do it right the first time?
In one way, they are right. Some proponents of agile approaches speak rather loosely about Big Design Up Front and about You Aren't Gonna Need It. They leave the impression that one can program without thinking, so long as one takes small enough steps and learns from feedback. They also leave the impression that everyone should work this way, in all contexts. Neither of these impressions is accurate.
I try to help my skeptical friends to understand how a "quick and (sometimes) wrong" mindset can be useful for me even in contexts where I could conceivably plan ahead well and far. I try to help them understand that I really am thinking all the time I'm working, but that I treat any products of thought that are not yet in code as contingent, awaiting the support of evidence gained through running code.
And then I let them work in whatever way makes them successful and comfortable.
I think that being aware of the presence of Systems One and Two, and the fundamental trade-off between them, can help agile developers work better. Making conscious, well-founded decisions about how far to think ahead, about what and how much to test, and about when and how often to refactor are, in the end, choices about which part of our brain to use at any given moment. Context matters. Experience matters. Blindly working in a quick-and-generally-right way is no more productive for most of us than working in a slow-and-sometimes-precisely-wrong way.
I remain endlessly fascinated with the evolution of the news industry in the Internet Age, and especially with the discussions of same within the industry itself. Last week, Clay Shirky posted Institutions, Confidence, and the News Crisis in response to Dean Starkman's essay in the Columbia Journalism Review, Confidence Game. It's clear that not everyone views the change enabled by the internet and the web as a good thing.
Of course, my interest in journalism quickly spills over into my interest in the future of my own institution, the university. In Revolution Out There -- And Maybe In Here, I first began to draw out the similarities between the media and the university, and since then I've written occasionally about connections [ 1 | 2 | 3 ]. Some readers have questioned the analogy, because universities aren't media outlets. But in several interesting ways, they are. Professors write textbooks, lectures, and supporting materials. Among its many purposes, a university course disseminates knowledge. Faculty can object that a course does more than that, which is true, but from many people's perspectives -- many students, parents, and state legislators included -- dissemination is its essential purpose.
Universities aren't solely about teaching courses. They also create knowledge, through basic and applied research, and through packaging existing work in new and more useful ways. But journalists also create and package knowledge in similar ways, through research, analysis, and writing. Indeed, one of the strongest arguments by journalism traditionalists like Starkman is that new models of journalism often make little or no account of public-interest reporting and the knowledge creation function of media institutions.
Most recently, I wrote about the possible death of bundling in university education, which I think is where the strongest similarity between the two industries lies. The biggest problems in journalism aren't with what journalists do but with the way in which they bundle, sell, and pay for what they do. This is also the chink in the armor of the university. For a hundred years, we have bundled several different functions into a whole that was paid for by the public through its governments and through people's willingness to pay tuition. As more and more options become available to people, the people holding the purses are beginning to ask questions about the direct and indirect value they receive.
We in the universities can complain all we want about the Khan Academy and the University of Phoenix and how what we do is superior. But we aren't the only people who get to create the future. In the software development world, there has long been interest in apprenticeship models and other ways to prepare new developers that bypass the university. It's the software world's form of homeschooling.
(Even university professors are beginning to write about the weakness of our existing model. Check out Bryan Caplan's The Magic of Education for a discussion of education as being more about signaling than instruction.)
I look at my colleagues in industry who make a good living as teachers: as consultants to companies, as the authors of influential books and blogs, and as conference speakers. They are much like freelance journalists. We are even starting to see university instructors who want to focus on teaching leave higher education and work as consultants and freelance developers of courses and instructional material. Professors may not be able to start their own universities yet, the way doctors and lawyers can set up their own practices, but the flat world of the web gives them many more options. As Shirky says of the journalism world, we need experiments like this to help us create the future.
In the journalism world, there is a divide between journalists arguing that we need existing media institutions to preserve the higher goals of journalism and journalists arguing that new models are arising naturally out of new technologies. Sometimes, the first group sounds like it is arguing for the preservation of institutions for their own sake, and the latter group sounds like it is rooting for existing institutions to fall, whatever the price. We in the university need to be mindful that institutions are not the same as their purpose. We have enough lead time to prepare ourselves for an evolution I think is inevitable, but only if we think hard and experiment ourselves.
I linked to Jacob Harris's recent In Praise of Impractical Programming in my previous entry, in the context of programming's integral role in the modern newsroom. But as the title of his article indicates, it's not really about the gritty details of publishing a modern web site. Harris rhapsodizes about wizards and the magic of programming, and about a language that is for many of my colleagues the poster child for impractical programming languages, Scheme.
If you clicked through to the article but stopped reading when you ran into MIT 6.001, you might want to go back and finish reading. It is a story of how one programmer looks back on college courses that seemed impractical at the time but that, in hindsight, made him a better programmer.
There is a tension in any undergraduate CS program between the skills and languages of today and big ideas that will last. Students naturally tend to prefer the former, as they are relevant now. Many professors -- though not all -- prefer academic concepts and historical languages. I encounter this tension every time I teach Programming Languages and think, should I continue to use Scheme as the course's primary language?
As recently as the 1990s, this question didn't amount to much. There weren't any functional programming languages at the forefront of industry, and languages such as C++, Java, and Ada didn't offer the course much.
But now there are Scala and Clojure and F#, all languages in play in industry, not to mention several "pretty good Lisps". Wouldn't my students benefit from the extensive libraries of these languages? Their web-readiness? The communities connecting the languages to Hadoop and databases and data analytics?
I seriously consider these questions each time I prep the course, but I keep returning to Scheme. Ironically, one reason is precisely that it doesn't have all those things. As Harris learned,
Because Scheme's core syntax is remarkably impoverished, the student is constantly pulling herself up by her bootstraps, building more advanced structures off simpler constructs.
In a course on the principles of programming languages, small is a virtue. We have to build most of what we want to talk about. And there is nothing quite so stark as looking at half a page of code and realizing, "OMG, that's what object-oriented programming is!", or "You mean that's all a variable reference is?" Strange as it may sound, the best way to learn deeply the big concepts of language may be to look at the smallest atoms you can find -- or build them yourself.
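To make that "OMG" moment concrete, here is a minimal sketch of the kind of half-page I have in mind, assuming any Scheme with `error` (Racket has it); `make-point` is my illustrative name, not something from the course. An "object" is just a closure over its state, dispatching on a message symbol:

```scheme
;; An "object" built from nothing but lambda: make-point closes over
;; x and y, and the returned procedure dispatches on a message symbol.
(define (make-point x y)
  (lambda (msg . args)
    (cond ((eq? msg 'x) x)
          ((eq? msg 'y) y)
          ((eq? msg 'move)          ; "method" returning a new object
           (make-point (+ x (car args)) (+ y (cadr args))))
          (else (error "unknown message:" msg)))))

(define p (make-point 3 4))
(p 'x)                              ; → 3
(define q (p 'move 1 1))
(q 'y)                              ; → 5
```

Encapsulation, message dispatch, even a kind of method lookup: all of it falls out of lexical closure, which is exactly the sort of realization the course is after.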
Harris won't argue that "journalism schools should squander ... dearly-won computer-science credits on whimsical introductions to programming" such as this. I won't even argue that we in CS spend too many of our limited credits on whimsy. But we shouldn't renounce our magic altogether, either, for perfectly practical reasons of learning.
And let's not downplay too much the joy of whimsy itself. Students have their entire careers to muck around in a swamp of XML and REST and Linux device drivers, if that's what they desire. There's something pretty cool about watching DrRacket spew, in a matter of a second or two, twenty-five lines of digits as the value of a trivial computation.
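The sort of trivial computation I mean is something like this, assuming a Scheme with exact bignum arithmetic, as Racket provides:

```scheme
;; Factorial over exact integers: no overflow, no rounding,
;; just an ever-longer stream of digits.
(define (fact n)
  (if (zero? n)
      1
      (* n (fact (- n 1)))))

(fact 1000)   ; an exact integer of more than 2,500 digits
```

Four lines of code, a screenful of digits, and a student who suddenly wonders how the machine did that.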
As Harris says,
... if you want to advance as a programmer, you need to take some impractical detours.
He closes with a few suggestions, none of which lapse into the sort of navel-gazing and academic irrelevance that articles like this one sometimes do. They all come down to having fun and growing along the way. I second his advice.
Jacob Harris begins his recent In Praise of Impractical Programming with a short discussion of how programming is becoming an integral part of the newsroom:
For the past few years, I've been working as a software developer in the newsroom, where perceptions of my kind have changed from novelty to a necessity. Recognizing this, some journalism schools now even require programming courses to teach students practical skills with databases or web frameworks. It's thrilling to contemplate a generation of web-hacking journalists -- but I wish we could somehow squeeze a little magic into their course load.
This seems like a natural evolutionary path that many industries will follow in the coming years or decades. At first it will be enough to use other people's tools. Then, practitioners will want to be able to write code in a constrained environment, such as a web framework or a database application. Eventually, I suspect that at least a few of the programming practitioners will tire of the constraints, step outside of the box, and write the code -- and maybe even the tools -- they want and need. If historians can do it, so can journalists.