A Teacher Learns from Coaches -- Run to the Roar
For what is genius, I ask you,
but the capacity to be obsessed?
One thing about recovering from knee surgery: it gives you lots of time to read. In between bouts of pain, therapy, and sleep, I have been reading newspapers, magazines, and a few books lying around the house, including the rest of a Dave Barry book and the excellent but flawed law school memoir One L. Right now I am enjoying immensely re-reading Isaac Asimov's Foundation trilogy. (Psychohistory -- what a great computational challenge!)
The most unusual book I've read thus far has been Run to the Roar. This is not great literature, but it is an enjoyable read. It draws its power to attract readers from perhaps the most remarkable sports streak in history at any level: the men's squash team at Trinity College has won thirteen consecutive national titles and has not lost a single match during the streak. The thread that ties the book together as a story is the tale of the eleventh national championship match, in 2009 against Princeton University. This match is considered by many to be the greatest collegiate squash match of all time, won 5-4 by Trinity in see-saw fashion. Six of the nine matches went the full five games, with three of those requiring comebacks from 0-2 down, and only one match ended in straights. Two great teams, eighteen great players, battling to the last point. In the end, Trinity survived as much as it won.
I didn't know much about squash before the match, though I used to play a little racquetball. Fortunately, the book told me enough about the game and its history that I could begin to appreciate the drama of the match and the impressive nature of the streak. Unbelievable story, really.
But the book is also about its co-author, Trinity coach Paul Assaiante, both his life and his coaching methods. The book's subtitle is "Coaching to Overcome Fear", which captures in one phrase Assaiante's approach as well as any could. He works to help his players play in the moment, with no thoughts of external concerns weighing on their minds; enjoying the game they love and the privilege they have to test their preparation and efforts on the court in battle.
Assaiante views himself as a teacher, which makes what he says and the way he says it interesting to the teacher in me. There were many passages that struck a chord with me, whether as an "I have that pattern" moment or as an idea that I might try in my classroom. In the end, I saved two passages for more thought.
The first is the passage that leads off this entry. Assaiante attributed it to Steven Millhauser. I had never heard the quote, so I googled it. I learned that Millhauser is an award-winning author. Most hits took me to pages with the quote as the start of a longer passage:
For what is genius, I ask you, but the capacity to be obsessed? Every normal child has that capacity; we have all been geniuses, you and I; but sooner or later it is beaten out of us, the glory fades, and by the age of seven most of us are nothing but wretched little adults.
What a marvelous pair of sentences. It's easy to see why the sentiment means so much to Assaiante. His players are obsessive in their training and their playing. Their coach is obsessive in his preparation and his coaching. (The subtitle of one of the better on-line articles about Trinity's streak begins "Led by an obsessive coach...".)
My favorite story of his coaching obsessiveness was how he strives to make each practice different -- different lengths, different drills, different times of day, different level of intensity, and so on. He talks of spending hours to get each practice ready for the team, ready to achieve a specific goal in the course of a season aimed at the national championship.
Indeed, Assaiante is seemingly obsessive in all parts of his life; the book relates how he conquered several personal and coaching challenges through prolonged, intense efforts to learn and master new domains. One of the sad side stories of Run to the Roar explores whether Assaiante's obsessiveness with coaching squash contributed to the considerable personal problems plaguing his oldest child.
Most really good programmers are obsessive, too -- the positive, compulsive, almost uncontrollable drive that sticks with a thorny problem until it is solved, that tracks a pernicious bug until it is squashed. Programming rewards that sort of single-mindedness, elevating it to desirable status.
I see that drive in students. Some have survived the adults and schools that seem almost aimed at killing children's curiosity and obsessiveness. My very best students have maintained their curiosity and obsessiveness and channeled them positively into creative careers and vocations.
The best teachers are obsessive, too. The colleagues I admire most for their ability to lead young minds are some of the most obsessive people I know. They, too, seem to have channeled their obsessiveness well, enabling them to lead well-adjusted lives with happy, well-adjusted spouses and children -- even as they spend hours poring over new APIs, designing and solving new assignments for their students, and studying student code to find the key thing missing from their lectures, and then making those lectures better.
(As an aside, the Millhauser quote comes from his novel, "Edwin Mullhouse: The Life and Death of an American Writer 1943-1954 by Jeffrey Cartwright", a book purportedly written by a seven-year-old. I read a couple of reviews such as this one, and I'm not sure whether I'll give it a read or not. I am certainly intrigued.)
The second passage I saved from Assaiante's book comes from Jack Barnaby, Harvard's legendary squash and tennis coach:
The greatest limitation found in teachers is a tendency for them to teach the game the way they play it. This should be avoided. A new player may be quite differently gifted, and the teacher's personal game may be in many ways inappropriate to the pupil's talents. A good teacher assesses the mental and physical gifts of his pupil and tries to adapt to them. There is no one best way to play the game.
(I think this comes from Barnaby's out-of-print Winning Squash Racquets, but I haven't confirmed it.)
One of the hardest lessons for me to learn as a young teacher was not to expect every student to think, design, or code like me. For years I struggled and probably slowed a lot of my students' learning, as they either failed to adapt to my approach or fought me. Ironically, the ones most likely to succeed in spite of me were the obsessive ones, who managed to figure things out on their own by sheer effort!
Eventually I realized that being more flexible wasn't dumbing down my course but recognizing what Barnaby knew: students may have their own abilities and skills that are quite different from mine. My job is to help them maximize their abilities as best I can, not make them imitate me. Sometimes that means helping them to change, perhaps helping them recognize the need to change, but never simply to force them into the cookie cutter of what works well for me.
Sometimes I envy coaches, who usually work with a small cadre of student-athletes for an entire year, with most or all of them studying under the coach for four years. This gives the coach time to "assess the mental and physical gifts of his pupils and try to adapt to them". I teach courses that range from 10 to 40 students in size, two a year, and my colleagues teach six sections a year. We are lucky to see some students show up multiple times over the course of their time at the university, but it is with only a select few that I have the time and energy to work individually at that level. So I try to assess the collective gifts, interests, and abilities of each class and adapt how and what I teach them as best as I am able.
In the end, I enjoyed all the threads running through Run to the Roar. I'm still intrigued by the central lesson of learning to "run to the roar", to confront our fears and see how feeble what we fear often turns out to be. I think that a lot of college students are driven by fear more than we realize -- by fear of failing in a tough major, fear of disappointing their parents, fear of not understanding, or appearing unintelligent, or not finding a career that will fulfill and sustain them. I have encountered a few students over the years in whom I came to see the fear holding them back, and on at least some occasions was able to help them face those fears more courageously, or at least I hope so.
Having read this book, I hope this fall to be more sensitive to this potential obstacle to learning and enjoyment in my class, and to be more adaptable in trying to get over, through, or around it.
After two bouts of bad news about my knee -- first the diagnosis and then the ineffectiveness of simpler fixes -- I have received good news, all things considered. A week ago Monday, I underwent a relatively new form of partial knee replacement called makoplasty. The surgeon thought the operation went very well. Good news at last!
Now I am in a second round of recovery and rehabilitation, with therapy much like the first round: lots of non-weight-bearing motion to loosen the joint and to strengthen the muscles in the joint and the rest of the leg. There is certainly more pain than last time, but it's not so bad. I did have an adverse reaction to the medication prescribed for the pain, which has been uncomfortable and slowed my recovery, but I think I have to be happy with where I am and cautiously optimistic about where I can be in a few weeks and months. While that almost certainly will not include running, I should be able to return to an active life.
An experience from the surgery reminded me that, while I may be able to become active again, I won't be a youngster anymore. This procedure required spending one night in the hospital, so that they could monitor my vitals and my incision closely for a few hours. On the overnight shift, I had a college-aged nurse's aide who helped me several times. She called me "Honey". Twice. Each time, I felt twice my age, rather than twice hers.
Still, I look forward to the progress a little hard work can provide toward feeling young again.
Last week, I wrote an entry on managers and communication motivated by some of the comments John Lilly, formerly of Mozilla, made in an interview with Fast Company. In the same interview, Lilly mentioned teaching in a couple of ways that made me think about that part of my job.
... what's the difference between leadership and management?
For me, leadership is imagining the world that you want and figuring out how to go make it that way and how to get other people to help you. That happens sort of all up and down the spectrum of people. Teachers do that every day.
There is certainly an element of leadership in how our best teachers reach students. The K-12 education world talks a lot about this sort of thing, but that discussion often seems to lack much connection to the content of the discipline being taught. University educators work at the intersection of instruction and leadership in a way that other teachers don't. To me, this makes college teaching more interesting and sometimes more challenging. As with so many other things, simply recognizing this is a great first step toward doing a better job. In what ways am I as an instructor of, say, programming languages a leader to my students?
One part of the answer surfaces later in the article (emphasis added):
I've been interviewing a lot of people for jobs ... lately. I've been struck by how strongly their first job imprints them. People who went to Google out of school have a certain way of talking and thinking about the world, people who went to Amazon have a different way of thinking about it, Facebook a different way. ...
... You just start to see patterns. You say, "Oh, that's an Amazon construct," or "that's totally a Googley way to look at the world." ...
From an organizational leadership point of view, you should think hard about what your organization is imprinting on people. Your company, hopefully, will be huge. But what you imprint on people and the diaspora that comes out of your company later may or may not be an important and lasting legacy.
Most companies probably think of what they do as being about themselves; they are creating an organization. The tech start-up world has reminded us just how much cross-pollination there can be in an industry. People start their careers at a new company, learn and grow with the company, and then move on to join other companies or to start new ones. The most successful companies create something of a diaspora.
Universities are all about diaspora. Our whole purpose is to prepare students to move on to careers elsewhere. Our whole purpose is to imprint a way of thinking on students.
At one level, most academics don't really think in this way. I teach computer science. I'm not "imprinting" them; I am helping them learn a set of ideas, skills, and practices. It's all about the content, right?
Of course, it's not quite so simple. We want our graduates to know some things and be able to do some things. The world of CS is large, and in a four-year undergrad program we can expose our students to only a subset of that world. That choice is a part of the imprint we make. And the most important thing we can leave with our students is an approach to thinking and learning that allows them to grow over the course of a long career in a discipline that never sits still. That is a big part of the imprint we make on our students, too.
As members of a computer science faculty, we could think about imprinting as an organization in much the way Lilly discusses. Can someone say, "That's a totally UNI Computer Science way of thinking"? If so, what is that way of thinking? What would we like for it to mean? Are our alumni and their employers served well by how we imprint our students?
As a department head, I have a chance to talk to alumni and employers frequently. I usually hear good things, but that's not so surprising. Our department might be able to improve the job we do by thinking explicitly up front about the imprint we hope to have on students, a form of starting at the end and working backwards.
As a teacher, I often think about how students approach problems after they have studied with me. Thinking about the idea of my "imprint" on them is curious. Can someone say, "That's a totally Professor Wallingford way of thinking"? If so, would that be a good thing? How so? What does "totally Wallingford way of thinking" mean to my students now? What would I like for it to mean?
This line of thinking could be useful to me as I begin to prepare my fall course on programming languages. Without thinking very long, I know that I want to imprint on my students a love for all kinds of languages and an attitude of openness and curiosity toward learning and creating languages. What more?
Clay Shirky's latest piece talks a bit about the death of bundling in journalism, in particular in newspapers. Bundling is the phenomenon of putting different kinds of content into a single product and selling consumers the whole. Local newspapers contain several kinds of content: local news coverage, national news, sports, entertainment, classified ads, obituaries, help columns, comics, .... Most subscribers don't consume all this content and won't pay for it all. In the twentieth century, it worked to bundle it all together, get advertisers to buy space in the package, and get consumers to buy the whole package. The internet and web have changed the game.
As usual, Shirky talking about the state and future of newspapers sets me to thinking about the state and future of universities [ 1, 2, 3, 4 ]. Let me say upfront that I do not subscribe to the anti-university education meme traversing the internet these days, which seems especially popular among software people. Many of its proponents speak too glibly about a world without the academy and traditional university education. Journalism is changing, not disappearing, and I think the same will be true of universities. The questions are, How will universities change? How should they change? Will universities be pulled downstream against their will, or will they actively redefine their mission and methods?
I wonder about the potential death of bundling in universities. Making an analogy to Shirky's argument helps us to see some of the dissatisfaction with universities these days. About newspapers, he says:
Writing about the Dallas Cowboys in order to take money from Ford and give it to the guy on the City Desk never made much sense.
It's not hard to construct a parallel assertion about universities:
Teaching accounting courses in order to take money from state legislatures and businesses and give it to the humanities department never made much sense.
Majors that prepare students for specific jobs and careers are like the sports section. They put students in the seats. States and businesses want strong economies, so they are willing to subsidize students' educations, in a variety of ways. Universities use part of the money to support higher-minded educational goals, such as the liberal arts. Everyone is happy.
Well, they were in the 20th century.
The internet and web have drastically cut the cost of sharing information and knowledge. As a result, they have cut the cost of "acquiring" information and knowledge. When the world views the value of the bundle as largely about the acquisition of particular ingredients (sports scores or obituaries; knowledge and job skills), the business model of bundling is undercut, and the people footing most of the bill (advertisers; states and businesses) lose interest.
In both cases, the public good being offered by the bundle is the one most in jeopardy by unbundling. Cheap and easy access to targeted news content means that there is no one on the production side of the equation to subsidize "hard" news coverage for the general public. Cheap and easy access to educational material on-line erodes the university's leverage for subsidizing its public good, the broad education of a well-informed citizenry.
Universities are different from newspapers in one respect that matters to this analogy. Newspapers are largely paid for by advertisers, who have only one motivation for buying ads. Over the past century, public universities have largely been paid for by state governments and thus the general public itself. This funder of first resort has an interest in both the practical goods of the university -- graduates prepared to contribute to the economic well-being of the state -- and the public goods of the university -- graduates prepared to participate effectively in a democracy. Even so, over the last 10-20 years we have seen a steep decline in the amount of support provided by state governments to so-called "state universities", and elected representatives seem to lack the interest or political will to reverse the trend.
Shirky goes on to explain why "[n]ews has to be subsidized, and it has to be cheap, and it has to be free". Public universities have historically had these attributes. Well, few states offer free university education to their citizens, but historically the cost has been low enough that cost was not an impediment to most citizens.
As we enter a world in which information and even instruction are relatively easy to come by on-line, universities must confront the same issues faced by the media: the difference between what people want and what people are willing to pay for; the difference between what the state wants and what the state is willing to pay for. Many still believe in the overarching value of a liberal arts component to university education (I do), but who will pay for it, require it, or even encourage it?
Students at my university have questioned the need to take general education courses since before I arrived here. I've always viewed helping them to understand why as part of the education I help to deliver. The state was paying for most of their education because it had an interest in both their economic development and their civic development. As the adage floating around the Twitter world this week says, "If you aren't paying for the product, you are the product." Students weren't our customers; they were our product.
I still mostly believe that. But now that students and parents are paying the majority of the cost of the education, a percentage that rises every year, it's harder for me to convince them of that. Heck, it's harder for me to convince myself of that.
Shirky says other things about newspapers that are plausible when uttered about our universities as well, such as:
News has to be subsidized because society's truth-tellers can't be supported by what their work would fetch on the open market.
News has to be cheap because cheap is where the opportunity is right now.
And news has to be free, because it has to spread.
Perhaps my favorite analog is this sentence, which harkens back to the idea of sports sections attracting automobile dealers to advertise and thus subsidize the local government beat (emphasis added):
Online, though, the economic and technological rationale for bundling weakens -- no monopoly over local advertising, no daily allotment of space to fill, no one-size-fits-all delivery system. Newspapers, as a sheaf of unrelated content glued together with ads, aren't just being threatened with unprofitability, but incoherence.
It is so very easy to convert that statement into one about our public universities. We are certainly being threatened with unprofitability. Are we also being threatened with incoherence?
Like newspapers, the university is rapidly finding itself in need of a new model. Most places are experimenting, but universities are remarkably conservative institutions when it comes to changing themselves. I look at my own institution, whose budget situation calls for major changes. Yet it has been slow, at times unwilling, to change, for a variety of reasons. Universities that depend more heavily on state funding, such as mine, need to adapt even more quickly to the change in funding model. It is perhaps ironic that, unlike our research-focused sister schools, we take the vast majority of our students from in-state, and our graduates are even more likely to remain in the state, to be its citizens and the engines of its economic progress.
Shirky says that we need the new news environment to be chaotic. Is that true of our universities as well?
Everybody's so different / I haven't changed. -- Joe Walsh
A couple of weeks ago, lots of people were commenting on an interview with John Lilly, the former CEO of Mozilla. Lilly talks about how he had to learn to be a manager, having started his career as an engineer and developer. As a computer scientist who has been working as a department head for six years, I have first-hand experience with the task he faced. I also recognize some of the specific lessons he learned.
For example, when you become a manager, even a technical lead on a team of developers, your relationship with your co-workers changes. I blogged about my first encounters with this change within weeks of becoming head. Later, I began to notice that faculty interpreted things I said differently than I intended them. Lilly had a similar experience:
When the founder says, "Why don't you make the button a little more orange?" it somehow has more meaning than an individual contributor saying that. Now, that's not how I meant it. I meant it as an individual contributor.
As you get more and more responsible for the organization and people, you can start projects with throwaway comments.
Several years ago, I experienced this effect in a slightly different form. I asked several faculty members to meet with me about some department issue. I suggested a specific time for the meeting and asked if anyone had conflicts with that time. Someone wrote to say that he had another committee meeting that overlapped the beginning of my proposed meeting. His committee meeting sounded like one of those standing meetings that we all have to attend but don't enjoy, so I asked if he'd be willing to leave it early and come to ours.
He never got back to me but later cc:ed me on a note to the committee in which he told them he would not be able to make that meeting. I saw from the quoted text in the note that the committee discussion could be important to our department. I went down to his office and told him that he should go to his committee meeting, that we could schedule our meeting at a different time. He said that he had interpreted my question as a suggestion to skip the other meeting.
Certainly, I was inartful in asking if he could leave the other meeting early. My inexperience showed through. Even so, I was surprised when he said that he had interpreted my question as an instruction or hint. I remember vividly thinking to myself:
Sometimes, a question is a question.
From that experience, I learned that I needed to be more careful. Sometimes, a question really is just a question, but the listener may not know that. There are all kinds of cases in everyday communication where this is true, harmless and standard speech acts that differ from the words actually used. The risk of a misinterpretation goes up whenever there is an imbalance of power between the participants, whether real or perceived. For better or worse, when I became head, such a gap became part of many conversations I have with my colleagues. I may not have changed, but the circumstances have.
Something similar exists between instructors and students, of course. But as an instructor I've always been more aware of the potential problem. That seems easier, because everyone expects there to be an imbalance of power between instructors and students. I wasn't prepared for this to happen between me and the people I had been working with as equals, some for more than a decade. This was tough on me emotionally for a while.
Once I recognized what was going on, I found ways to reduce the risk of misunderstanding. Sometimes, it's as simple as making the intention of a statement or question explicit. Other times, I have to hold a message back and re-think what I intend and how to achieve that goal. In this sense, being an administrator for a few years has probably helped me to communicate more effectively. (I wonder if my wife thinks this, too!)
Lilly learned similar lessons:
Over time, I discovered a couple of things--both aimed at reducing these effects. Number one, I went out of my way to explain the context in which I was making the comment. I'd say, "Look, I'm going to say some things, but this is not the CEO talking or not the founder talking. This is a guy who likes design or this is a guy who uses products. Please understand it in that context."
The second thing is I started noticing my interactions in the hallway. I'm an engineer by background and a bit of an introvert naturally. When I walk between meetings, I think about things. A lot of times I'll be looking down at my phone or looking down at the floor while I think things through. It's sort of a natural engineer behavior, but it's pretty off-putting if your CEO walks by you and doesn't look up and notice you.
Once we start noticing our interactions and thinking about them from both sides, we can start to do better. Software developers are pretty good at analyzing and solving problems. What comes naturally to us isn't always the most effective way to behave when our role changes. But we can study the world more closely and choose to act differently. Changing habits is always a challenge, but with conscious effort and perseverance, we can do it. If you want to have a larger effect on the world than writing code can do alone, sometimes you must.
For Father's Day, my daughter gave me the most recent book of essays by humorist Dave Barry, subtitled his "Amazing Tales of Adulthood". She thought I'd especially enjoy the chapter on dance recitals, which reports -- with only the slightest exaggeration, I assure you -- an experience shared by nearly every father of daughters these days. He nailed it, right down to not finding your daughter on-stage until her song is ending.
However, his chapter on modern technology expresses a serious concern that most readers of this blog will appreciate:
... it bothers me that I depend on so many things that operate on principles I do not remotely understand, and which might not even be real.
He is talking about all of modern technology, including his microwave oven, but when he lists tools that baffle him, digital technology leads the way:
I also don't know how my cell phone works, or my TV, or my computer, or my iPod, or my GPS, or my camera that puts nineteen thousand pictures inside a tiny piece of plastic, which is obviously NOT PHYSICALLY POSSIBLE, but there it is.
He knows this is "digital" technology, because...
At some point ... all media -- photographs, TV, movies, music, oven thermometers, pornography, doorbells, etc. -- became "digital". If you ask a technical expert what this means, he or she will answer that the information is, quote, "broken down into ones and zeros." Which sounds good, doesn't it? Ones and zeros! Those are digits, all right!
The problem is, he has never seen the ones and zeros. No matter how closely he looks at his high-def digital television, he can't see any ones or zeros. He goes on to hypothesize that no one really understands digital technology, that this "digital" thing is just a story to dupe users, and that such technology is a serious potential threat to humanity.
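For readers who would like to show Dave his ones and zeros, a short Python sketch makes the digits visible. This is a toy illustration, not how any particular gadget actually stores its data: every character and every pixel value is ultimately a number, and every number is a pattern of bits.

```python
# Every character is stored as a number, and every number is a pattern of bits.
for ch in "Dave":
    code = ord(ch)               # the character's numeric code
    bits = format(code, "08b")   # the same number, written as ones and zeros
    print(f"{ch!r} -> {code:3d} -> {bits}")

# A pixel in a "digital" photo works the same way: three numbers
# (red, green, blue), each stored in eight bits.
pixel = (255, 128, 0)  # a made-up orange-ish pixel value
print("".join(format(value, "08b") for value in pixel))
```

Running it prints lines like `'D' ->  68 -> 01000100` -- the very ones and zeros Dave could never find on his television screen.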
Of course, Dave is just having fun, but from 10,000 feet, he is right. Take a random sample of 100 people from this planet, and you'd be lucky to find one person who could explain how an iPod or digital camera works. I know that we don't all have to understand all the details of all our tools; otherwise we would all be in trouble. But this has become a universal, omnipresent phenomenon. Digital computations are the technology of our time. Dave could have listed even more tools that use digital technology, had he wanted (or known). If you want to talk about threats to humanity, let's start talking planes, trains, and automobiles.
For so many people, every phase of life depends on or is dominated by digital computation. Shouldn't people have some inkling of how all this stuff works? This is practical knowledge, much as knowing a little physics is useful for moving around the world. Understanding digital technology can make people better users of their tools and help them dream up improvements.
But to me, this is also humanistic knowledge. Digital technology is a towering intellectual and engineering achievement, of this or any era. It empowers us, but it also stands as a testament to humanity's potential. It reflects us.
Dave talked about a threat lying in wait, and there is one here, though not the one he mentions. We need people who understand digital technology because we need people to create it. Contrary to his personal hypothesis, this stuff isn't sent from outer space to the Chinese to be packaged for sale in America.
After reading this piece, I had two thoughts.
First, I think we could do a lot for Dave's peace of mind if we simply enrolled him in a media computation course! He is more than welcome to attend our next offering here. I'll even find a way for him to take the course for free.
Second, perhaps we could get Dave to do a public service announcement for studying computer science and digital technology. He's a funny guy and might be able to convince a young person to become the next Alan Kay or Fran Allen. He is also the perfect age to appeal to America's legislators and school board members. Perhaps he could convince them to include digital technology as a fundamental part of general K-12 education.
I am pretty sure that I will need your help to make this happen. I am no more capable of convincing Dave Barry to do this than of producing a litter of puppies. (*)
(*) Analogy stolen shamelessly from the same chapter.
Last week I ran into this quote attributed to Einstein:
The formulation of a problem is often more essential than its solution, which may be merely a matter of mathematical or experimental skill.
I don't like "merely" here, because it diminishes the value of "mere" mathematical or experimental skills. I understand why problem formulation is so important, usually more important than problem solution. For one thing, it is hard to solve a problem you have not formulated yet. For another, problem formulation is often more difficult than solving the problem. As a result, as skills go, problem formulation is the scarcer resource.
Maybe if you are Einstein, you can get by without the skills you need merely to solve the problem. If you can discover ideas like relativity, you can probably find grad students to turn the crank. But the rest of us usually need those skills.
(I'm even skeptical about Einstein. I've heard stories, perhaps apocryphal, perhaps exaggerated, about Einstein's lack of mathematical skills. But even if there is a grain of truth in the stories, I suspect that it is all relative. There is a big difference between the math skills one needs to work out the theory of relativity and the math skills one needs to do the kind of work most scientists do day-to-day.)
One important reminder we get from the quote is that there are two skills, problem formulation and problem solution. They are different. We should learn how to do both. They require different kinds of preparation. One can learn many problem-solving skills through practice, practice, practice: repetition trains our minds. Problem formulation skills generally require more reflection and deliberate thought. Lots of experience helps, of course, but it's harder to get enough practice to learn how to tame problems through repetition alone.
For most of us and most domains, mastering problem-solving skills is a useful, if not necessary, precursor to developing problem formulation skills. While developing our problem-solving skills, we get a lot of repetition with the syntax and semantics of the domain. This volume of experience prepares our brain to work in the domain. It also gives our brains -- engines capable of remarkable feats of association -- the raw material to begin making connections, despite our own inattention to the bigger picture. Our brains are doing a lot of work while we are "just" solving problems.
Then we need to take that raw material and work on learning how to formulate problems, deliberately. In that, I agree with Einstein.
Yesterday, Michael Feathers tweeted:
If a code base is more complicated than a car, shouldn't it have a maintenance plan too?
I asked him, "Refactoring every 3K miles?", and he joked back, "Well, maybe every 3K lines." I had thought about using LOC in my tweet but decided to stick with the auto analogy. Whatever its weaknesses, LOC seems to be the first place programmers' minds go when we talk about the volume of code. (Though, as much traveling as Feathers and other big-time consultants and speakers do in a year, maybe 3000 miles is the right magnitude after all.)
Feathers makes a serious point, even if he didn't mean it too seriously. When we buy a car, we implicitly accept the notion of scheduled maintenance: change the oil every so often; have the engine tuned up every so often; replace the battery and rotate the tires every so often. We accept it because we know that it makes our car run better and last longer.
When we buy software, we want it to run forever, as is. Or the company that sells it to us wants us to run it as-is forever -- or buy a new version. Imagine having to buy a new car as soon as your current car started coughing, wheezing, or seizing up on dirty oil.
I mentioned refactoring in my joke because it is part of the maintenance plan built into XP and used in so many agile approaches to software development. XP discourages long-form maintenance in the form of refactoring every few months or even weeks. Instead, it encourages a sort of continuous maintenance, in a tight test-code-refactor cycle. It's kind of like checking your car's oil, fluids, tires, etc., after every use.
When we do that to a car, it's usually because the car is in bad shape, breaking down as we try to extend its life. But continuous refactoring of a code base is usually a sign of robust health. It means that we know our code is in good shape and ready for use -- and extension. Teams that maintain their code on automobile-like time scales are usually sitting on a time bomb. Users may be able to use the code, but the programmers dare not touch its internals.
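The tight test-code-refactor cycle is easy to sketch in code. Here is a minimal, hypothetical example in Python -- the pricing function, its discount rule, and all the names are mine, not anything from XP's literature: write a quick working version, pin down its behavior, then refactor right away while the checks keep you honest.

```python
# A tiny illustration of the tight test-code-refactor cycle.
# The pricing example and its made-up discount rule are mine.

def total_price(items):
    # First pass: make it work quickly, even if the code is plain.
    total = 0
    for _name, price, qty in items:
        total += price * qty
    if total > 100:
        total -= 10  # flat discount on large orders (made-up rule)
    return total

# With the behavior pinned down, refactor immediately -- here,
# extracting the discount rule into its own named function --
# rather than letting small messes accumulate for months.

def apply_bulk_discount(total):
    return total - 10 if total > 100 else total

def total_price_refactored(items):
    subtotal = sum(price * qty for _name, price, qty in items)
    return apply_bulk_discount(subtotal)

cart = [("widget", 30, 2), ("gadget", 25, 2)]
assert total_price(cart) == total_price_refactored(cart)  # behavior preserved
```

The point is not the pricing logic but the rhythm: each small refactoring happens while the code is fresh and the tests are green, which is what keeps a code base from ever needing the major overhaul described below.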
My other thought as I tweeted was about our inability, or perhaps unwillingness, to make this idea come alive for the students in our university CS programs.
It is an inability because it is so hard to create ways for students to live with any body of code longer than a semester or two, and time is a necessary ingredient in facing the need for maintenance. Of all my project courses, compilers seems the most frequent teacher of this lesson. A semester may not be long, but a compiler is complex enough, and a non-trivial language spec hard enough to understand, to accelerate the sense of age and deterioration.
It is perhaps an unwillingness because most every CS faculty I know makes very little effort to change courses and degree programs to make this lesson approachable. The good news is that making changes to bring this idea within our students' learning horizons also brings a lot of other important software development lessons within their horizons.
I am a huge tennis fan. This morning, I watched the men's final at Wimbledon and, as much as I admire Roger Federer and Rafael Nadal for their games and attitudes, I really enjoyed seeing Novak Djokovic break through for his first title at the All-England Club. Djokovic has been the #3 ranked player in the world for the last four years, but in 2011 he has dominated, winning 48 of 49 matches and two Grand Slam titles.
After the match, commentator and former Wimbledon champion John McEnroe asked Djokovic what he had changed about his game to become number one. What was different between this year and last? Djokovic shrugged his shoulders, almost imperceptibly, and gave an important answer:
A few percent improvement in several areas of my game.
The difference for him was not an addition to his repertoire, a brand new skill he could brandish against Nadal or Federer. It was a few percentage points' improvement in his serve, in his return, in his volley, and in his ability to concentrate. Keep in mind that he was already the best returner of service in the world and strong enough in the other elements of his game to compete with and occasionally defeat two of the greatest players in history.
That was not enough. So he went home and got a little better in several parts of his game.
Indeed, the thing that stood out to me from his win this morning against Rafa was the steadiness of his baseline play. His ground strokes were flat and powerful, as they long have been, but this time he simply hit more balls back. He made fewer errors in the most basic part of the game, striking the ball, which put Nadal under constant pressure to do the same. Instead of making mistakes, Djokovic gave his opponent more opportunities to make mistakes. This must have seemed especially strange to Nadal, because this is one of the ways in which he has dominated the tennis world for the last few years.
I think Djokovic's answer is so important because it reminds us that learning and improving our skills are often about little things. We usually recognize that getting better requires working hard, but I think we sometimes romanticize getting better as being about qualitative changes in our skill set. "Learn a new language, or a new paradigm, and change how you see the world." But as we get better this becomes harder and harder to do. Is there any one new skill that will push Federer, Nadal, or Djokovic past his challengers? They have been playing and learning and excelling for two decades each; there aren't many surprises left. At such a high level of performance, the difference really does come down to a few percent improvement in each area of the game.
Even for us mortals, whether playing tennis or writing computer programs, the real challenge -- and the hardest work -- often lies in making incremental improvements to our skills. In practicing the cross-court volley or the Extract Class refactoring thousands and thousands of times. In learning to concentrate a little more consistently when we tire by trying to concentrate a little more consistently over and over.
As Nadal said in his own post-game interview, the game is pretty simple. The challenge is to work hard and learn how to play it better.
Congratulations to Novak Djokovic for his hard work at getting a few percent better in several areas of his game. He has earned the accolade of being, for now, the best tennis player in the world.
Yesterday, I read Esther Derby's recent post, Promoting Double Loop Learning in Retrospectives, which discusses ways to improve the value of our project retrospectives. Many people who don't do project retrospectives will still find Derby's article useful, because it's really about examining how we think and expanding possibilities.
One of the questions she uses to jump start deeper reflection is:
What would have to be true for [a particular practice] to work?
This is indeed a good question to ask when we are trying to make qualitative changes in our workplaces and organizations, for the reasons Derby explains. But it is also useful more generally as a communication tool.
I have a bad personal habit. When someone says something that doesn't immediately make sense to me, my first thought is sometimes, "That doesn't make sense." (Notice the two words I dropped...) Even worse, I sometimes say it out loud. That doesn't usually go over very well with the person I'm talking to.
Sometime back in the '90s, I read in a book on personal communication about a technique for overcoming this disrespectful tendency, which reflects a default attitude. The technique is to train yourself to think a different first thought:
What would have to be true in order for that statement to be true?
Rather than assume that what the person says is false, assume that it is true and figure out how it could be true. This accords my partner the respect he or she deserves and causes me to think about the world outside my own point of view. What I found in practice, whether with my wife or with a professional colleague, was that what they had said was true -- from their perspective. Sometimes we were starting from different sets of assumptions. Sometimes we perceived the world differently. Sometimes I was wrong! By pausing before reacting and going on the defensive (or, worse, the offensive), I found that I was saving myself from looking silly, rash, or mean.
And yes, sometimes, my partner was wrong. But now my focus was not on proving him wrong but on addressing the underlying cause of his misconception. That led to a very different sort of conversation.
So, this technique is not an exercise in fantasy. It is an exercise in more accurate perception. Sometimes, what would have to be true in the world actually is true. I just hadn't noticed. In other cases, what would have to be true in the world is how the other person perceives the world. This is an immensely useful thing to know, and it helps me to respond both more respectfully and more effectively. Rather than try to prove the statement false in some clinical way, I am better served by responding to whichever of those two situations actually holds.
I am still not very good at this, and occasionally I slip back into old habits. But the technique has helped me to be a better husband as well as a better colleague, department head, and teacher.
Speaking as a teacher: It is simply amazing how different interactions with students can be when, after students say something that seems to indicate they just don't get it, I ask myself, "What would have to be true in order for that statement to be true?" I have learned a lot about student misconceptions and about the inaccuracy of the signals I send students in my lectures and conversations just by stepping back and thinking, "What would have to be true..."
Sometimes, our imaginations are too small for our own good, and we need a little boost to see the world as it really is. This technique gives us one.