May 30, 2007 4:27 PM

But Raise Your Hand First

Weinberg on assessing the value of a critic's comments:

Here's an excellent test to perform before listening to any critic, inside or outside:

What have they written that shows they have the credentials to justify the worth of their criticism?

This test excludes most high-school and college teachers of English, most of your friends, lots of editors and agents, and your mother.

It also excludes your [inner] four-year-old, who's never written anything.

Computer science students should show due respect to their professors (please!), but they might take this advice to heart when deciding how deeply to take criticism of their writing -- their programs. Your goal in a course is to learn something, and the professor's job is to help you. But ultimately you are responsible for what you learn, and it's important to realize that the prof's evaluation is just one -- often very good -- source. Listen, try to decide what is most valuable, learn, and move on. You'll start to develop your own tastes and skills that are independent of the instructor's criticism.

Weinberg's advice is more specific. If the critic has never written anything that justifies the worth of their criticism, then the criticism may not be all that valuable. I've written before about the relevance of a CS Ph.D. to teaching software development. Most CS professors have written a fair amount of code in their day, and some have substantial industry experience. A few continue to program whenever they can. But frankly some CS profs don't write much code in their current jobs, and a small group have never written any substantial program. As sad as it is for me to say, those are the folks whose criticism you sometimes simply have to take with a grain of salt when you are learning from them.

The problem for students is that they are not ideally situated to decide whose criticism is worth acting on. Looking for evidence is a good first step. Students are also not ideally situated to evaluate the quality of the evidence, so some caution is in order.

Weinberg's advice reminds me of something Brian Marick said, on a panel at the OOPSLA'05 Educators' Symposium. He suggested that no one be allowed to teach university computer science (or was it software development?) unless said person had been a significant contributor to an open-source software project. I think his motivation is similar to what Weinberg suggests, only broader. Not only should we consider someone's experience when assessing the value of that person's criticism, we should also consider the person's experience when assessing the value of what they are able to teach us.

Of course, you should temper this advice as well with a little caution. Even when you don't have handy evidence, that doesn't mean the evidence doesn't exist. Even if no evidence exists, that doesn't mean you have nothing to learn from the person. The most productive learners find ways to learn whatever their circumstances. Don't close the door on a chance to learn just because of some good advice.

So, I've managed to bring earlier threads together involving Brian Marick and Gerald Weinberg, with a passing reference to Elvis Costello to boot. That will have to do for closure. (It may also boost Brian's ego a bit.)


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 30, 2007 7:01 AM

Weinberg on Writing

Whenever asked to recommend "must read" books, especially on computing, I always end up listing at least one book by Gerald Weinberg -- usually his The Psychology of Computer Programming. He has written a number of other classic books, on topics ranging from problem solving and consulting to teamwork and leadership. Now in a new stage of his career, Weinberg has moved from technical consulting to more general writing, including science fiction novels. He's also blogging, both on writing and on consulting.

I feel a connection to his blogs these days because they match a theme in my own reading and writing lately: telling stories as a way to teach. Even when Weinberg was writing his great non-fiction books -- The Psychology of Computer Programming, of course, but also An Introduction to General Systems Thinking, The Secrets of Consulting, and Becoming a Technical Leader -- he was telling stories. He claims that he didn't realize this right away (emphasis added):

I'd like to say that I immediately recognized that reading fiction is another kind of simulation, but I'm not that insightful. Only gradually did I come to realize that a great deal of the popularity of my non-fiction books (and the books of a few others, like Tom DeMarco) is in the stories. They make for lighter reading, and some people object to them, but overall, those of us who use stories manage to communicate lots of hard stuff. Why? Because a good story takes the reader into a trance where s/he can "experience" events just as they can in a teaching simulation.

One of my favorite undergraduate textbooks was DeMarco's Structured Analysis and System Specification, and one of the reasons I liked it so was that it was a great book to read: no wasted words, no flashy graphics, just a well told technical story with simple, incisive drawings. Like Weinberg, I'm not sure I appreciated why I liked the book so much then, but when I kept wanting to re-read it in later years I knew that there was something different going on.

But "just" telling stories is different from teaching in an important way. Fiction and creative writers are usually told not to "have a point". Having one generally leads to stories that seem trite or forced. A story with a point can feel like a bludgeon to the reader's sensibility. A point can come out of a good story -- indeed I think that this is unavoidable with the best stories and the best story-tellers -- but it should rarely be put in.

Teachers differ from other storytellers in this regard. We tell stories precisely because we have a point. Usually, there is something specific we want others to learn!

(This isn't always true. For example, when I teach upper-division project courses, I want students to learn how to design big systems. In those courses, much of what students learn isn't specific content but habits of thought. For this purpose, "stories without a point" are important, because they leave the learner more freedom to make sense of their own experiences.)

But most of the time, teachers do have a point to make. How should the teacher as story-teller deal with this difference? Weinberg faces it, too, because even with his fiction, he is writing to teach. Here is what he says:

"If you want to send a message, go to Western Union." ...

It was good advice for novelists, script writers, children's writers, and actors, but not for me. My whole purpose in writing is to send messages.... I would have to take this advice as a caution, rather than a prohibition. I would have to make my messages interesting, embedding them in compelling incidents that would be worth reading even if you didn't care about the messages they contained.

For teachers, I think that the key to the effective story is context: placing the point to be learned into a web of ideas that the student understands. A good story helps the student see why the idea matters and why the student should change how she thinks or behaves. In effect, the teacher plays the role of a motivational speaker, but not the cheerleading, rah-rah sort. Students know when they are being manipulated. They appreciate authenticity even in their stories.

Weinberg's blogs make for light but valuable reading. Having learned so much from his books over the years, I enjoy following his thinking in this conversational medium, and I find myself still learning.

But, in the end, why tell stories at all? I believe the Hopi deserve the last word:

"The one who tells the stories rules the world."

Well, at least the teachers who tell stories have a better chance of reaching their students, and maybe of improving their student evaluations.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

May 27, 2007 4:54 PM

Waiting on the World to Change

me and all my friends
we're all misunderstood
they say we stand for nothing and
there's no way we ever could
now we see everything that's going wrong
with the world and those who lead it
we just feel like we don't have the means
to rise above and beat it


so we keep waiting
waiting on the world to change
-- John Mayer

I'm glad to know that Mr. Mayer and his friends care about the world they live in, but I'd like to suggest a strategy other than waiting.

The world changes when people change it.

So...

If you are a software developer waiting for a better working environment, where you feel confident moving forward and have fun delivering value to your customer: Write a test. Do the simplest thing that will make it pass. Refactor. Then do it again.

If you are an instructor waiting for a better classroom environment, where your students are engaged and you have fun working with them on the material they are learning: Pick one session of one class you teach. Eliminate the slides on any topic. Replace them with an interactive exercise. Try it in class, and make the exercise better based on the feedback.

If you are like any of us waiting for better health, for a life in which you wake up ready for the day and feel better throughout: Go for a walk. If you are already walking, throw in a block of jogging. If you are already jogging, throw in a little pick-up where you push your limits for 50 or 100m.

This isn't magic. The world will probably push back. Your tools may not support you; students may resist leaving their cocoon; your body will probably be a little sore tomorrow morning. Changes aren't usually carefree. So stick with it through the initial resistance. That's the closest thing to magic there is.

Look for other people who are trying to change their worlds. Talk to them. You'll learn from them, and they'll learn from you.

Make that change.


Posted by Eugene Wallingford | Permalink | Categories: General

May 25, 2007 2:11 PM

Read My Blog

If you don't, how will you know how clever I am?

Recently I wrote about the availability heuristic and how it may affect student behavior. Schneier tells us that this is often a useful rule of thumb, and it has served us well evolutionarily. But our changing world may be eroding its value, perhaps even making it dangerous in some situations:

But in modern society, we get a lot of sensory input from the media. That screws up availability, vividness, and salience, and means that heuristics that are based on our senses start to fail. When people were living in primitive tribes, if the idea of getting eaten by a saber-toothed tiger was more available than the idea of getting trampled by a mammoth, it was reasonable to believe that--for the people in the particular place they happened to be living--it was more likely they'd get eaten by a saber-toothed tiger than get trampled by a mammoth. But now that we get our information from television, newspapers, and the Internet, that's not necessarily the case. What we read about, what becomes vivid to us, might be something rare and spectacular. It might be something fictional: a movie or a television show. It might be a marketing message, either commercial or political. And remember, visual media are more vivid than print media. The availability heuristic is less reliable, because the vivid memories we're drawing upon aren't relevant to our real situation.

I sometimes wonder if my omnivorous blogging and promiscuous referencing of many different sources create a situation in which my readers attribute brilliance to me that rightly belongs to my sources.

A little part of my ego thinks that this would be okay. (You didn't read that here.)

However, if you finish the Schneier paragraph I quoted above, you will see that just the opposite is probably true:

And even worse, people tend not to remember where they heard something--they just remember the content. So even if, at the time they're exposed to a message they don't find the source credible, eventually their memory of the source of the information degrades and they're just left with the message itself.

So you'll remember the ideas I toss out, but you'll eventually forget that you read them here. And so you will not be able to blame me if it turns out to be nonsense...

Maybe you'd better not read my blog after all.


Posted by Eugene Wallingford | Permalink | Categories: Personal

May 24, 2007 7:48 AM

Formatting Text for Readability

Technology Changes, Humans Don't

Gaping Void ran the cartoon at the right last weekend, which is interesting, given that several of my recent entries have dealt with a similar theme. Technology may change, but humans -- at least our hard-wiring -- don't. We should take into account how humans operate when we work with them, whether in security, software development, or teaching.

In another coincidence, I recently came across a very cool paper, Visual-Syntactic Text Formatting: A New Method to Enhance Online Reading. We programmers spend an awful lot of time talking about indenting source code: how to do it, why to do it, tools for doing it, and so on. Languages such as Python require a particular sort of indentation. Languages such as Scheme and Common Lisp depend greatly on indentation for readability; their programming communities have developed standards that nearly everyone follows, and by following them programmers can understand code whose preponderance of parentheses would otherwise blind them.
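
To see what the convention buys us, consider a small Scheme procedure -- a made-up example, not from any particular course -- written first with no formatting at all and then indented the way the community expects:

;; No formatting: correct Scheme, but hard on the eyes.
(define (insert x lst) (cond ((null? lst) (list x)) ((< x (car
lst)) (cons x lst)) (else (cons (car lst) (insert x (cdr lst))))))

;; Conventional indentation: the structure is visible at a glance.
(define (insert x lst)
  (cond ((null? lst)     (list x))
        ((< x (car lst)) (cons x lst))
        (else            (cons (car lst)
                               (insert x (cdr lst))))))

Both versions mean exactly the same thing to a Scheme system; only the human reader benefits from the second.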

But the Walker paper is the first time I have ever read about applying this idea to text. Here is an example. This:

When in the Course of human events, it becomes necessary
for one people to dissolve the political bands which have
connected them with another, and to assume among the powers
of the earth, the separate and equal station to which the
Laws of Nature...

might become:

When in the Course
        of human events,
    it becomes necessary
        for one people
          to dissolve the political bands
            which have
              connected them with another,
          and to assume
              among the powers
                of the earth,
            the separate and equal station
              to which
                the Laws of Nature
...

Cognitively, this may make great sense, if our minds can process and understand text better when it is presented structurally. The way we present text today isn't much different in format than when we first started to write thousands of years ago, and perhaps it's time for a change. We shouldn't feel compelled to stay with a technology for purely historical reasons when the world and our understanding of it have advanced. (Like the world of accounting has with double-entry bookkeeping.)

For those of you who are still leery of such a change, whether for historical reasons, aesthetic reasons, or other personal reasons... First of all, you are in good company. I was once at a small session with Kurt Vonnegut, and he spoke eloquently of how the book as we know it now would never disappear, because there was nothing like the feel of paper on your fingertips, the smell of a book when you open its fresh pages for the first time. I did not believe him then, and I don't think even he believed that deep in his heart; it is nothing more than projecting our own experiences and preferences onto a future that will surely change. But I know just how he felt, and I see my daughters' generation already experiencing the world in a much richer, technology-mediated way than Vonnegut or I have.

Second, don't worry. Even if what Walker and his colleagues describe becomes a standard, I expect a structured presentation to be just one view of the document out of many possible views. As an old fogey, I might prefer to read my text in the old paragraph-structured way, but I can imagine that having a syntactically-structured view would make it much easier to scan a document and find a passage of interest. Once I find that passage, I could toggle back to a paragraph-structured view and read to my heart's content. And who knows? I might come to prefer reading text that is structured differently, if only I have the chance.

Such toggling between views is possible because of... computer science! The same theory and techniques that make it possible to do this at all make it possible to do it however you like. Indeed, I'll be teaching many of the necessary techniques this fall, as part of building the front end of a compiler. The beauty of this science is that we are no longer limited by someone else's preferences, or by someone else's technology. As I often mention here, this is one of the great joys of being a computer scientist: you can create your own tools.
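
Here is a rough sketch of the idea in Scheme, with invented names and a toy document rather than anything from a real front end. Once the text has been parsed into a structure, each "view" is just a different rendering procedure run over the same tree:

;; A fragment of a document, already parsed into a tree of phrases.
(define doc
  '(sentence "When in the Course of human events,"
             (clause "it becomes necessary"
                     (clause "for one people"
                             "to dissolve the political bands"))))

;; View 1: flatten the tree back into ordinary running text.
(define (render-paragraph node)
  (if (string? node)
      (begin (display node) (display " "))
      (for-each render-paragraph (cdr node))))

;; View 2: one phrase per line, indented by its depth in the tree.
(define (render-structured node depth)
  (if (string? node)
      (begin (display (make-string (* 2 depth) #\space))
             (display node)
             (newline))
      (for-each (lambda (child) (render-structured child (+ depth 1)))
                (cdr node))))

Calling (render-paragraph doc) or (render-structured doc 0) prints the same words in two different shapes; toggling between views amounts to choosing which renderer to run.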

We can now see this technology making its way out to the general public. I can see the MySpace generation switching to new ways of reading text immediately. If it makes us better readers, and more prolific readers, then we will have a new lens on an old medium. Computer science is a medium-maker.

Of course, this particular project is just a proposal and in the early stages of research. Whether it is worth pursuing in its current form, or at all, depends on further study. But I'm glad someone is studying this. The idea questions assumptions and historical accident, and it uses what we have learned from cognitive science and medical science to suggest a new way to do something fundamental. As I said, very cool.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

May 23, 2007 8:03 AM

The Strange and the Familiar

Artists and other creative types often define their artistic endeavor obliquely as "to make the familiar strange, and the strange familiar". I've seen this phrase attributed to the German poet Novalis, who coined it as "the essence of romanticism". You may have seen me use half of the phrase in my entry on a recent talk by Roy Behrens.

Recently, I began to wonder... Is this what teaching is!?

In a sense, the second half of the definition is indeed one of the teacher's goals: to help students understand ideas and use techniques that are, at the beginning of a course, new or poorly understood. The strange becomes familiar when it becomes a part of how we understand and think about our worlds.

But I think the first part of the definition -- to make the familiar strange -- is important, too, sometimes more important. Often the greatest learning occurs when we confront an idea that we think we understand, which seems to hold nothing new for us, which seems almost old, and are led beneath the surface to a wrinkle we never knew existed. Or when we are led to where the idea intersects with another in a way we never considered before and find that the old idea opens new doors. We find that our old understanding was incomplete at best and wrong at worst.

Many of the courses I am fortunate enough to teach are replete with opportunities both to make the strange familiar and to make the familiar strange. Programming Languages and Algorithms are two. So are Object-Oriented Programming and Artificial Intelligence. Frankly, so, too, is any course that we approach with open hearts and minds.

Teachers do what artists do. They just work in a different medium.

(A little googling finds that Alistair Cockburn wrote on this phrase last year. There is so much to read and know!)


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 22, 2007 3:55 PM

Someone Competent to Write Code

Students sometimes say to me:

I don't have to be good at <fill in the blank>. I'll be working on a team.

The implication is that the student can be good enough at some one thing and thus not have to get good at some other part of the discipline. Usually the skill they want to depend on is a softer skill, such as communication or analysis. The skill they want to avoid mastering is something they find harder, usually a technical skill and -- all too often -- programming.

First, let me stipulate that not everyone must master every part of computer science or software development. But this attitude usually makes some big assumptions about what a company should be willing to entrust to such a person -- systems analysis, or even "just" interacting with clients. I always tell students that many people probably won't want them on their teams if they aren't at least competent at all phases of the job. You don't have to be great at <fill in the blank>, or at everything, but you do have to be competent.

I was reminded of this idea, which I've talked about here at least once before, when I ran across Brian Marick quoting an unnamed programmer:

What should teams do with the time they're not spending going too fast? They should invest in one of the four values I want to talk about: skill. As someone who wants to remain anonymous said to me:
I've also been tired for years of software people who seem embarrassed to admit that, at some point in the proceedings, someone competent has to write some damn code.

He's a programmer.

This doesn't preclude specialization. Maybe each team needs only one someone competent to write some damn code. But most programmers who have been in the trenches are skeptical of working with teammates who don't understand what it's like to deliver code. Those teammates can make promises to customers that can't be met, and "design system architectures" that are goofy at best and unimplementable at worst.

One of the things I like about the agile methods is that they try to keep all of the phases of software development in focus at once, on roughly an even keel. That's not how some people paint Agile when they talk it down, but it is one of the critical features I've always appreciated. And I like how everyone is supposed to be able to contribute in all phases -- not as masters, necessarily, but as competent members of the team.

This is one of the ideas that Brian addresses in the linked-to article, which talks about the challenge facing proponents of so-called agile software development in an age when execution is more important than adoption. As always, Brian writes a compelling story. Read it. And if you aren't already following his blog in its new location, you should be.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 21, 2007 4:45 PM

More on Metaphors for Refactoring

Not enough gets said about the importance of abandoning crap.

-- Ira Glass, at Presentation Zen

Keith Ray wrote a couple of entries last month on refactoring. The first used the metaphor of technical debt and bankruptcy. The second used the simile of refactoring as like steering a car.

In my experience and that of others I've read, the technical debt metaphor works well with businesspeople. It fits well into their world view and uses the language that they use to understand their business situation. But as I wrote earlier, I don't find that this works all that well with students. They don't live in the same linguistic and experiential framework as businesspeople, and the way people typically perceive risk biases them against being persuaded.

A few years ago Owen Astrachan, Robert Duvall, and I wrote a paper called Bringing Extreme Programming to the Classroom that originally appeared at XP Universe 2001 and was revised for inclusion in Extreme Programming Perspectives. In that paper, we described some of the micro-examples we used at that time to introduce refactoring to novice students. My experience up to then and since has been that students get the idea and love the "a-a-a-ahhh" that comes from a well-timed refactor, but that most students do not adopt the new practice as a matter of course. When they get into the heat of a large project, they either try to design everything up front (and usually guess wrong, of course) or figure they can always make do with whatever design they currently have, whether designed or accreted.
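
For readers who haven't seen one of those micro-examples, here is the flavor of the thing -- a sketch in Scheme with made-up procedures, not an example taken from the paper. Two procedures share a duplicated summing loop, and the refactoring extracts it:

;; Before: the same "add up a list" loop appears in two places.
(define (average-score scores)
  (/ (let loop ((s scores) (total 0))
       (if (null? s) total (loop (cdr s) (+ total (car s)))))
     (length scores)))

(define (total-points assignments)
  (let loop ((a assignments) (total 0))
    (if (null? a) total (loop (cdr a) (+ total (car a))))))

;; After: extract the duplication into one well-named procedure.
(define (sum lst)
  (let loop ((l lst) (total 0))
    (if (null? l) total (loop (cdr l) (+ total (car l))))))

(define (average-score scores)
  (/ (sum scores) (length scores)))

(define (total-points assignments)
  (sum assignments))

The programs' behavior doesn't change, but the duplication is gone and the next change has only one place to live. That is where the "a-a-a-ahhh" comes from.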

Students simply don't live with most code long enough, even on a semester-long project, to come face-to-face with technical bankruptcy. When they do, they declare it and make do. I think in my compilers course this fall I'm going to try to watch for the first opportunity to help one of the student groups regain control of their project through refactoring, perhaps even as a demonstration for the whole class. Maybe that will work better.

That said, I think that Ray's steering wheel analogy may well work better for students than the debt metaphor. Driving is an integral part of most students' lives, and maybe we can connect to their ongoing narratives in this way. But the metaphor will still have to be backed up with a lot of concrete practice that helps them make recognizable progress. So watching for an opportunity to do some macro-level refactoring is still a good idea.

Another spoke in this wheel is helping students adopt the other agile practices that integrate so nicely with refactoring. As Brian Marick said recently in pointing out values missing from the Agile Manifesto,

Maybe the key driver of discipline is the practice of creating working software -- running, tested features -- at frequent intervals. If you're doing that, you just don't have time for sloppiness. You have to execute well.

But discipline -- especially discipline that conflicts with one's natural, internal, subconscious biases -- is hard to sell. In a semester-long course, by the time students realize they really did need that discipline, time is too short to recover properly. They need time to ingrain new practice as habit. For me as instructor, the key is "small", simple practices that people can do without high discipline, perhaps with some external guidance until their new habit forms. Short iterations are something I can implement as an instructor, and with enough attention paid throughout the term and enough effort exerted at just the right moments, I think I can make some headway.

Of course, as Keith reminds us and as we should always remember when trafficking in metaphor: "Like all analogies, there's a nugget of truth here, but don't take the analogy too far."


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 20, 2007 3:14 PM

Good and Bad Use

Recently I wrote about persuasion and teaching, in light of what we know about how humans perceive and react to risk and new information. But isn't marketing inherently evil, in being motivated by the seller's self-interest and not the buyer's, and thus incompatible with a teacher/student relationship? No.

First of all, we can use an idea associated with a "bad" use to achieve something good. Brian Marick points out that the motivating forces of XP are built in large part on peer pressure:

Some of XP's practices help with discipline. Pair programming turns what could be a solitary vice into a social act: you and your pair have to look at each other and acknowledge that you're about to cheat. Peer pressure comes into play, as it does because of collective code ownership. Someone will notice the missing tests someday, and they might know it was your fault.

This isn't unusual. A lot of social organizations provide a form of positive peer pressure to help individuals become better, and to create a group that adds value to the world. Alcoholics Anonymous is an example for people tempted to do something they know will hurt them; groups of runners training for a marathon rely on one another for the push they need to train on days they don't feel like it and to exert the extra effort they need to improve. Peer pressure isn't a bad thing; it just depends on whom you choose for your peers.

Returning to the marketing world, reader Kevin Greer sent me a short story about something he learned from a salesman trained by IBM:

The best sales guy that I ever worked with once told me that when he received sales training from IBM, he was told to make sure that he always repeated the key points six times. I always thought that six times was overkill but I guess IBM must know what they're talking about. A salesman is someone whose income is directly tied to their ability to effectively "educate" their audience.

What we learn here has nothing to do with the salesman's motive; it is about his technique, which is grounded in experience. Teachers have heard this advice in a different adage about how to structure a talk: "Tell them what you are about to tell them. Then tell them. Then tell them what you have just told them." Like Kevin, I felt this was overkill when I first heard it, and I still rarely follow the advice. But I do know from experience how valuable it can be for me, and in the meantime I've learned that how the brain works makes it almost necessary.

While I'm still not a salesman at heart, I've come to see how "selling" an idea in class isn't a bad idea. Steve Pavlina describes what he calls marketing from your conscience. His point ought not seem radical: "marketing can be done much more effectively when it's fully aligned (i.e., congruent) with one's conscience."

Good teaching is not about delusion but about conscience. It is sad that we are all supposed to believe the cynical interpretation of selling, advertising, and marketing. Even in the tech world we certainly have plenty of salient reasons to be cynical. We've all observed near-religious zealotry in promoting a particular programming language, or a programming style, or a development methodology. When we see folks shamelessly shilling the latest silver bullet as a way to boost their consulting income, they stand out in our minds and give us a bad taste for promotion. (Do you recognize this as a form of the availability heuristic?)

But.

I have to overcome my confirmation bias, other heuristic biases that limit my thinking, and my own self-interest in order to get students and customers to gain the knowledge that will help them -- to try new languages, programming styles, and development practices that can improve their lives. What they do with these is up to them, but I have a responsibility to expose them to these ideas, to help them appreciate them, to empower them to make informed choices in their professional (and personal!) lives. I can't control how people will use the new ideas they learn with me, or if they will use them at all, but if I help them also learn how to make informed choices later, then I've done about the best I can do. And not teaching them anything isn't a better alternative.

I became a researcher and scholar because I love knowledge and what it means for people and the world. How could I not want to use my understanding of how people learn and think to help them learn and think better, more satisfyingly?


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

May 17, 2007 11:08 AM

Quick Hits

Over the last couple of months, I've been collecting some good lines and links to the articles that contain them. Some of these may show up someday in something I write, but it seems a shame to have them lie fallow in a text file until then. Besides, my blog often serves as my commonplace book these days. All of these pieces are worth reading for more than the quote.

If the code cannot express itself, then a comment might be acceptable. If the code does not express itself, the code should be fixed.
-- Tim Ottinger, Comments Again

In a concurrent world, imperative is the wrong default!
-- Tim Sweeney of Epic Games, The Next Mainstream Programming Language: A Game Developer's Perspective, an invited talk at ACM POPL'06 (full slides in PDF)

When you are tempted to encode data structure in a variable name (e.g. Hungarian notation), you need to create an object that hides that structure and exposes behavior.
-- Uncle Bob Martin, The Hungarian Abhorrence Principle

Lisp... if you don't like the syntax, write your own.
-- Gordon Weakliem, Hashed Thoughts, on simple syntax for complex data structures

Pairing is a practice that has (IIRC) at least five different benefits. If you can't pair, then you need to find somewhere else in the process to put those benefits.
-- John Roth, on the XP mailing list

Fumbling with the gear is the telltale sign that I'm out of practice with my craft. ... And day by day, the enjoyment of the craft is replaced by the tedium of work.
-- Mike Clark, Practice

So when you get rejected by investors, don't think "we suck," but instead ask "do we suck?" Rejection is a question, not an answer.
-- Paul Graham, The Hacker's Guide to Investors

Practice. Question rejection.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Software Development

May 16, 2007 3:53 PM

All About Stories

Telling Stories, by Garmash

I find it interesting that part of what I learned again from Schneier's psych of risk paper leads to stories. But biases in how we think, such as availability and framing, make the stories we tell important -- if we want them to reach our audience as intended. Then again, perhaps my direction in this series follows from a bias in my own mind: I had been intending to blog about a confluence of stories about stories for a few weeks.

First, I was sitting in on lectures by untenured and adjunct faculty this semester, doing year-end evaluations. In the middle of one lecture, it occurred to me: The story matters. A good lecture is a cross product of story and facts (or data, or knowledge).

What if a lecture is only good as a story? It is like empty calories from sugar. We feel good for a while, but pretty soon we feel an emptiness. Nothing of value remains.

What if a lecture is only good for its facts? I see this often, and probably do this all too often. Good slides, but no story to make the audience care. The result is no interest. We may gain something, but we don't enjoy it much. And Schneier tells us that we might not even gain that much -- without a story that makes the information available to us, we may well forget it.

Soon after that, I ran across Ira Glass: Tips on storytelling at Presentation Zen. Glass says that the basic building blocks of a good story are the anecdote itself, which raises an implicit question, and moments of reflection, which let the listener soak in the meaning.

Soon after that, I was at Iowa State's HCI forum and saw a research poster on the role of narrative in games and other virtual realities. It referred to the Narrative Paradigm of Walter Fisher (unexpected Iowa connection!), which holds that "All meaningful communication is a form of storytelling." And: "People experience and comprehend their lives as a series of ongoing narratives." (emphasis added)

Then, a couple of weeks later, I read the Schneier paper. So maybe I was predisposed to make connections to stories.

Our audiences -- software developers, students, business people -- are all engaged in their own ongoing narratives. How do we connect what we are teaching with one of their narratives? When we communicate Big Ideas, we might even strive to create a new thread for them, a new ongoing narrative that will define parts of their lives. I know that OOP, functional programming, and agile techniques do that for developers and students. The stories we tell help them move in that direction.

Some faculty seem to make those connections, or create new threads. Some "just" lecture; others do something more interactive. These are the faculty whom students want to take for class.

Others don't seem to make the connections. The good news for them -- for me -- is that one can learn how to tell stories better. The first step is simply to be aware that I am telling a story, and so to seek the hook that will make an idea memorable, worthwhile, and valuable. A lot of the motivation lies with the audience, but I have to hold up my end of the bargain.

Not just any story will do. People want a story that helps them do something. I usually know when my story isn't hitting the mark; if not before telling it, then after. The remedy usually lies in one of two directions: finding a story that is about my audience and not (just) me, or making my story real by living it first. Real problems and real solutions mean more than concocted stories about abstract ideas of what might be.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 15, 2007 7:48 PM

Starting Over, Again

Good News: I ran 3 miles this morning.

Bad News: That this is good news.

Good News: I was smart enough not to overdo it.

Bad News: That I had to hold myself back.

Good News: I think I can do it again tomorrow.

I've just gone through a third bout this year of some ailment that kept me under the weather for a prolonged period. The first was the worst, with two weeks of two runs each, a week with a single run, and then two weeks off entirely. Just as I was getting back to normal and working my way back onto the track, I lost a week to the same fuzzy head and persistent fatigue. The third hit me soon after I returned from the workshop at Duke -- air travel often seems to hit me these days -- and kept me off the road for nearly two weeks. It was hard, because the weather has turned to spring and the mornings have been wonderful. But it was also easy, because I was simply too tired.

This morning I was careful not to try to run more. Patience is a virtue when getting started. It is better to have another three miles tomorrow than to be foolhardy today.

Bad News: My total mileage this year before this morning's run, May 15 == 347.6. That may sound like a lot, but last year I reached 350 miles during my long run on March 5. It's also bad news because it means I must have been sick; otherwise I would not have missed so many runs.

Good News: My body is probably fresher at this point in the year than it has been in at least three years. Running lots of miles gets me into shape, but it also wears on the body. I hope that my fresher legs -- and mind -- will be useful as I train this summer. In the short term, my lack of miles and fitness will almost certainly result in a slower half-marathon time at the end of June. In the long term, it may help me be fresher at marathon time, at the end of October.

Oh, and I'm in... So I have a target to shoot for.

But right now, I just want to run.


Posted by Eugene Wallingford | Permalink | Categories: Running

May 14, 2007 7:25 PM

Persuasion, Teaching, and New Practice

I have written three posts recently [ 1 | 2 | 3 ] on various applications of Bruce Schneier's The Psychology of Security to software development and student learning. Here's another quote:

The moral here is that people will be persuaded more by a vivid, personal story than they will by bland statistics and facts, possibly solely due to the fact that they remember vivid arguments better.

I think that this is something that many of us know intuitively from experience, both as learners and as teachers. But the psychological evidence that Schneier cites gives us all the more reason to think carefully about many of the things we do. Consider how it applies to...

... "selling" agile ideas, to encouraging adoption among the developers who make software. The business people who make decisions about the making of software. The students who learn how to make software from us.

... "marketing" CS as a rewarding, worthwhile, challenging major and career path.

... "framing" ideas and practices for students whom we hope to help grow in some discipline.

Each of these audiences responds to vivid examples, but the examples that persuade best will be stories that speak to the particular desires and fears of each. Telling personal stories -- stories from our own experiences -- seems especially persuasive, because such stories feel more real to the listener. The listener probably hasn't had the experience we relate, but real stories have a depth to them that often can't be faked.

I think my best blogging fits this bill.

As noted in one of the earlier entries, prospect theory tells us that "People make very different trade-offs if something is presented as a gain than if something is presented as a loss." I think this suggests that we must frame arguments carefully and explicitly if we hope to maximize our effect. How can we frame stories most beneficially for student learning? Or to maximize the chance that developers adopt, or clients accept, new agile practices?

I put the words "selling", "marketing", and "framing" in scare quotes above for a reason. These are words that often give academics great pause, or even lead them to dismiss an idea as intellectually dishonest. But that attitude seems counter to what we know about how the human brain works. We can use this knowledge positively -- use our newfound powers for good -- or negatively -- for evil. It is our choice.

Schneier began his research with the hope of using what he learned for good, to help humans understand their own behavior better and so to overcome their default behavior. But he soon learned that this isn't all that likely; our risk-intuitive behaviors are automatic, too deeply-ingrained. Instead he hopes to pursue a middle road -- bringing our feelings of security into congruence with the reality of security. (For example, he now admits that palliatives -- measures that make users feel better without actually improving their security -- may be acceptable, if they result in closer congruence between feeling and reality.)

This all reminded me of Kathy Sierra's entry What Marketers Could Do For Teachers. There, she offered the academically incorrect notion that teachers could learn from marketers because they:

  • "know what turns the brain on"
  • "know how to motivate someone almost instantly"
  • "know how to get--and keep--attention"
  • "spend piles of money on improving retention and recall"
  • "know how to manipulate someone's thoughts and feelings about a topic"

"Manipulate someone's thoughts and feelings about a topic." Sounds evil, or at least laden with evil potential. Sierra acknowledges the concern right up front...

[Yes, I'm aware how horrifying this notion sounds -- that we take teachers and make them as evil as marketers? Take a breath. You know that's not what I'm advocating, so keep reading.]

Kathy meant just what Schneier is trying to do: we can learn from marketers not their motivations but their understanding of the human mind, of the human behavior that makes the practices of marketing possible. Our motivations are already good enough.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 11, 2007 9:29 AM

Fish is Fish

Yesterday's post on the time value of study reminded me a bit of Aesop's fable The Ant and the Grasshopper. So perhaps you'll not be surprised to read a post here about a children's book.

cover image of Fish is Fish

While at a workshop a couple of weeks ago, I had the pleasure of visiting my Duke friends and colleagues Robert Duvall and Owen Astrachan. In Owen's office was a copy of the book Fish Is Fish, by well-known children's book author Leo Lionni. Owen and Robert recommended the simple message of this picture book, and you know me, so... When I got back to town, I checked it out.

The book's web site summarizes the book as:

A tadpole and a minnow are underwater friends, but the tadpole grows legs and explores the world beyond the pond and then returns to tell his fish friend about the new creatures he sees. The fish imagines these creatures as bird-fish and people-fish and cow-fish and is eager to join them.

The story probably reaches young children in many ways, but the first impression it left on me was, "You can't imagine what you can't experience." Then I realized that this was both an overstatement of the story and probably wrong, so I now phrase my impression as, "How we imagine the rest of the world is strongly limited by who we are and the world in which we live." And this theme matters to grown-ups as much as children.

Consider the programmer who knows C or Java really well, but only those languages. He is then asked to learn functional programming in, say, Scheme. His instructor describes higher-order procedures and currying, accumulator passing and tail-recursion elimination, continuations and call/cc. The programmer sees all these ideas in his mind's eye as C or Java constructs, strange beasts with legs and fins.
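
To put flesh and fins on one of those beasts, here is accumulator passing in a few lines of Scheme. The example is mine, not the hypothetical instructor's, but it is the sort of thing such a course shows early on:

;; The shape a C or Java programmer expects: a straight recursion that
;; leaves a multiplication pending at every level of the call stack.
(define (factorial n)
  (if (zero? n)
      1
      (* n (factorial (- n 1)))))

;; Accumulator passing: the running product rides along as an argument,
;; so the recursive call is in tail position and a Scheme implementation
;; can run it in constant stack space.
(define (factorial-acc n acc)
  (if (zero? n)
      acc
      (factorial-acc (- n 1) (* n acc))))

;; (factorial-acc 5 1)  =>  120

Seen from inside C or Java, the second version is a while loop wearing a costume; seen from inside Scheme, it is simply how you write the loop.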

Or consider the developer steeped in the virtues and practices of traditional software engineering. She is then asked to "go agile", to use test-first development and refactoring browsers, pair programming and continuous integration, the planning game and YAGNI. The developer is aghast, seeing these practices in her mind's eye from the perspective of traditional software development, horrific beasts with index cards and lack of discipline.

When we encounter ideas that are really new to us, they seem so... foreign. We imagine them in our own jargon, our own programming language, our own development style. They look funny, and feel unnecessary or clunky or uncomfortable or wrong.

But they're just different.

Unlike the little fish in Lionni's story, we can climb out of the water that is our world and on to the beach of a new world. We can step outside of our experiences with C or procedural programming or traditional software engineering and learn to live in a different world. Smalltalk. Scheme. Ruby. Or Erlang, which seems to have a lot of buzz these days. If we are willing to do the work necessary to learn something new, we don't flounder in a foreign land; we make our universe bigger.

Computing fish don't have to be (just) fish.

----

(Ten years ago, I would have used OOP and Java as the strange new world. OO is mainstream now, but -- so sadly -- I'm not sure that real OO isn't still a strange new world to most developers.)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

May 10, 2007 4:04 PM

Internalization as Investment

Maybe I am making too much of this and this. Reader Chris Turner wrote to comment that not internalizing is natural:

The things that I internalize are things I personally use. ... "Use" != "Have to know for a quiz". The reason I internalize them is because it's faster to remember than to look it up. If I hardly ever use it, though, the time spent learning it is wasted time. YAGNI as applied to knowledge, I suppose. ... [E]specially in the software development field, that this is not only acceptable, but encouraged. I simply don't have enough time to learn all I can about every design pattern out there. I have, however, internalized several that have been useful to me in particular.

I agree that one will -- and needs to -- internalize only what one uses on a regular basis. So as an instructor I need to be sure to give students opportunities to use the ideas that I hope for them to internalize. However, I am often disappointed that, even in the face of these opportunities, students seem to choose other activities (or no programming activities at all) and thus miss the chance to internalize an important idea. I guess I'm relying on the notion that students can trust that the ideas I choose for them are worth learning. People who bothered to master a theoretical construct like call/cc were able to create the idea of a continuation-based web server, rather than having to wait to be a third-generation adopter of a technology created, pre-processed, and commoditized by others.
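
For readers who haven't met it, call/cc lets a program capture "the rest of the computation" as a procedure it can invoke later -- the same trick a continuation-based web server uses to suspend a session at a form and resume it when the response arrives. A toy example in Scheme, using the captured continuation only for an early exit:

;; Multiply a list of numbers, but bail out immediately -- abandoning
;; all the pending work -- the moment we see a zero.
(define (product-of nums)
  (call-with-current-continuation
   (lambda (return)                 ; return is the rest of the computation
     (let loop ((ns nums) (acc 1))
       (cond ((null? ns) acc)
             ((zero? (car ns)) (return 0))
             (else (loop (cdr ns) (* acc (car ns)))))))))

;; (product-of '(2 3 4))  =>  24
;; (product-of '(2 0 4))  =>  0, abandoning the rest of the list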

But there's more. As one of my colleagues said yesterday, part of becoming educated is learning how to internalize, through conscious work. Perhaps we need to do a better job helping students to understand that this is one of the goals we have for them.

This leads me to think about another human bias documented in Schneier's psychology article. I think that student behavior is also predicted by time discounting, the practice of valuing a unit of resource today more than the same unit of resource at a future date. In the financial world, this makes great sense, because a dollar today can be invested and earn interest, thus becoming more than $1 at the future date. The choice we face in valuing future resources is in predicting the interest rate.

I think that many of us, especially when we are first learning to learn, underestimate the "interest rate", the rate of return on time spent studying and learning-by-programming. Investment in a course early in the semester, especially when learning a new language or programming style, is worth much more than an equivalent amount of time spent later preparing for assignments and exams. And just as in the financial world, investing more time later often cannot make up for the ground lost to the early, patient investor. A particularly self-aware student once told me that he had used what seemed to be the easy first few weeks of my Programming Languages course to dive deep into Scheme and functional programming on his own. Later, when the course material got tougher, he was ready. Other students weren't so lucky, as they were still getting comfortable with syntax and idiom when they had to confront new content such as lexical addressing.
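
For those who haven't taken the course, lexical addressing is a good example of the content that rewards such an early investment. The notation below is invented for this post, but the exercise is the standard one: replace each variable reference with its coordinates relative to the lambdas that enclose it.

;; A small nested expression:
(lambda (x y)
  (lambda (z)
    (+ x z)))

;; In the body of the inner lambda, x was bound one contour out, in the
;; first parameter position, so its lexical address is (1 0); z was bound
;; zero contours out, so its address is (0 0). Once every reference has
;; been rewritten this way, a compiler front end can throw the names away
;; and work with addresses alone.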

I like the prospect of thinking about the choice between internalizing and relying on external cues in terms of innate biases about risk and uncertainty. This seems to give me a concrete way to state and respond to the issue -- to make changes in how I present my classes that can help students by recognizing their subconscious tendencies.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 10, 2007 11:28 AM

Student Learning as Confronting Risk

Last time, I wrote about some ideas on human psychology from Bruce Schneier's The Psychology of Security paper. Part way through, Schneier jokes:

(If you read enough of these studies, you'll quickly notice two things. One, college students are the most common test subject. And two, any necessary props are most commonly purchased from a college bookstore.)

University psychology researchers are as lazy as university computer scientists, I guess.

Schneier doesn't mention a question that seems obvious to me: Does this common test audience create a bias in the results produced? If college students are not representative, then the results from these studies may not tell us much about other people's behavior! In many ways, college students are not representative of the rest of the world. They are at a nexus in development, different from teenagers at home but typically not yet living under the same set of constraints as people in the working world.

But I'm not too worried. Enough other studies on risk and probabilistic reasoning have been done with adult subjects, and they give similar results. That isn't too surprising, because what we are testing here doesn't involve reflective choices that are conditioned by development or culture, but rather reactions to conditions. These reactions are largely reflexive, under the control of the amygdala, "a very primitive part of the brain that ... sits right above the brainstem.... [It] is responsible for processing base emotions that come from sensory inputs, like anger, avoidance, defensiveness, and fear."

But overthinking Schneier's joke got me to thinking something else: how do these ideas apply to students in their own world, where they score points for grades in a course, make choices about what they need to know, and incur the costs of studying something now or later?

Prospect theory tells us that people prefer sure gains to potential gains, and potential losses to sure losses. I often observe students exhibit two behaviors consistent with these biases:

  • When the choice is between using a concept, technique, or language construct that they already understand and using a new idea that will require some work but which offers potential long-term benefits, most students opt for the sure gain.
  • Like the refactoring example from last time, when the choice is between taking the hit now to clean up a design or program or taking the chance on the current version, with a potentially bigger loss, most students opt for the potential loss.

There may be simpler emotional explanations for these behaviors, but I am thinking about them in a new way in light of Schneier's article.

Like software developers in general, students certainly fall victim to optimism bias. I've always figured that, when students gamble on getting more work done than they reasonably can in a short period, they were reacting to a world of scarce resources, especially time. But now I see that whatever conscious choice they make in this regard is reinforced or even precipitated by a primitive bias toward optimism. Their world of scarce resources is in many ways much like the conditions under which this bias evolved. Further, this bias is probably reinforced by the fact that college CS students are just the sort who have been successful at playing the game this way for many years, and who have avoided the train wrecks that plagued lesser students in high school or in less challenging majors. It must be a shock to have a well-trained optimism bias and then run into something like call/cc. Suddenly, the glass isn't half full after all.

A few posts back, I wrote about reliance on external references. For students, I think that this turns out to be dangerous for another reason, related to another tendency that Schneier documents: the availability heuristic. This refers to the tendency that humans have to "assess the frequency of a class or the probability of an event by the ease with which instances ... can be brought to mind". People are overly influenced by vivid, memorable instances. When they have encountered only a few instances in the first place, I think they are also overly influenced by the instances in that small set. An instructor can go to great lengths to expose students to representative exemplars, but that small set will also have the potential to mislead when relied on too heavily.

Relying on external references to recall syntax is one thing; it will usually work out just fine, even if it is unacceptably slow, especially in the context of an exam. But relying on triggers for more general problem solving can create problems all its own... The most vivid, most memorable, or only instances you've seen will bias your results. I am a strong proponent of reasoning from examples, à la case-based reasoning, but this requires a disciplined use of a reliable "similarity metric". Students often don't have a reliable enough similarity metric in hand, and they often haven't learned yet to use it in a disciplined way. They tend to select the past example that they remember -- or understand!! -- the best, regardless of how well it applies in the current context. The result is often a not-so-good solution and a disillusioned student.

Thinking these thoughts will help me teach better.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 08, 2007 7:54 PM

Risk in Delivering Software

You ever notice
how anyone driving slower than you is an idiot,
and anyone driving faster than you is a maniac?

-- George Carlin

I spent my time flying back from Montreal reading Bruce Schneier's popular article The Psychology of Security and had a wonderful time. Schneier is doing what any good technologist should do: try to understand how the humans who use their systems tick. The paper made me harken back to my days studying AI and reasoning under uncertainty. One of the things we learned then is that humans are not very good at reasoning in the face of uncertainty, and most don't realize just how bad they are. Schneier studies the psychology of risk and probabilistic reasoning with the intention of understanding how and why humans so often misjudge values and trade-offs in his realm of system security. As a software guy, my thoughts turned in different directions. The result will be a series of posts.

To lead off, Schneier describes a couple of different models for how humans deal with risk. Here's the standard story he uses to ground his explanation:

Here's an experiment .... Subjects were divided into two groups. One group was given the choice of these two alternatives:
  • Alternative A: A sure gain of $500.
  • Alternative B: A 50% chance of gaining $1,000.


The other group was given the choice of:

  • Alternative C: A sure loss of $500.
  • Alternative D: A 50% chance of losing $1,000.

The expected values of A and B are the same -- $500 each, since 0.5 × $1,000 = $500 -- and likewise for C and D. So we might expect people in the first group to choose A 50% of the time and B 50% of the time, likewise C and D. But some people prefer "sure things", while others prefer to gamble. According to traditional utility theory from economics, we would expect people to choose A and C (the sure things) at roughly the same rate, and B and D (the gambles) at roughly the same rate. But they don't...

But experimental results contradict this. When faced with a gain, most people (84%) chose Alternative A (the sure gain) of $500 over Alternative B (the risky gain). But when faced with a loss, most people (70%) chose Alternative D (the risky loss) over Alternative C (the sure loss).

This gave rise to something called prospect theory, which "recognizes that people have subjective values for gains and losses". People have evolved to prefer sure gains to potential gains, and potential losses to sure losses. If you live in a world where survival is questionable and resources are scarce, this makes a lot of sense. But it also leads to interesting inconsistencies that depend on our initial outlook. Consider:

In this experiment, subjects were asked to imagine a disease outbreak that is expected to kill 600 people, and then to choose between two alternative treatment programs. Then, the subjects were divided into two groups. One group was asked to choose between these two programs for the 600 people:
  • Program A: "200 people will be saved."
  • Program B: "There is a one-third probability that 600 people will be saved, and a two-thirds probability that no people will be saved."


The second group of subjects were asked to choose between these two programs:

  • Program C: "400 people will die."
  • Program D: "There is a one-third probability that nobody will die, and a two-thirds probability that 600 people will die."

As before, the expected values of A and B are the same, likewise C and D. But in this experiment A == C and B == D -- they are just worded differently. Yet the human bias toward sure gains and potential losses holds true, and we reach an incongruous result: people overwhelmingly prefer Program A and Program D in their respective choices!
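
If you want to double-check the arithmetic behind the first experiment, a few lines of Scheme will do it. (The probability/payoff representation here is mine, purely for a back-of-the-envelope check.)

    ;; An alternative is a list of (probability . payoff) pairs.
    (define (expected-value alternative)
      (apply + (map (lambda (outcome) (* (car outcome) (cdr outcome)))
                    alternative)))

    (expected-value '((1.0 . 500)))               ; Alternative A =>  500.0
    (expected-value '((0.5 . 1000) (0.5 . 0)))    ; Alternative B =>  500.0
    (expected-value '((1.0 . -500)))              ; Alternative C => -500.0
    (expected-value '((0.5 . -1000) (0.5 . 0)))   ; Alternative D => -500.0

Identical expected values, yet very different choices -- which is the whole point.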

While Schneier looks at how these biases apply to the trade-offs we make in the world of security, I immediately began thinking of software development, and especially the so-called agile methods.

First let's think about gains. If we think not in terms of dollars but in terms of story points, we are in a scenario where gain -- an additive improvement to our situation -- is operative. It would seem that people ought to prefer small, frequent releases of software to longer-term horizons. "In our plan, we can guarantee delivery of 5 story points each week, determined weekly as we go along, or we can offer a 60% chance of delivering an average of 5 story points a week over the next 12 months." Of course, "guaranteeing" a certain number of points a week isn't the right thing to do, but we can drive our percentage up much closer to 100% the shorter the release cycle, and that begins to look like a guarantee. Phrased properly, I think managers and developers ought to be predisposed by their psychology to prefer smaller cycles. That is the good bet, evolutionarily, in the software world; we all know what vaporware is.

What about losses? For some reason, my mind turned to refactoring here. Now, most agile developers know that refactoring is a net gain, but it is usually phrased in terms of risk and loss (of immediate development time). Phrased as "refactor now, or maybe pay the price later", this choice falls prey to the human preference for potential losses over sure losses. No wonder selling refactoring in these terms is difficult! People are willing to risk carrying design debt, even when they have misjudged the probability of paying a big future cost. Maybe design debt and the prospect of future cost is the wrong metaphor for helping people see the value of refactoring.

But there is one more problem: optimism bias. It turns out that people tend to believe that they will outperform other people engaged in the same activity, and we tend to believe that more good will happen to us than bad. Why pay off design debt now? I'll manage the future trajectory of the system well enough to overcome the potential loss. We tend to underestimate both the magnitude of coming loss and the probability of incurring a loss at all. I see this in myself, in many of my students, and in many professional developers. We all think we can deliver n LOC this week even though other teams can deliver only n/2 LOC a week -- and maybe even if we delivered n/2 LOC last week. Ri-i-i-i-ight.

There is a lot of cool stuff in Schneier's paper. It offers a great tutorial on the fundamentals of human behavior and biases when reacting to and reasoning about risk and probability. It is well worth a read for how these findings apply in other areas. I plan a few more posts looking at applications in software development and CS education.


Posted by Eugene Wallingford | Permalink | Categories: Software Development

May 05, 2007 9:46 PM

Students Scoring Points

Jimmy McGinty: You know what separates the winners from the losers?
Shane Falco: The score.

-- from The Replacements

Shane Falco and Jimmy McGinty in 'The Replacements'

Seeing one of my favorite sports-cliché films again, this time in French (welcome to Montreal!), prompts me to continue my recent series of posts on student attitudes with an entry I've had in my "to write" queue for a few weeks.

The April 13 issue of The Chronicle of Higher Education contained an interesting article by Walter Tschinkel, a biology prof at Florida State University, called Just Scoring Points. (Hurry to read the full article now... The Chronicle has lowered its pay wall through May 8.)

Tschinkel argues that the common metaphors we use for teaching and learning -- filling empty vessels and building an edifice -- lead us in the wrong direction from how students actually think. He offers instead a sports metaphor:

When you play a sport, your preparation reaches a crescendo just before a match (exam). If you win the match (exam), you get points (grades) in proportion to your placement. You keep track of those points, strategizing about how to get more next time. The match leaves no residue other than the points. At the end of college, you enter the working world with your overall standing (grade-point average) and little more.

The metaphor isn't perfect, but it doesn't need to be. It only has to offer us some insight we might otherwise not have had. The points analogy explains why so many students want to know, "Is this going to be on the test?", and why a study guide prepared by the teacher is more appealing than one they prepare for themselves. It also accounts for how, this semester, so many students can fail to know something they demonstrated knowing just last semester.

And worse -- or should I say "better"!? -- this metaphor may give us insight into how we instructors contribute to the problem in how we ask questions, assign work, and evaluate our students' performance. We treat grades like times in a track meet or points on the basketball court, so why shouldn't our students? Maybe they are just adapting to the strange world we immerse them in. If we focused on competencies instead, they might take the knowledge they accrue more seriously, too.

This article offered another near-coincidence with my recent blogging, related to my observation about lack of internalization. Tschinkel describes giving a pop quiz on material he covered meticulously and unambiguously in a previous lecture. Only a quarter of his students knew the answer. He explained the material again and then gave them another pop quiz the next session. 35 percent. He explained the material yet again and gave them another pop quiz a week later. The result? 60 percent -- the same percentage who answered the same question on the final exam. That was in his freshman course; in his upper-division course, everyone finally gave the right answer -- on the fourth iteration. Sigh.

My experiment turned out better. Every student in my programming languages course ran short of time on their third scheduled quiz. I decided that it was better to find out what they know than to find out only that they needed more time, so when they arrived for our next class I handed them their quiz papers unannounced and gave them 15 more minutes to finish their answers. I told them that they were also free to change any answer they wished, even if they had gone home and looked up the answer in the meantime. That seemed like a net win: to score those points, students would have had to care enough to look up the answers after the fact! All but one student put their unexpected time to good use, and quiz scores turned out pretty well. Not as well as I might have hoped, but better.

Tschinkel's prescription is one most of us already know is good for us, even if we don't always practice what we know: stop lecturing all the time, ask students questions that require understanding to answer, and integrate material throughout our curriculum rather than teaching a bunch of artificially stand-alone courses. Reading, writing, and discussing material take more time, so we will probably cover less content, but remember: that's okay. Our students might actually learn more.

Sometimes a fun little movie can be more than sugar. It can remind us of an unexamined metaphor we live by. Even if it does so in French:

Vous savez ce qui sépare les gagnants des perdants? Les points.

(No, I don't speak French, but Google does.)


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 05, 2007 1:32 PM

Q School, Taking Exams, and Learning Languages

My previous entry grew out of a slowly-dawning realization. On Thursday, I felt another trigger in a similar vein. I heard an interview with John Feinstein, who was promoting his new book on Q School. "Q School" is how professional golfers qualify for the PGA Tour, and it isn't just a little tournament. The humor and drama in some of these stories surprised me. By nearly everyone's standard, all of these players are incredibly good, much, much better than you and especially me. What separates those who make it to the Tour from those who don't?

Feinstein said that professional golfers acknowledge there are five levels of quality in the world of golf:

  1. hitting balls well on the range
  2. playing a round of golf well
  3. playing well in a tournament
  4. playing well in a tournament under pressure, with a chance to win
  5. playing well in a major under pressure, with a chance to win

What separates those who make it from those who don't is the gap between levels 3 and 4. Then, once folks qualify for the Tour, the gap between levels 4 and 5 becomes a hurdle for players who want to be "players", not just journeymen who make a good living but remain in the shadows of the best.

I see the same thing in the world of professional tennis. On a smaller stage, I've experienced these levels as a competitor -- playing chess, competing in various academic venues, and doing academic work.

What does it take to make a step up to the next level? Hard work. Physical work or, in chess and academia, intellectual work. But mostly, it is mental. For most of the guys at Q School, and for many professional golfers and tennis players, the steps from 3 to 4 and from 4 to 5 are more about the mind than the body, more about concentration than skill.

Feinstein related a Tiger Woods statistic that points to this difference in concentration. On the PGA Tour in 2005, Tiger faced 485 putts of 5 feet or less. The typical PGA Tour pro misses an average of 1 such putt each week. Tiger missed 0. The entire season.

Zero is not about physical skill. Zero is about concentration.

This sort of concentration, the icy calm of the great ones, probably demoralizes many other players on the tour, especially the guys trying to make the move from level 4 to level 5. It might well infuriate that poor guy simply trying to qualify for the tour. He may be doing everything he possibly can to improve his skills, to improve his mental approach. Sometimes, it just isn't enough.

Why did this strike me as relevant to my day job? I listened to the Feinstein interview on the morning I gave my final exam for the semester.

Students understand course material at different levels. That is part of what grades are all about. Many students perform at different levels, on assignments and on exams. At exam time, and especially at final exam time, many students place great hope in the idea that they really do get it, but that they just aren't able to demonstrate it on the exam.

There may be guys in Q School harboring similar hope, but reality for them is simple: if you don't demonstrate, you don't advance.

It's true that exam performance is often not the best predictor of other kinds of performance in the world, and some students far exceed their academic performance level when they reach industry. But most students' hopes in this regard are misplaced. They would be much better off putting their energy into getting better. Whether they really are better than their exam performance shows or simply need better exam skills, getting better will serve them well.

But it's more than just exams. There are different levels of understanding in everything we learn, and sometimes we settle for less than we can achieve. That's what my last entry was trying to say -- there is a need to graduate from the level at which one requires external reference to a level at which one has internalized essential knowledge and can bring it to bear when needed.

I am sure someone will point me to Bloom's taxonomy, and it is certainly relevant. But that taxonomy always seems so high-falutin' to me. I'm thinking of something closer to earth, something more concrete in terms of how we learn and use programming languages. For example, there might be five levels of performance with a programming language feature:

  1. recognize an idea in code
  2. program with the idea, using external references
  3. program with the idea, without external reference, but requiring time to "reinvent" the idea
  4. program with the idea, fluently
  5. program with the idea, fluently and under pressure

I don't know if there are five levels here, or if these are the right levels, but they seem a reasonable first cut for Concourse C at Detroit Metro. (This weekend brings the OOPSLA 2007 spring planning meeting in one of North America's great international cities, Montreal.) But this idea of levels has been rolling around my mind for a while now, and this interview has brought it to the top of my stack, so maybe I'll have something more interesting to say soon.

The next step is to think about how all this matters to my students, and to me as an instructor. Knowing about the levels, what should students do? How might I feed this back into how I teach my course and how I evaluate students?

For now, my only advice to students is to do what I hope to do in a week or so: relax for a few minutes at the advent of summer!


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

May 04, 2007 11:12 PM

Internalized Knowledge and External Triggers

With the end of finals week, I've begun thinking about my experience teaching this semester. I graded the last homework assignment for my course this week and ran across this comment in one of the submissions:

I spent 4 hours or so this past week cobbling bits together, and tried to assemble them into a coherent mess, but failed. All I have are some run-time errors that don't mean much to me. At this point I would rather write the entire interpreter in another language than use Scheme.

The last sentence made me sad, even as I know that not every student will leave the course enamored with what I've taught. I was struck, once again, by receiving such a submission with no questions asked, despite e-mail everywhere and plentiful, often empty, office hours. I run into this sort of submission more frequently these days.

This term, I've also encountered student time trouble on quizzes more frequently. Even students who seem to understand the material well have run short of time. The standard comment is, "If I just had more time...". Now, if you've had me for class, even this class, you can appreciate the sentiment. But I do have a pretty good sense of what most students have been able to accomplish in the allotted quiz times over the past few years in this course. Is time trouble more frequent these days? I think so.

I'm not one who likes to talk about students in the "good ol' days". Students were no smarter, no better, and no harder working when I was in school. To think so is usually just selective (and aging) memory. But I do believe that occasionally there are systematic changes in how students behave, and recognizing these changes is important if we intend to teach them -- to help them learn -- effectively.

Consider students running short of time on a quiz. I think I understand at least part of the problem now. Students these days need a "crutch": access to reference material. Open notes are an example. Students love the idea of open-book exams. And how do they program? With uninterrupted access to all the reference material they want. The help desk in Dr. Scheme contains everything they need to know about Scheme, and then some. So, there is no need to memorize syntax. If they need help writing a letrec? Type that string into the Help Desk search box, hit return, see the canonical form, and then maybe even copy and paste an example into their code. The web and Google are likewise at their ubiquitous disposal, ready to serve their every need at any moment.
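
For what it's worth, the canonical form they find looks something like this -- a minimal sketch of my own, using the textbook mutually recursive pair rather than whatever a given student actually needs:

    ;; letrec lets a set of local bindings refer to one another,
    ;; which is what makes the mutually recursive pair work.
    (letrec ((my-even? (lambda (n) (if (zero? n) #t (my-odd?  (- n 1)))))
             (my-odd?  (lambda (n) (if (zero? n) #f (my-even? (- n 1))))))
      (my-even? 10))    ; => #t

Copy, paste, tweak, and move on -- no internalization required.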

This is how they expect the world to be.

But that isn't how exams work, or job interviews, or most jobs. You have to internalize some knowledge to be effective.

What was it like in the old days? I think just as many students had a similar resistance to internalizing, but we all had fewer alternatives. We didn't have the technology to pull it off, so we adapted. We crammed and memorized. Our only other choice was to find a new major. (And what major didn't require knowing some stuff -- at least any major that also offered the prospect of a paying job?)

Maybe our technology is making us dumber because we become less adaptive to our surroundings. I think that's probably an overstatement. Perhaps even the opposite is true: we are becoming more enabled, by offloading unnecessary data and focusing on more important stuff. I think that's probably an overstatement, too. In any case, the phenomenon is probably something to be aware of.

As an instructor, what is my solution? I can imagine many possibilities. Shorten my exams and quizzes. Allow external support, such as open books and notes. Give my exams on-line, in an environment that more closely resembles the students' working environment. Educate them about the way the world works, and try to help them more directly to move toward a world in which they internalize basic knowledge and skills. I think the last of these is the right answer, and worthy of the effort, even if it turns out to be a futile attempt. But whatever I try, being aware of my students' more fundamental dependence on external triggers will help me in future courses. If I take it into account...


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 03, 2007 2:52 PM

The End Is Near

The end of our academic year has begun. The first task I face this time of year is reviewing activity reports submitted by faculty to document their teaching, research, and service activities since last May. The proximate cause of my reading these reports is that I must complete a salary worksheet for the Dean. The worksheet shows each faculty member's current salary, plus contractually mandated raises for next year. My contribution is a single entry for each person: a "merit adjustment".

I think that Eric Sink has expressed how I feel about this task as well as I could, and certainly more colorfully:

For most people, the touchiest and most sensitive topics are money and sex. I'm not expected to decide how often everybody gets laid. Why do I have to decide how much everybody gets paid?

The academic world is a bit different than the ISV world in which Eric lives. Once a faculty member earns tenure at the time of promotion to associate professor, these annual reviews have little to do with whether or not a faculty member will have a job next year. At some universities, department heads and deans may have access to a larger pool of money for merit pay, but here faculty salary increases are driven more by across-the-board money than by merit. And in the end, I think salary is a relatively small part of why most faculty members are at a university, or even a particular university.

So. My discretion has relatively little effect on faculty members' salaries, but I still feel funny exercising it. Allocation decisions are inherently subjective. Weighing contributions across the teaching, research, and service categories is difficult, and sometimes weighing contributions within the categories is no easier. I am a strong believer in recognizing differences, but that is easier said than done.

The task is complicated by an interesting effect I've noticed over the last fifteen years. When the pool of money is small, it has little use as an incentive, even for folks who may be motivated in this way. But even then it has powerful possibilities as a demotivator. The effect of a $0 in that slot, or a token value that is emotionally equivalent to a $0, is remarkable -- and sad. So, while the absolute effect of my salary decisions is quite small, their relative effect can be much larger.

I don't have any great wisdom or insight into the process yet. In fact, I'm pretty sure that I can improve upon what I've done the last two years. Eric Sink's article does a nice job exploring the space and putting me into the right frame of mind for doing the task (with appropriate context-specific modifications to all the details, of course). If I remain head for much longer, this is one area that I would like to think harder about.


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading