June 30, 2009 2:12 PM

The Last Monday in June

... is -- since 2003 -- the traditional kick-off to my fall marathon season. In this short-lived tradition, I run the half-marathon at our annual city festival on the last Sunday in June (*), then sit down on Monday and plan my training schedule for an October marathon. Last year, months of whatever is wrong with me physically interrupted the tradition, but my mind is still tuned to the rhythm.

This year, I ran a half marathon that went better than expected in early May. But I have not been able to raise my weekly mileage beyond 28-30 or so since then, due to fatigue, and so I opted for the 5K at the city festival. I had run a 5K three months ago, on very little base mileage, and done better than expected. That race led me to have higher hopes this time out. Not so. The conditions were different, though. First, I had unwisely run hard on Friday morning, PRing my usual 5-mile route (such as that is in these days of low mileage and slower paces). Second, this is a much bigger race, and I got caught in the crowd for half a mile and only felt free at about the first mile marker. I ended up running a faster time, but not by much, though my last 2.1 miles took only 15:25 or so. (In the previous 5K, I had faded badly in the last mile after running the first two miles in 14:43...)

Where does that leave me? All I know is that when it came time on Sunday for the half-marathoners to turn left and the 5Kers to go straight, I really wanted to turn -- tired legs and no preparation notwithstanding. Mentally, I would like to give a fall race a try. Will my body let me?

I took today off to let my quads rest. I think that I will try to finish out the rest of the week with a no-frills running schedule: 5 miles on each of Wednesday, Thursday, and Friday, and then 12 miles on Sunday. If next Monday finds me well enough to contemplate more, I will treat this week as Week One of a low-mileage, low-pressure training plan. I'll design something that gets me ready for a marathon in late October or early November. Then I'll see where it takes me.

If not, then maybe I'll shoot for a couple of fall halfs and see what my body allows. My mind is saying, "Go."

~~~~

(*) Of course, if June 30 is the last Sunday in June, then my training season starts on the first Monday in July. This hasn't happened to me yet, but only because 2008 was a leap year! The contingency may have occurred to you if you have ever tried to write code that manages all of the complexity of dates correctly. Or perhaps you have given students programming assignments that bump into such rules. Writing test cases for code exposes all of the nooks and crannies of an algorithm.
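For the curious, the date rule itself is only a couple of lines once you trust your date library. Here is a minimal sketch in Python (the function is my own toy, not anyone's production code):

    import datetime

    def last_sunday_in_june(year):
        """Return the date of the last Sunday in June of the given year."""
        june30 = datetime.date(year, 6, 30)   # June always has 30 days
        # weekday() counts Monday as 0 ... Sunday as 6
        return june30 - datetime.timedelta(days=(june30.weekday() + 1) % 7)

    # The contingency in question: the race Sunday falls on June 30 itself,
    # which pushes my planning Monday into July.
    print([y for y in range(2003, 2031)
           if last_sunday_in_june(y).day == 30])   # [2013, 2019, 2024, 2030]

A test case like that last line is exactly the sort of thing that flushes the corner cases out of date-handling code.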


Posted by Eugene Wallingford | Permalink | Categories: Running

June 26, 2009 4:01 PM

The Why of X

Where did the title of my previous entry come from? Two more quick hits tell a story.

Factoid of the Day

On a walk the other night, my daughter asked why we called variables x. She is reviewing some math this summer in preparation to study algebra this fall. All I could say was, "I don't know."

Before I had a chance to look into the reason, one explanation fell into my lap. I was reading an article called The Shakespeare of Iran, which I ran across in a tweet somewhere. And there was an answer: the great Omar Khayyam.

Omar was the first Persian mathematician to call the unknown factor of an equation (i.e., the x) shiy (meaning thing or something in Arabic). This word was transliterated to Spanish during the Middle Ages as xay, and, from there, it became popular among European mathematicians to call the unknown factor either xay, or more usually by its abbreviated form, x, which is the reason that unknown factors are usually represented by an x.

However, I can't confirm that Khayyam was first. Both Wikipedia and another source also report the Arabic-language connection, and the latter mentions Khayyam, but not specifically as the source. That author also notes that "xenos" is the Greek word for "unknown" and so could be the root. But I haven't found a reference for this use of x that predates Khayyam, either. So maybe.

My daughter and I ended up with as much of a history lesson as a mathematical terminology lesson. I like that.

Quote of the Day

Yesterday afternoon, the same daughter was listening in on a conversation between me and a colleague about doing math and science, teaching math and science, and how poorly we do it. After we mentioned K-12 education and how students learn to think of science and math as "hard" and "for the brains", she joined the conversation with:

Don't ask teachers, 'Why?' They don't know, and they act like it's not important.

I was floored.

She is right, of course. Even our elementary school children notice this phenomenon, drawing on their own experiences with teachers who diminish or dismiss the very questions we want our children to ask. Why? is the question that makes science and math what they are.

Maybe the teacher knows the answer and doesn't want to take the time to answer it. Maybe she knows the answer but doesn't know how to answer it in a way that a 4th- or 6th- or 8th-grader can understand. Maybe he really doesn't know the answer -- a condition I fear happens all too often. No matter; the damage is done when the teacher doesn't answer, and the child figures the teacher doesn't know. Science and math are so hard that the teacher doesn't get it either! Better move on to something else. Sigh.

This problem doesn't occur only in elementary school or high school. How often do college professors send the same signal? And how often do college professors not know why?

Sometimes, truth hits me in the face when I least expect it. My daughters keep on teaching me.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

June 25, 2009 9:48 PM

X of the Day

Quick hits, for different values of x, of course, but also different values of "the day" I encountered them. I'm slow, and busier than I'd like.

Tweet of the Day

Courtesy of Glenn Vanderburg:

Poor programmers will move heaven and earth to do the wrong thing. Weak tools can't limit the damage they'll do.

Vanderburg is likely talking about professional programmers. I have experienced this truth when working with students. At first, it surprised me when students learning OOP would contort their code into the strangest configurations to avoid using the OO techniques they were learning. Why use a class? A fifty- or hundred-line method will do nicely.

Then, students learning functional programming would seek out arcane language features and workarounds found on the Internet to avoid trying out the functional patterns they had used in class. What could have been ten lines of transparent Scheme code in two mutually recursive functions became fifteen or more of the most painfully tortured C code wrapped in a thin veil of Scheme.
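For readers who don't speak Scheme, here is the flavor of mutual recursion in a few lines of Python (a toy example of the pattern, not the actual assignment):

    def is_even(n):
        return True if n == 0 else is_odd(n - 1)    # even: zero, or an odd predecessor

    def is_odd(n):
        return False if n == 0 else is_even(n - 1)  # odd: non-zero, with an even predecessor

    print(is_even(10), is_odd(10))   # True False

Each function leans on the other, and the pair reads almost like the English definition. That transparency is what my students worked so hard to avoid.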

I've seen this phenomenon in other contexts, too, like when students take an elective course called Agile Software Development and go out of their way to do "the wrong thing". Why bother with those unit tests? We don't really need to try pair programming, do we? Refactor -- what's that?

This feature of programmers and learners has made me think harder about how to help them see the value in actually trying the techniques they are supposed to learn. I don't succeed as often as I'd like.

Comic of the Day

Hammock dwellers, unite!

[comic: the 2009-06-23 Wizard of Id, on professors]

If only. If only. When does summer break start?


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

June 24, 2009 8:13 AM

Brains, Patterns, and Persistence

I like to solve the Celebrity Cipher in my daily paper. Each puzzle is a mixed alphabet substitution cipher on a quote by someone -- a "celebrity", loosely considered -- followed by the speaker's name, sometimes prefixed with a title or short description. Lately I've been challenging myself to solve the puzzle in my head, without writing any letters down, even once I'm sure of them. Crazy, I know, but this makes the easier puzzles more challenging now that I have gotten pretty good at solving them with pen in hand.

(Spoiler alert... If you like to do this puzzle, too, and have not yet solved the June 22 cipher, turn away now. I am about to give the answer away!)

Yesterday I was working on a puzzle, and this was the speaker phrase:

IWHNN TOXFZRXNYHO NXKJHSSA YXOYXEBUHO

I had looked at the quote itself for a couple of minutes and so was operating on an initial hypothesis that YWH was the word the. I stared at the speaker for a while... IWHNN would be IheNN. Double letters to end the third word, which is probably the first name. N could be s, or maybe l. s... That would be the first letter of the first name.

And then I saw it, in whole cloth:

Chess grandmaster Savielly Tartakower

Please don't think less of me. I'm not a freak. Really.

[photo of Savielly Tartakower]

How very strange. I have no special mental powers. I do have some experience solving these puzzles, of course, but this phrase is unusual both in the prefix phrase and in the obscurity of the speaker. Yes, I once played a lot of chess and did know of Tartakower, a French-Polish player of the early 20th century. But how did I see this answer?

The human brain amazes me almost every day with its ability to find, recognize, and impose patterns on the world. Practice and exposure to lots and lots of data is one of the ways it learns these patterns. That is part of how I am able to solve these ciphers most days -- experience makes patterns appear to me, unbidden by conscious thought. There may be other paths to mastery, but I know of no other reliable substitute for practice.

What about the rest of the puzzle? From the letter pairs in the speaker phrase, I was able to reconstruct the quote itself with little effort:

Victory goes to the player who makes the next-to-last mistake.
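Mechanically, of course, there is no magic in that step. Once the speaker phrase gives up its letters, the rest is bookkeeping that a few lines of Python could do. A sketch of the idea (the quote's ciphertext isn't reproduced here, so I use the speaker phrase itself to check that the implied key is consistent):

    cipher = "IWHNN TOXFZRXNYHO NXKJHSSA YXOYXEBUHO"
    plain  = "chess grandmaster savielly tartakower"

    key = {}
    for c, p in zip(cipher, plain):
        assert key.setdefault(c, p) == p   # each cipher letter maps one way only

    def decode(text):
        return "".join(key.get(ch, "?") for ch in text)

    print(decode(cipher))   # chess grandmaster savielly tartakower

My brain just happens to run the loop without being asked.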

Ah, Tartakower's line -- an old familiar one. If we follow it to its logical conclusion, the quote offers good advice for much of life. You never know which mistake will be the next-to-last, or the last. Keep playing to win. If you learn from your mistakes, you'll start to make fewer, which increases the probability that your opponent will make the last mistake of the game.

Even in non-adversarial situations, or situations in which there is no obvious single adversary, this is a good mindset to have. People who embrace failure persist. They get better, but perhaps more importantly they simply survive. You have to be in the game when your opportunity comes -- or when your opponent makes the ultimate mistake.

Like so many great lines, Tartakower's is not 100% accurate in all cases. As an accomplished chess player, he certainly knew that the best players can lose without ever making an obvious mistake. Some of my favorite games of all time are analyzed in My Sixty Memorable Games, by Bobby Fischer himself. It includes games in which the conquered player never made the move that lost. Instead, the loser accreted small disadvantages, or drifted off theme, and suddenly the position was unfavorable. But looking back, Fischer could find no obvious improvement. Growing up, this fascinated me -- the loser had to make a mistake, right? The winner had to make a killer move... Perhaps not.

Even so, the spirit of Tartakower's advice holds. Play in this moment. You never know which mistake will be the next-to-last, or the last. Keep playing.

At this time of year, when I look back over the past twelve months of performing tasks that do not come naturally to me and ahead to next year's vision and duties, this advice gives me comfort.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

June 17, 2009 9:48 PM

Another Connection to Journalism

[photo of Dave Winer]

While reading about the fate of newspapers prior to writing my recent entry on whether universities are next, I ran across a blog entry by Dave Winer called If you don't like the news.... Winer had attended a panel discussion at the UC-Berkeley school of journalism. After hearing what he considered the standard "blanket condemnation of the web" by the journalists there, he was thinking about all the blogs he would love to have shown them -- examples of experts and citizens alike writing about economics, politics, and the world; examples of a new sort of journalism, made possible by the web, which give him hope for the future of ideas on the internet.

Here is the money quote for me:

I would also say to the assembled educators -- you owe it to the next generations, who you serve, to prepare them for the world they will live in as adults, not the world we grew up in. Teach all of them the basics of journalism, no matter what they came to Cal to study. Everyone is now a journalist. You'll see an explosion in your craft, but it will cease to be a profession.

Replace "journalism" with "computer science", and "journalist" with "programmer", and this statement fits perfectly with the theme of much of this blog for the past couple of years. I would be happy to say this to my fellow computer science educators: Everyone should now be a programmer. We'll see an explosion in our craft.

Will programming cease to be a profession? I don't think so, because there is still a kind and level of programming that goes beyond what most people will want to do. Some of us will remain the implementors of certain tools for others to use, but more and more we will empower others to make the tools they need to think, do, and maybe even play.

Are academic computer scientists ready to make this shift in mindset? No more so than academic journalists, I suspect. Are practicing programmers? No more so than practicing journalists, I suspect.

Purely by happenstance, I ran across another quote from Winer this week, one that expresses something about programming from the heart of a programmer:

i wasn't born a programmer.
i became one because i was impatient.
-- @davewiner

I suspect that a lot of you know just what he means. How do we cultivate the right sort of impatience in our students?


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development

June 13, 2009 7:16 PM

Agile Moments While Reading the Newspaper

The first: Our local paper carries a parenting advice column by John Rosemond, an advocate of traditional parenting. In Wednesday's column, a parent asked how to handle a child who refuses to eat his dinner. Rosemond responded that the parents should calmly, firmly, and persistently expect the child to eat the meal -- even if it meant that the child went hungry that night by refusing.

[Little Johnny] will survive this ordeal -- it may take several weeks from start to finish -- with significantly lower self-esteem and a significantly more liberal palate, meaning that he will be a much happier child.

If you know Rosemond, you'll recognize this advice.

I couldn't help thinking about what happens when we adults learn a new programming style (object-oriented or functional programming), a new programming technique (test-driven development, pair programming), or even a new tool that changes our work flow (say, SVN or JUnit). Calm, firm, persistent self-discipline or coaching is often the path to success. In many ways, Rosemond's advice works more easily with 3- or 5-year-olds than with college students or adults, because the adults have the option of leaving the room. Then again, the coach or teacher has less motivation to ensure the change sticks -- that's up to the learner.

I also couldn't help thinking how often college students and adults behave like 3- and 5-year-olds.

The second: Our paper also carries a medical advice column by a Dr. Gott, an older doctor who harkens back to an earlier day of doctor-patient relations. (There is a pattern here.) In Wednesday's column, the good doctor said about a particular diagnosis:

There is no laboratory or X-ray test to confirm or rule out the condition.

My first thought was, well, then how do we know it exists at all? This is a natural reaction for a scientist -- or pragmatist -- to have. I think this means that we don't currently have a laboratory or X-ray test for the presence or absence of this condition. Or there may be another kind of test that will tell us whether the condition exists, such as a stress test or an MRI.

Without any test, how can we know that something is? We may find out after it kills the host -- but then we would need a post-mortem test. While the patient lives, there could be a treatment regimen that works reliably in the face of the symptoms. This could provide the evidence we need to say that a particular something was present. But if the treatment fails, can we rule out the condition? Not usually, because there are other reasons that a treatment might fail.

We face a similar situation in software with bugs. When we can't reproduce a bug, at least not reliably, we have a hard time fixing it. Whether we know the problem exists depends on which side of the software we live on... If I am the user who encounters the problem, I know it exists. If I'm the developer, then maybe I don't. It's easy for me as developer to assume that there is something wrong with the user, not my lovingly handcrafted code. When the program involves threading or a complex web of interactions among several systems, we are more inclined to recognize that a problem exists -- but which problem? And where? Oh, to have a test... I can only think of two software examples of reliable treatment regimens that may tell us something was wrong: rebooting the machine and reinstalling the program. (Hey to Microsoft.) But those are such heavy-handed treatments that they can't give us much evidence about a specific bug.

There is, of course, the old saying of TDD wags: Code without a test doesn't exist. Scoff at that if you want, but it is a very nice guideline to live by.
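In that spirit, the first step in making an elusive bug exist is to write the test that fails because of it. A minimal sketch (the function and its bug are invented for illustration):

    import unittest

    def mean(xs):
        return sum(xs) / len(xs)   # divides by zero on an empty list

    class MeanTest(unittest.TestCase):
        def test_mean_of_empty_list(self):
            # Until this test existed, neither did the bug -- officially, anyway.
            self.assertEqual(mean([]), 0.0)

    if __name__ == "__main__":
        unittest.main()

Once the red bar appears, the bug exists for everyone, not just the unlucky user.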

To close, here are my favorite new phrases from stuff I've been reading:

Expect to see these jewels used in an article sometime soon.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

June 11, 2009 8:24 PM

Revolution Out There -- and Maybe In Here

(Warning: This is longer than my usual entry.)

In recent weeks I have found myself reading with a perverse fascination some of the abundant articles about the future of newspapers and journalism. Clay Shirky's Newspapers and Thinking the Unthinkable receives a deserved mention in most of them. His essay reminds us, among other things, that revolutions change the rules that define our world. This means that living through a revolution is uncomfortable for most people -- and dangerous to the people most invested in the old order. The ultimate source of the peril is lack of imagination; we are so defined by the rules that we forget they are not universal laws but human constructs.

I'm not usually the sort of person attracted to train wrecks, but that's how I feel about the quandary facing the newspaper industry. Many people in and out of the industry like to blame the internet and web for the problem, but it is more complicated than that. Yes, the explosion of information technology has played a role in creating difficulties for traditional media, but as much as it causes the problems, I think it exposes problems that were already there. Newspapers battle forces from all sides, not the least of which is the decline -- or death? -- of advertising, which may soon be known as a phenomenon most peculiar to the 20th century. The web has helped expose this problem, with metrics that show just how little web ads affect reader behavior. It has also simply given people alternatives to media that were already fading. Newspapers aren't alone.

This afternoon, I read Xark's The Newspaper Suicide Pact and was finally struck by another perverse thought, a fear because it hits closer to my home. What if universities are next? Are we already in a decline that will become apparent only later to those of us who are on the inside?

Indications of the danger are all around. As in the newspaper industry, money is at the root of many problems. The cost of tuition has been rising much faster than inflation for a quarter of a century. At my university, it has more than doubled in the 2000s. Our costs, many self-imposed, rise at the same time that state funding for universities falls. For many years, students offset the gap by borrowing the difference. This solution is bumping into a new reality now, with the pool of money available for student loans shrinking and the precipitous decline in housing equity for many eroding borrowing ability. Some may see this as a good thing, as our students have seen a rapid growth in indebtedness at graduation, outpacing salaries in even the best-paying fields. Last week, many people around here were agog at a report that my state's university grads incur more student loan debt than any other state's. (We're #1!)

Like newspapers, universities now operate in a world where plentiful information is available on-line. Sometimes it is free, and other times it is much less expensive than the cost of taking a course on the subject. Literate, disciplined people can create a decent education for themselves on-line. Perhaps universities serve primarily the middle and lower tier of students, who haven't the initiative or discipline to do it on their own?

I have no numbers to support these rash thoughts, though journalists and others in the newspaper industry do have ample evidence for fear. University enrollments depend mostly on the demographics of their main audience: population growth, economics, and culture. Students also come for a social purpose. But I think the main driver for many students to matriculate is industry's de facto use of the college degree as the entry credential to the workplace. In times of alternatives and tight money, universities benefit from industry's having outsourced the credentialing function to them.

The university's situation resembles the newspaper's in other ways, too. We offer a similar defense of why the world needs us: in addition to creating knowledge, we sort it, we package it for presentation, and we validate its authenticity and authority. If students start educating themselves using resources freely or cheaply available outside the university, how will we know that they are learning the right stuff? Don't get most academics started on the topic of for-profits like Kaplan University and the University of Phoenix; they are the university's whipping boy. The news industry has one, too: bloggers.

Newspaper publishers talk a lot these days about requiring readers to pay for content. In a certain sense, that is what students do: pay universities for content. Now, though, the web gives everyone access to on-line lectures, open-source lecture notes, the full text of books, technical articles, and ... the list goes on. Why should they pay?

Too many publishers argue that their content is better, more professional, and so stand behind "the reasonable idea that people should have to pay for the professionally produced content they consume". Shirky calls this a "post-rational demand", one that asks readers to behave in a way "intended to restore media companies to the profitability ordained to them by God Almighty" -- despite living in a world where such behaviors are as foreign as living in log cabins and riding horses for transportation. Is the university's self-justification as irrational? Is it becoming more irrational every year?

Some newspapers decide to charge for content as a way to prop up their traditional revenue stream, print subscriptions. Evidence suggests that this not only doesn't work (people inclined to drop their print subscriptions won't be deterred by pay walls) but that it is counter-productive: the loss of on-line visitors causes a decline in web advertising revenue that is much greater than the on-line reader revenue earned. Again, this is pure speculation, but I suspect that if universities try to charge for their on-line content they will see similar results.

The right reason to charge for on-line content is to create a new revenue stream, one that couldn't exist in the realm of print. This is where creative thinking will help to build an economically viable "new media". This is likely the right path for universities, too. My oldest but often most creative-thinking colleague has been suggesting this as a path for my school to consider for a few years. My department is working on one niche offering now: on-line courses aimed at a specific audience that might well take them elsewhere if we don't offer them, and who then have a smoother transition into full university admission later. We have other possibilities in mind, in particular as part of a graduate program that already attracts a large number of people who work full time in other cities.

But then again, there are schools like Harvard, MIT, and Stanford with open course initiatives, placing material on-line for free. How can a mid-sized, non-research public university compete with that content, in that market? How will such schools even maintain their traditional revenue streams if costs continue to rise and high quality on-line material is readily available?

In the middle of a revolution, no one knows the right answers, and there is great value in trying different ideas. Most any school can start with the obvious: lectures on-line, increased use of collaboration tools such as wikis and chats and blogs -- and Twitter and Facebook, and whatever comes next. These tools help us to connect with students, to make knowledge real, to participate in the learning. Some of the obvious paths may be part of the solution. Perhaps all of them are wrong. But as Shirky and others tell us, we need to try all sorts of experiments until we find the right solution. We are not likely to find it by looking at what we have always done. The rules are changing. The reactions of many in the academy tell a sad story. They are dismissive, or simply uninterested. That sounds a lot like the newspapers, too. Maybe people are simply scared and so hole up in a bunker constructed out of comfortable experience.

Like newspapers, some institutions of higher education are positioned to survive a revolution. Small, focused liberal arts colleges and technical universities cater to specific audiences with specific curricula. Of course, the "unique nationals" (schools such as Harvard, MIT, and Stanford) and public research universities with national brands (schools such as Cal-Berkeley and Michigan) sit well. Other research schools do, too, because their mission goes beyond the teaching of undergraduates. Then again, many of those schools are built on an economic model that some academics think is untenable in the long run. (I wrote about that article last month, in another context.)

The schools most in danger are the middle tier of so-called teaching universities and low-grade research schools. How will they compete with the surviving traditional powers or the wealth of information and knowledge available on-line? This is one reason I embrace our president's goal of going from good to great -- focusing our major efforts on a few things that we do really well, perhaps better than anyone, nurturing those areas with resources and attention, and then building our institution's mission and strategy around this powerful core. There is no guarantee that this approach will succeed, but it is perhaps the only path that offers a reasonable chance to schools like ours. We do have one competitive advantage over many of our competitors: we are big enough, in research and in enrollment, to offer students a rich learning environment and a wide range of courses of study, yet small enough to offer a personal touch otherwise available only at much smaller schools. This is the same major asset that schools like ours have always had. When we find ourselves competing in a new arena and under different conditions, this asset must manifest itself in new forms -- but it must remain the core around which we build.

One of the collateral industries built around universities, textbook publishing, has been facing this problem in much the same way as newspapers for a while now. The web created a marketplace with less friction, which has made it harder for publishers to make the return on investment to which they had grown accustomed. As textbook prices rise, students look for alternatives. Of course, students always have: using old editions, using library copies, sharing. Those are the old strategies -- I used them in school. But today's students have more options. They can buy from overseas dealers. They can make low-cost copies much more readily. Many of my students have begun to bypass the assigned texts altogether and rely on free sources available on-line. Compassionate faculty look for ways to help students, too. They support old editions. They post lecture notes and course materials on-line. They even write their own textbooks and post them on-line. Here the textbook publishers cross paths with the newspapers. The web reduces entry costs to the point that almost anyone can enter and compete. And publishers shouldn't kid themselves; some of these on-line texts are really good books.

When I think about the case of computer science in particular, I really wonder. I see the wealth of wonderful information available on line. Free textbooks. Whole courses taught or recorded. Yes, blogs. Open-source software communities. User communities built around specific technologies. Academics and practitioners writing marvelous material and giving it away. I wonder, as many do about journalists, whether academics will be able to continue in this way if the university structure on which they build their careers changes or disappears. What experiments will find the successful models of tomorrow's schools?

Were I graduating from high school today, would I need a university education to prepare for a career in the software industry? Sure, most self-educated students would have gaps in their learning, but don't today's university graduates? And are the gaps in the self-educated's preparation as costly as 4+ years paying tuition and taking out loans? What if I worked the same 12, 14, or 16 hours a day (or more) reading, studying, writing, contributing to an open-source project, interacting on-line? Would I be able to marshal the initiative or discipline necessary to do this?

In my time teaching, I have encountered a few students capable of doing this, if they had wanted or needed to. A couple have gone to school and mostly gotten by that way anyway, working on the side, developing careers or their own start-up companies. Their real focus was on their own education, not on the details of any course we set before them.

Don't get me wrong. I believe in the mission of my school and of universities more generally. I believe that there is value in an on-campus experience, an immersion in a community constructed for the primary purpose of exploring ideas, learning and doing together. When else will students have an opportunity to focus full-time on learning across the spectrum of human knowledge, growing as a person and as a future professional? This is probably the best of what we offer: a learning community, focused on ideas broad and deep. We have research labs and teams competing in cyberdefense and programming contests. The whole is greater than the sum of its parts, both in the major and in liberal education.

But for how many students is this the college experience now, even when they live on campus? For many the focus is not on learning but on drinking, social life, video games... That's long been the case to some extent, but the economic model is changing. Is it cost-effective for today's students, who sometimes find themselves working 30 or more hours a week to pay for tuition and lifestyle while trying to take a full load of classes at the same time? How do we make the great value of a university education attractive in a new world? How do we make it a value?

And how long will universities be uniquely positioned to offer this value? Newspapers used to be uniquely positioned to offer a value no one else could. That has changed, and most in the industry didn't see it coming (or did, and averted their eyes rather than face the brutal facts).

I'd like also to say that expertise distinguishes the university from its on-line competition. That has been true in the past and remains true today, for the most part. But in a discipline like computer science, with a large professional component that attracts most of its students, where grads will enter software development or networking... there is an awesome amount of expertise out in the world. More and more of those talented people are now sharing what they know on-line.

There is good news. Some people still believe in the value of a university education. Many students, and especially their parents, still believe. During the summer we do freshman orientation twice a week, with an occasional transfer student orientation thrown into the mix. People come to us eagerly, willing to spend out of their want or to take on massive debts to buy what we sell. Some come for jobs, but most still have at least a little of the idealism of education. When I think about their act in light of all that is going on in the world, I am humbled. We owe them something as valuable as what they surrender. We owe them an experience befitting the ideal. This humbles me, but it also invigorates and scares me.

This article is probably more dark fantasy than reality. Still, I wonder how much of what I believe I really should believe, because it's right, and how much is merely a product of my lack of imagination. I am certain that I'm living in the middle of a revolution. I don't know how well I see or understand it. I am also certain of this: I don't want someone to be writing this speech about universities in a few years with me among its clueless intended audience.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

June 05, 2009 3:25 PM

Paying for Value or Paying for Time

Brian Marick tweeted about his mini-blog post Pay me until you're done, which got me to thinking. The idea is something like this: Many agile consultants work in an agile way, attacking the highest-value issue they can in a given situation. If the value of the issues to work on decreases with time, there will come a point at which the consultant's weekly stipend exceeds the value of the work he is doing. Maybe the client should stop buying services at that point.

My first thought was, "Yes, but." (I am far too prone to that!)

First, the "yes": In the general case of consulting, as opposed to contract work, the consultant's run will end as his marginal effect on the company approaches 0. Marick is being honest about his value. At some point, the value of his marginal contribution will fall below the price he is charging that week. Why not have the client end the arrangement at that point, or at least have the option to? This is a nice twist on our usual thinking.

Now for the "but". As I tweeted back, this feels a bit like Zeno's Paradox. Marick the consultant covers not half the distance from start to finish each week, but the most valuable piece of ground remaining. With each week, he covers increasingly less valuable distance. So our consultant, cast in the role of Achilles, concedes the race and says, okay, so stop paying me.

This sounds noble, but remember: Achilles would win the race. We unwind Zeno's Paradox when we realize that the sum of an infinite series can be a finite number -- and that number may be just small enough for Achilles to catch the tortoise. This works only for infinite series that behave in a particular way.
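To make the arithmetic concrete, suppose -- the numbers are invented purely for illustration -- that the first week of consulting is worth $500 to the client and that each later week is worth 80% of the week before, against a flat $150 weekly fee. A quick sketch:

    value, fee = 500.0, 150.0    # week-one value and the weekly fee -- made-up numbers
    week = 1
    while value >= fee:          # Marick's stopping rule: quit when a week stops paying
        print(f"week {week}: value {value:.2f}")
        value *= 0.8             # each week's best remaining task is worth 80% of last week's
        week += 1

    # The engagement ends after week 6 here, yet even run forever the total value
    # is finite: 500 / (1 - 0.8) = 2500, just as Achilles's distances sum to a number.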

Crazy, I know, but this is how the qualification of the "yes" arose in my mind. Maybe the consultant helps to create a change in his client that changes the nature of the series of tasks he is working on. New ideas might create new or qualitatively different tasks to do. The change may alter the value of an existing task, or reorder the priorities of the remaining tasks. If the nature of the series changes, it may cause the value of the series to change, too. If so, then the client may well want to keep the consultant around, but doing something different than the original set of issues would have called for.

Another thought: Assume that the conditions that Marick described do hold. Should the compensation model be revised? He seems to be assuming that the consultant charges the same amount for each week of work, with the value of the tasks performed early being greater than that amount and the value of the tasks performed later being less than that amount. If that is true, then early on the consultant is bringing in substantially more value than he costs. If the client pulls the plug as soon as the value proposition turns in its favor, then the consultant ends up receiving less than the original contract called for yet providing more than average value for the time period. If the consultant thinks that is fair, great. What if not? Perhaps the consultant should charge more in the early weeks, when he is providing more value, than in later weeks? Or maybe the client could pay a fee to "buy out" the rest of the contract? (I'm not a professional consultant, so take that into account when evaluating my ideas about consultant compensation...)

And another thought: Does this apply to what happens when a professor teaches a class? In a way, I think it does. When I introduce a new area to students, it may well be the case that the biggest return on the time we spend (and the biggest bang for the students' tuition dollars) happens in the first weeks. If the course is successful, then most students will become increasingly self-sufficient in the area as the semester goes on. This is more likely the case for upper-division courses than for freshmen. What would it be like for a student to decide to opt out of the course at the point where she feels like she has stopped receiving fair value for the time being spent? Learning isn't the same as a business transaction, but this does have an appealing feel to it.

The university model for courses doesn't support Marick's opt-out well. The best students in a course often reach a point where they are self-sufficient or nearly so, and they are "stuck". The "but" in our teaching model is that we teach an audience larger than one, and the students can be at quite different levels in background and understanding. Only the best students reach a point where opting out would make sense; the rest need more (and a few need a lot more -- more than one semester can offer!).

The good news is that the unevenness imposed by our course model doesn't hurt most of those best students. They are usually the ones who are able to make value out of their time in the class and with the professor regardless of what is happening in the classroom. They not only survive the latency, but thrive by veering off in their own direction, asking good questions and doing their own programming, reading, and thinking outside of class. This way of thinking about the learning "transaction" of a course may help to explain another class of students. We all know students who are quite bright but end up struggling through academic courses and programs. Perhaps these students, despite their intelligence and aptitude for the discipline, don't have the skills or temperament to make value out of the latency between the point they stop receiving net value and the end of the course. This inability creates problems for them (among them, boredom and low grades). Some instructors are better able to recognize this situation and address it through one-on-one engagement. Some would like to help but are in a context that limits them. It's hard to find time for a lot of one-on-one instruction when you teach three large sections and are trying to do research and are expected to meet all of the other expectations of a university prof.

Sorry for the digression from Marick's thought experiment, which is intriguing in its own setting. But I have learned a lot from applying agile development ideas to my running. I have found places where the new twist helps me and others where the analogy fails. I can't shake the urge to do the same on occasion with how we teach.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading, Software Development, Teaching and Learning

June 04, 2009 8:38 PM

The Next 700 ...

I have regarded it as the highest goal
of programming language design to enable
good ideas to be elegantly expressed.
-- Tony Hoare, The Emperor's Old Clothes

One of computing's pioneers has passed. Word came this morning from Queen Mary, University of London, that Peter Landin died yesterday. Landin is perhaps not as well known on this side of the pond as he should be. His university web page lists his research interests as "logic and foundations of programming; theoretical computations; programming foundations; computer languages", but it doesn't say much about his seminal contributions in those areas. Nor does it mention his role helping to establish computer science as a discipline in the UK.

In the world of programming languages, though, Landin is well-known. He was one of the early researchers influenced by McCarthy's Lisp, and he helped to develop the connection between the lambda calculus and the idea of a programming language. In turn, he influenced Tony Hoare and Hoare's creation of Quicksort. This followed his involvement in the design of Algol 60, which introduced recursion to a wider world of computer scientists and programmers. Algol 60 was in many ways the alpha of modern programming languages.

I am probably like many computer scientists in having read only one of Landin's papers, The Next 700 Programming Languages. I remember first running across this paper while studying functional languages a decade or so ago. Its title intrigued me, and its publication date -- July 1965 -- made me wonder just what he could mean by it. I was blown away. He distinguished among four different levels of features that denote a language: physical, logical, abstract, and "applicative expressions". The last of these abstracted even more grammatical detail away from what many of us tend to think of as the abstract syntax tree of a program. He also wrote about the role of where clauses in specifying local bindings naturally, just as mathematicians long have.

Before reading this paper, I had never seen a discussion of the physical appearance of programs written in a language. Re-reading the paper now, I had forgotten that Landin used an analogy to soccer, the off-side rule, to define a class of physical appearances in which indentation mattered. After we as a discipline left the punch card behind, for many years it was unstylish at best, and heresy at worst, for whitespace to matter in programming language design. These days, languages such as Python and Haskell have sidestepped this tradition and put whitespace to good use.
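Python is the handiest illustration of the off-side rule at work today: indentation alone delimits the blocks. A trivial sketch:

    def classify(n):
        if n < 0:
            return "negative"       # indented: inside the if
        return "non-negative"       # dedented: the branch has ended -- no braces, no 'end'

No BEGIN, no END, no curly braces; the physical appearance of the program is its logical structure.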

On a lighter note, Landin also coined the term syntactic sugar, a fact I learned only while reading about Landin after his passing. What whimsy! A good name is sometimes worth as much as a good idea. I join Hoare in praising Landin for showing him the elegance of recursion, but I reserve a little extra praise for his giving us such a wonderful term to use when talking about the elegance of small languages.
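For a modern example of the term at work (my example, not Landin's): a Python list comprehension is sugar for a humble loop.

    squares = [n * n for n in range(10)]   # the sugared form

    squares = []                           # roughly what it desugars to
    for n in range(10):
        squares.append(n * n)

Nothing new becomes computable with the first form; it is simply sweeter to read and write.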

This isn't quite an END DO moment for me. I heard about John Backus throughout my undergrad and graduate careers, and his influence on the realization of the compiler has affected me deeply for as long as I've been a student of computer science. I came to Landin later, through his theoretical contributions. Yet it's interesting that they shared a deep appreciation for functional languages. For much of the discipline's history, functional programming has remained within the province of the academic, looked upon disdainfully by practitioners as impractical, abstract, and a distraction from the real business of programming. Now the whole programming world is atwitter with new languages, and new features for old languages, that draw on the abstraction, power, and beauty of functional programming. The world eventually catches up with the ideas of visionaries.

An e-mail message from Edmund Robinson shared the sad news of Landin's passing with many people. In it, Robinson wrote:

The ideas in his papers were truly original and beautiful, but Peter never had a simplistic approach to scientific progress, and would scoff at the idea of individual personal contribution.

Whatever Landin's thoughts about "the idea of individual personal contribution", the computing world owes him a debt for what he gave us. Read "The Next 700 Programming Languages" in his honor. I am reading it again and plan next to look into his other major papers, to see what more he has to teach me.


Posted by Eugene Wallingford | Permalink | Categories: Computing