TITLE: Risk in Delivering Software AUTHOR: Eugene Wallingford DATE: May 08, 2007 7:54 PM DESC: ----- BODY:

You ever notice
how anyone driving slower than you is an idiot,
and anyone driving faster than you is a maniac?

-- George Carlin

I spent my time flying back from Montreal reading Bruce Schneier's popular article The Psychology of Security and had a wonderful time. Schneier is doing what any good technologist should do: trying to understand how the humans who use their systems tick. The paper made me harken back to my days studying AI and reasoning under uncertainty. One of the things we learned then is that humans are not very good at reasoning in the face of uncertainty, and most don't realize just how bad they are. Schneier studies the psychology of risk and probabilistic reasoning with the intention of understanding how and why humans so often misjudge values and trade-offs in his realm of system security. As a software guy, my thoughts turned in different directions. The result will be a series of posts.

To lead off, Schneier describes a couple of different models for how humans deal with risk. Here's the standard story he uses to ground his explanation:
Here's an experiment .... Subjects were divided into two groups. One group was given the choice of these two alternatives:

Alternative A: A sure gain of $500.
Alternative B: A 50% chance of gaining $1,000.

The other group was given the choice of:

Alternative C: A sure loss of $500.
Alternative D: A 50% chance of losing $1,000.

The expected values of A and B are the same, likewise C and D. Naively, then, we might expect people to be indifferent within each pair. But some people prefer "sure things", while others prefer to gamble. According to traditional utility theory from economics, we would expect people to choose A and C (the sure things) at roughly the same rate, and B and D (the gambles) at roughly the same rate. But they don't...
But experimental results contradict this. When faced with a gain, most people (84%) chose Alternative A (the sure gain) of $500 over Alternative B (the risky gain). But when faced with a loss, most people (70%) chose Alternative D (the risky loss) over Alternative C (the sure loss).
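The claim that "the expected values are the same" is worth making concrete. Here is a minimal sketch in Python; the stakes for Alternatives B and D (a 50% gamble on $1,000) are assumed so that each gamble's expected value matches its sure counterpart, as the experiment requires:

```python
def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# The four alternatives; B's and D's stakes ($1,000 at 50%) are assumed
# so that each gamble matches its sure counterpart in expected value.
A = [(1.0, 500)]               # a sure gain of $500
B = [(0.5, 1000), (0.5, 0)]    # a 50% chance of gaining $1,000
C = [(1.0, -500)]              # a sure loss of $500
D = [(0.5, -1000), (0.5, 0)]   # a 50% chance of losing $1,000

print(expected_value(A), expected_value(B))   # both 500.0
print(expected_value(C), expected_value(D))   # both -500.0
```

A rational expected-value maximizer sees no difference within either pair; the interesting part is that people reliably do.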
This gave rise to something called prospect theory, which "recognizes that people have subjective values for gains and losses". People have evolved to prefer sure gains to potential gains, and potential losses to sure losses. If you live in a world where survival is questionable and resources are scarce, this makes a lot of sense. But it also leads to interesting inconsistencies that depend on our initial outlook. Consider:
In this experiment, subjects were asked to imagine a disease outbreak that is expected to kill 600 people, and then to choose between two alternative treatment programs. Then, the subjects were divided into two groups. One group was asked to choose between these two programs for the 600 people:

Program A: 200 people will be saved.
Program B: There is a one-third probability that 600 people will be saved, and a two-thirds probability that no one will be saved.

The second group of subjects were asked to choose between these two programs:

Program C: 400 people will die.
Program D: There is a one-third probability that no one will die, and a two-thirds probability that 600 people will die.

As before, the expected values of A and B are the same, likewise C and D. But in this experiment A==C and B==D -- they are just worded differently. Yet the human bias toward sure gains and potential losses holds true, and we reach an incongruous result: people overwhelmingly prefer Program A and Program D in their respective choices!

While Schneier looks at how these biases apply to the trade-offs we make in the world of security, I immediately began thinking of software development, and especially the so-called agile methods.

First let's think about gains. If we think not in terms of dollars but in terms of story points, we are in a scenario where gain -- an additive improvement to our situation -- is operative. It would seem that people ought to prefer small, frequent releases of software to longer-term horizons. "In our plan, we can guarantee delivery of 5 story points each week, determined weekly as we go along, or we can offer a 60% chance of delivering an average of 5 story points a week over the next 12 months." Of course, "guaranteeing" a certain number of points a week isn't the right thing to do, but we can drive our percentage up much closer to 100% the shorter the release cycle, and that begins to look like a guarantee. Phrased properly, I think managers and developers ought to be predisposed by their psychology to prefer smaller cycles. That is the good bet, evolutionarily, in the software world; we all know what vaporware is.

What about losses? For some reason, my mind turned to refactoring here. Now, most agile developers know that refactoring is a net gain, but it is usually phrased in terms of risk and loss (of immediate development time). Phrased as "Refactor now, or maybe pay the price later," this choice falls prey to the human preference for potential losses over sure losses. No wonder selling refactoring in these terms is difficult!
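The refactoring trade-off can be put in the same expected-value terms as Schneier's experiments. The numbers below are entirely made up for illustration: refactoring now is a sure loss of two days, while deferring is a gamble that usually costs nothing but occasionally demands a much larger rework.

```python
def expected_value(outcomes):
    """Expected value of a list of (probability, cost) pairs."""
    return sum(p * cost for p, cost in outcomes)

# Hypothetical numbers, chosen only to illustrate the framing.
refactor_now = [(1.0, -2)]             # a sure loss: 2 days spent refactoring
defer = [(0.7, 0), (0.3, -10)]         # a gamble: usually free, sometimes 10 days of rework

print(expected_value(refactor_now))    # -2.0
print(expected_value(defer))           # -3.0
```

Even when the gamble is worse on average, as it is here, prospect theory predicts that many of us will take it, because a potential loss feels better than a sure one.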
People are willing to risk carrying design debt, even if they have misjudged the probability of paying a big future cost. Maybe design debt and the prospect of future cost are the wrong metaphor for helping people see the value of refactoring.

But there is one more problem: optimism bias. It turns out that people tend to believe that they will outperform other people engaged in the same activity, and we tend to believe that more good will happen to us than bad. Why pay off design debt now? I'll manage the future trajectory of the system well enough to overcome the potential loss. We tend to underestimate both the magnitude of a coming loss and the probability of incurring a loss at all. I see this in myself, in many of my students, and in many professional developers. We all think we can deliver n LOC this week even though other teams can deliver only n/2 LOC a week -- and maybe even if we ourselves delivered only n/2 LOC last week. Ri-i-i-i-ight.

There is a lot of cool stuff in Schneier's paper. It offers a great tutorial on the fundamentals of human behavior and biases when reacting to and reasoning about risk and probability. It is well worth a read, if only to consider how these findings apply in other areas. I plan a few more posts looking at applications in software development and CS education. -----