TITLE: Skepticism and Experiment AUTHOR: Eugene Wallingford DATE: September 04, 2009 4:09 PM DESC: ----- BODY: There's been a long-running thread on the XP discussion list about a variation of pair programming suggested by a list member, on the basis of an experiment run in his shop and documented in a simple research report. One reader is skeptical; he characterized the research as 'only' an experience report and asked for more evidence. Dave Nicolette responded:
Point me to the study that proves...point me to the study that proves...point me to the study that proves...<yawn> There is no answer that will satisfy the person who demands studies as proof. Studies aren't proof. Studies are analyses of observed phenomena. The thing a study analyzes has already happened before the study is performed. ... The proof is in the doing.
This is one of those issues where each perspective has value, and each even dominates the other under the right conditions. Skepticism is good. Asking for data before changing how you, your team, or your organization behaves is reasonable. Change is hard enough on people when it succeeds; when it fails, it wastes time and can dispirit a team. At times, skepticism is an essential defense mechanism. Besides, if you are happy with how things work now, why change at all? The bigger and more costly the change, the more valuable skepticism becomes.

In the case of the XP list discussion, though, we see a different set of conditions. The practice being suggested has been tried at one company, so its research report really is "just" an experience report. But that's fine. We can't draw broadly applicable conclusions from an experiment of one anyway, at least not if we want the conclusions to be reliable. This sort of suggestion is really an invitation to experiment: we tried this, it worked for us, and you might want to try it. If you are dissatisfied with your current practice, then you might try the idea as a way to break out of a suboptimal situation. If you are satisfied enough but have the freedom and inclination to improve, then you might try the idea on a lark. When the cost of giving the practice a test drive is small enough, it doesn't take much of a push to do so.

What a practice like this needs is for a lot of people to try it out, under both similar and different conditions, to find out whether the first trial's success indicates a generally useful practice or was a false positive. That's where the data for a real study comes from!

This sort of practice is one that professional software developers must try out. I could run an experiment with my undergraduates, but they are not (yet) like the people who will use the practice in industry, and the conditions under which they develop are rarely like the conditions in industry.
We could gain useful information from such an experiment (especially about how undergrads work and think), but the real proof will come when teams in industry use the practice and we see what happens. Academics can add value after the fact, by collecting information about the experiments, analyzing the data, and reporting the results to the world. That is one of the things academics do well.

I am reminded of Jim Coplien's exhortations, a decade and more ago, for academic computer scientists to study existing software artifacts in conjunction with practitioners, to identify useful patterns and document the pattern languages that give rise to good and beautiful programs. While some CS academics continue to do work of this sort -- the area of parallel programming patterns is justifiably hot right now -- I think we in academic CS have missed an opportunity to contribute as much as we might have to software development.

We can't document a practice until it has been tried. We can't document a pattern until it recurs. First we do, then we document. This echoes a blog entry by Perryn Fowler on a similar relationship between "best practices" and success:
Practices document success; they don't create it.
A tweet before its time... As I have scanned this thread on the discussion list, I have found it interesting that some agile people are as skeptical about new agile practices as non-agile folks were (and still are) about agile practices themselves. You would think we should still be practicing XP exactly as it was writ nearly a decade ago. Even agile behavior tends toward calcification if we don't remain aware of why we do what we do. -----