TITLE: A Good Time to Be Agile AUTHOR: Eugene Wallingford DATE: October 20, 2008 7:31 PM DESC: ----- BODY: In recent weeks, the financial markets of the world have entered "interesting times". There is a great story to tell about the role that computational models have played in the financial situation we face, but I have been more intrigued by another connection to computing, more specifically to software development. It turns out that these bad times for the economy are a good time to "be agile".

Paul Graham writes that this is a good time to launch a start-up, in part because a start-up can be more nimble and consume fewer resources than a big software shop. A new product can grow in small steps in a market where resources are limited.

Tim Bray expands on that idea in his post, A Good Time for Agility. It may be difficult to get major projects and corresponding big budgets approved in tough times, because most execs will be focused on cost containment and surviving to the quarterly report. But...
The classic Agile approach, where you pick two or three key features, spec'em out with test suites that involve the business side, build'em and test'em, and then think about maybe going back for the next two or three, well, that's starting to look awfully attractive.
Small steps and small up-front expense draw less attention than BDUF and multi-month budgets. And if they lead to concrete, measurable improvements, they have a greater chance of sticking. They might even lead to important new software.

The third example I read recently came in a posting to the XP mailing list, the link to which I seem to have lost. The gist was straightforward: the writer worked in the software arm of a major financial institution, and having previously adopted agile practices enabled his shop to shift direction on short notice in response to the market crash. They were not in the middle of a major project aimed at a specific market but in the middle of the ongoing creation of loan products. When the market for their usual products deteriorated, they were able to begin delivering software to a new market relatively quickly. This did not require a new major project, only a twist on their current trajectory.

This shouldn't surprise us. Agile approaches allow us to manage risk and change at finer levels of granularity, and in a time of major change the massive dinosaurs will be at a disadvantage against more nimble species.

Not all the news is so rosy. Bureaucracy can still dominate an environment. Last Friday, an alumnus of our department gave a talk for our current students on how not to stink in industry. His advice was uniformly practical, with many of his technical points reminiscent of The Pragmatic Programmer. But in response to a question about XP and agile practices, his comments were not so positive. He and his team have not yet figured out how to do planning for XP projects, so they are left with tool-specific XP practices such as pair programming and testing early and often. I think that I can help him get a grip on XP-style planning, and I offered to do so, but his problem goes deeper, to something he has little control over: his team's customers expect big project plans and fixed-price "contracts". This is not a new problem.
I was fortunate to visit RoleModel Software back when Ken Auer was first building it, and one topic of discussion within the company was how to educate clients about a new way of planning and budgeting for projects, and how to shift the culture when all of its competitors were doing the same old thing. His potential customers had one way of managing the risk they faced, and that was to do things the usual way, even if that led to software that was off target, over budget, and over time. I don't know much more about the issue than this and need to see whether anyone has written of positive experiences with it in industry. My former student works for a government agency, which perhaps makes the bureaucracy hard to move by law or administrative rule, rather than by market forces.

I feel for him as my department continues to work on outcomes assessment. University mandates are my job's version of "the customer demands a big project plan". (Outcomes assessment is an academic form of unit testing for a curriculum.) As we try to enact an outcomes assessment plan in small steps, we face a requirement to produce a BDUF plan by the end of this year. It's hard to figure out what will work best for us if we have to predict what is best up front. Some will tell us that the academic world understands outcomes assessment well enough to design a suitable plan from scratch, but many of us in the trenches disagree. It is certainly possible to design from scratch a plan that looks familiar to other people, but who knows whether that is what will help this department and this faculty steer its programs most effectively? Complete operational plans of this sort often end up being as useful as many of their software design counterparts. Worse, mandates for such plans also tend to be counterproductive: when the last big plan fails, and when the administration doesn't follow through by holding departments accountable for fixing their plans, faculty learn to mistrust both the mandates and the plans.
That is how the link in the previous paragraph can point to a post nearly two years old, yet my department still does not have an effective assessment plan in place: the faculty have a hard time justifying the time and energy to take on such a big project if it is likely to fail, or if not developing a plan has no consequences. I am hoping that we can use the most recent mandate as an opportunity to begin growing an assessment regimen that will serve us well over time. I believe in unit tests and continuous feedback. I'm also willing to argue to the administration that an "incomplete" but implementable plan is better than a complete plan with no follow-through. As the software guys are saying, this is a good time to be agile. -----