Session 8

Uncertainty in Design


810:171

Software Systems


Recap Exercise 11: Uncertainty in the Design Process

In Exercise 10, we began to explore how we could take interface design issues into account during the software development life cycle. We found that interface issues require, among other things, early and frequent contact with live users. In Exercise 11, you are exploring a basic question that faces all designers: when do I know that my product is good enough?


Summary from Exercise 11

Is it okay for software to have flaws?

Arguments in favor of 100% correctness usually include appeals to mathematical proofs of correctness. Besides, who cares about snow, or Ivory soap?

Arguments against the ideal of 100% correctness usually focus on the complex environments in which software systems often operate, which can present an effectively infinite number of unique inputs to the system. Our focus should be on graceful acceptance of and recovery from error, not on avoidance alone.
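To make "graceful acceptance and recovery" concrete, here is a minimal sketch (in Python; the function name and record format are hypothetical, not from the exercise). Rather than trying to anticipate every bad input in advance, the code accepts that errors will occur, records them, and keeps going:

```python
def total_valid_amounts(records):
    """Sum the numeric amounts in records, recovering from bad entries
    instead of refusing to run. Returns (total, list_of_errors)."""
    total = 0.0
    errors = []
    for i, rec in enumerate(records):
        try:
            total += float(rec)
        except (TypeError, ValueError):
            # Graceful recovery: note the failure and move on,
            # rather than crashing on the first malformed entry.
            errors.append((i, rec))
    return total, errors
```

Calling `total_valid_amounts(["1.5", "oops", 2])` yields a total of 3.5 along with `[(1, "oops")]`, so the caller learns what failed without the whole computation being abandoned.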

Class discussion considered the continuum between doing nothing and doing a perfect job. Clearly, most of our efforts will fall somewhere in between. Some companies have a reputation for releasing products relatively early on this continuum -- yet some of you thought this a wise strategy. It gets your product to the market earlier, perhaps first, and takes advantage of massive parallelism in testing the software. Such a strategy is often more cost-effective than doing in-house testing with a small team, and it may even benefit the customer, who has a (nearly correct) product sooner and can thus begin the long road to integrating it with other software.

In what circumstances is the Ivory Snow theory a pragmatic one, and in what circumstances is it merely a rationalization of sloppiness?

Software products for which Ivory Snow seems to make the most sense: entertainment packages; anything where the cost of failure is low (say, word processing). We must keep in mind that the person to judge cost of failure is the user, not the developer. So we need to ask ourselves: Who are the users? How do they judge and handle failure?

Software products for which Ivory Snow does not make sense: any software with life-critical functionality; complex systems in which a small error can propagate into a large error, especially when the observable failure is no longer proximate to the cause of the original error; military applications; financial applications.
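A tiny illustration of how a small error can propagate until the observable failure is far from its cause (a standard floating-point example, offered here as an analogy rather than anything from the exercise): each addition below introduces a rounding error too small to notice, yet the accumulated result fails an exact comparison thousands of steps later.

```python
# Accumulate 0.1 ten thousand times. Each addition carries a tiny
# representation/rounding error; no single step is "the bug".
total = 0.0
for _ in range(10_000):
    total += 0.1

# The observable failure surfaces only here, far from any one addition:
print(total == 1000.0)  # False -- the total is only approximately 1000
```

The point for design: in a system like this, checking the final answer tells you almost nothing about where the original error entered, which is exactly why "ship it slightly flawed" is dangerous for complex systems.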

Another interesting point that students have raised in the past: The theory makes sense when the developer controls the environment in which the software will run, say, custom hardware. It doesn't make sense when the environment is out of the developer's control. (How often do you imagine that is?)

The production of software is different from the production of most mechanical goods. For one thing, errors in a program are replicated every time the program is copied. (Errors in typical production can also be replicated, in the case of a bad mold, but those aren't the real problem. The real problem is statistical variation in the replication of the process.) Software is different from a lot of goods in a lot of ways, which makes me skeptical of the premise underlying "software engineering"...

The bottom line for some of you seemed to be: You can't make perfect software, so weigh the costs of failure against the costs of improving the failure rate. (Someone else equated this to joining the Mafia. :-)


On Perfection

Epstein uses a quote nearly identical to one I frequently use: "Perfection is the enemy of the good." It seems to me that Epstein uses it in a pejorative way -- that such a lackadaisical attitude can lead to software failure. I'd like you to think about how one might make this assertion in a serious but pragmatic way. When have we done enough? What standards do we use to make the judgment?

"The perfect is the enemy of the good." By this, people often mean that striving for perfection results in the product never being finished. (There is always one more improvement to make...) But I think that there are two other senses in which this adage is true.

Design is about trade-offs. Optimizing any one goal is not usually a good idea, even when possible.


Exercise 12 : Critiquing Interface Analyses

Goals

  1. To understand better the theories, principles, and heuristics underlying the development of good user interfaces that you read about in Chapters 1 and 2 of Shneiderman.
  2. To understand better how to write a good critique.

Tasks

At the End

  1. Your group submits a package containing your Guidelines, your list of "best things", your lists from Task 1, and the critiques each of you brought to class.
  2. Several groups will present their Guidelines to the class.


Summary from Exercise 12

"But I study computer science. I don't want to write." This common misperception has grounded many a potential programmer. In your professional career, you will write constantly:

... all of which will be read by people. Many will be read by non-computer scientists, too, so you can't just aim at a technical audience. You should take seriously the part of your profession that calls for you to be a writer.

I don't do this to you without doing it to myself: [my experience at the Pattern Languages of Programming conference] and later this semester in this class!

Finally, students always ask me for examples of past papers, exams, programs, etc., so that they can prepare before doing their own. This exercise goes the usual answer one better by letting you review other student work in real time, consider what makes a paper stronger or weaker, and apply these ideas to your own work.

----------

Guidelines should be active advice whenever possible, not just passive descriptions. "Be clear" sounds nice but gives no guidance at all for how to achieve clarity. "Don't"s can tell someone about a class of problems, but advice that helps the writer avoid a problem by doing something is more useful.

Usually, the vast majority of student comments about others' papers relate to style, and to some rather elementary strengths and weaknesses at that. Obvious advice is especially important when no one seems to be following it! :-) So, in that respect, your comments were helpful reminders of the base-line expectations we all place on papers:

The fact that so many groups made these comments leads me to believe that many of the commentators were probably as guilty of these transgressions as the authors of the reviewed papers. So my advice to you would be, "Physician, heal thyself." Now that you have been reminded of these ideas, try to follow them as you write papers in the future.

My favorite comment -- perhaps because I frequently find myself saying the same thing -- was "explain, don't just make a claim".


Eugene Wallingford ==== wallingf@cs.uni.edu ==== February 1, 2001