Session 4

Software and Professional Responsibility


810:171

Software Systems


Recap Exercise 5: Why Interfaces Don't Work

In Exercise 4, we considered some of Norman's ideas about the interface between software and users. In Exercise 5, you explored the interface between man and machine in terms of your own experience.


Summary of Discussion

Have technological advances made many or all of the concerns that Norman expressed in his paper "go away"? I don't think so. Some of you raised concerns about the interfaces to the devices that you considered in Exercise 5, and most of those devices are not nearly as complex as most software for a digital computer! Last fall, we saw an example of how usability matters even for non-electronic devices, in the problems with ballot design and voting machine technology that faced Florida in the presidential election.

Usability and interfaces are about people, not machines.

An example of "the interface getting in the way of the task" from my experience: releasing a CS student's advising hold.

An example of a system more complex than it needs to be, with an interface that contributes to the problem, again from my experience: the voice-mail system on campus.

An example of a system I wish I didn't have to be "trained" to use: our photocopier.


Exercise: Programmer Responsibility for Software Failure

Context

You have begun to think about the responsibilities that a system developer (of both software and hardware) has to users of the system. You have read the first four newspaper articles in Pam Pulitzer's The Killer Robot Papers.

Goals

  1. To attempt to define concretely some of the responsibilities that the developer of a tool has to the user of the tool.
  2. To consider areas in which responsibility is unclear.

Tasks

Work in teams of three or four based on the number in the upper right-hand corner of this page.

  1. Bart Matthews indicated that he "trusts" the robot. Where do you think this trust comes from? Is his trust justified?

  2. Develop a list of criteria that could be used to decide whether a computer programmer should be held accountable for a system failure such as the killer robot. Imagine that an ethics panel intends to use your list to decide whether or not to censure a programmer, or that a legal body intends to use your list to decide whether to press legal charges against a programmer. Be as specific as possible! Feel free to state factors that would mitigate the programmer's responsibility, but keep in mind that the purpose of the list is to identify when a programmer should be held accountable.

  3. Discuss the analogy between "accidental death by firearm" and "accidental death by poorly programmed robot." In what circumstances does the analogy hold, if ever? In what circumstances does the analogy break down, if ever? Take the question seriously; it may be trickier than you think!

At the End

  1. Turn in a sheet containing your list of criteria and your analysis of the legal analogy.
  2. Each group may present a portion of its conclusions.


Summary from Exercise

Was Bart Matthews' faith blind?

The institution for which the programmer works controls many variables outside the programmer's hands, including testing the software and the programmer's working conditions. For example, was the programmer pressured to meet a deadline? What is the programmer's professional duty when pressured to release a product that is not ready?

The client institution also controls many variables outside the programmer's hands, including training of the user (programmer's institution, too?) and the user's working conditions.

Still, the programmer does control some important factors: Is he competent in the domain being programmed? If not, did he seek help from appropriate experts at appropriate times? Was he aware of a defect in the program? What was his intent?

Don't confuse assigning blame to others who share responsibility with reducing or eliminating the potential liability of the programmer. In the case of malicious intent on the part of the programmer, assigning some blame to the testing team may make sense in some cases, but does that mean the programmer escapes some or all of the liability for his malicious intent and actions? I hope not. Once we admit liability in that case, we face the real question here: in the absence of malicious intent, what other factors signal programmer liability?

Most of you didn't seem to buy the "accidental death by firearm" analogy. Give it some thought, though, because I think that the issue is less obvious than it seems to you. Not that I think that it is a great analogy, or that it holds all of the time--I just think that you would have to work harder to defeat such an argument than you may have here. One idea: What is the role analogous to the computer programmer in the firearm scenario? What responsibilities does the analogous person have in the firearm scenario, and why might the programmer be assigned similar responsibilities in the robot scenario?


Eugene Wallingford ==== wallingf@cs.uni.edu ==== January 18, 2001