In Exercise 4, we considered some of Norman's ideas about the interface between software and users. In Exercise 5, you explored the interface between man and machine in terms of your own experience.
Have technological advances made many or all of the concerns that Norman expressed in his paper "go away"? I don't think so. Some of you raised concerns about the interfaces to the devices that you considered in Exercise 5, and most of those devices are not nearly as complex as most software for a digital computer! Last fall, we saw an example of how usability matters even for non-electronic devices, in the problems with ballot design and voting machine technology that faced Florida in the presidential election.
Usability and interfaces are about people, not machines.
An example of "the interface getting in the way of the task" from my experience: releasing a CS student's advising hold.
An example of a system more complex than it needs to be, with an interface that contributes to the problem, again from my experience: the voice-mail system on campus.
An example of a system I wish I didn't have to be "trained" to use: our photocopier.
You have begun to think about the responsibilities that a system developer (of both software and hardware) has to the users of the system. You have read the first four newspaper articles in Pam Pulitzer's The Killer Robot Papers.
Work in teams of three or four based on the number in the upper right-hand corner of this page.
Was Bart Matthews' faith blind?
The institution for which the programmer works controls many variables outside the programmer's hands, including testing of the software and the programmer's working conditions. For example, was the programmer pressured to meet a deadline? What is the programmer's professional duty when pressured to release a product that is not ready?
The client institution also controls many variables outside the programmer's hands, including training of the user (programmer's institution, too?) and the user's working conditions.
Still, the programmer does control some important factors: Is he competent in the domain being programmed? If not, did he seek help from appropriate experts at appropriate times? Was he aware of a defect in the program? What was his intent?
Don't confuse assigning blame to others who share responsibility with reducing or eliminating the potential liability of the programmer. In the case of malicious intent on the part of the programmer, assigning some blame to the testing team may make sense in some cases, but does it mean that the programmer escapes some or all of the liability for his malicious intent and its execution? I hope not. Once we admit liability in that case, we face the real question here: in the absence of malicious intent, what other factors signal programmer liability?
Most of you didn't seem to buy the "accidental death by firearm" analogy. Give it some thought, though, because I think that the issue is less obvious than it seems to you. Not that I think that it is a great analogy, or that it holds all of the time--I just think that you would have to work harder to defeat such an argument than you have so far. One idea: What is the role analogous to the computer programmer's in the firearm scenario? What responsibilities does the analogous person have in the firearm scenario, and why might the programmer be assigned similar responsibilities in the robot scenario?