TITLE: Don Norman on Cantankerous Cars
AUTHOR: Eugene Wallingford
DATE: April 26, 2007 7:05 PM
DESC:
-----
BODY:
Yesterday afternoon I played professional hooky and rode with another professor and a few students to Ames, Iowa, to attend the fourth HCI Forum, Designing Interaction 2007, sponsored by Iowa State's Virtual Reality Applications Center. This year, the forum kicks off a three-day Emerging Technologies Conference that features several big-name speakers and a lot of the HCI research at ISU.

I took an afternoon "off" to hear the keynote by Donald Norman, titled "Cautious Cars and Cantankerous Kitchens". It continues the story Norman began telling years ago, from his must-read The Design of Everyday Things to his in-progress The Design of Future Things.

"Let me start with a story." The story is about a time when he is driving, feeling fine, and his wife feels unsafe. He tries to explain to her why everything is okay. "New story." Same set-up, but now it's not his wife reacting to an unsafe feeling, but his car itself. He pays attention. Why does he trust his car more than he trusts his wife? He thinks it's because, with his wife, conversation is possible. So he wants to talk. Besides, he feels in control. When conversation is not possible, and the power lies elsewhere, he acquiesces. But does he "trust"? In Norman's mind, I think the answer is 'no'.

Control is important, and not always in the way we think. Who has the most power in a negotiation? Often (Norman said always), it is the person with the least power. Don't send your CEO; send a line worker. Why? No matter how convincing the other side's arguments are, the weakest participant may well have to say, "Sorry, I have my orders." Or at least "I'll have to check with my boss".

It's common these days to speak of smart artifacts -- smart cars, houses, and so on. But the intelligence does not reside in the artifact. It resides in the head of the designer. And when you use the artifact, the designer is not there with you. The designer would be able to handle unexpected events, even by tweaking the artifact, but the artifact itself can't. "There are two things about unexpected events... They are unexpected. And they always happen."

Throughout his talk, Norman compared driving a car to riding a horse, driving a horse and carriage, and then to riding a bike. The key to how well these analogies work lies in the three different levels of engagement that a human has: visceral, behavioral, and reflective. Visceral is biological, hard-coded in our brains, and so largely common to all people. It recognizes safe and dangerous situations. Behavioral refers to skills and "compiled" knowledge, knowledge that feels like instinct because it is so ingrained. Reflective is just that, our ability to step outside of a situation and consider it rationally. There are times for reflective engagement, but hurtling around a mountain curve at breakneck speed is not one of them.

Norman suggested that a good way to think of designing intelligent systems is to think of a new kind of entity: (human + machine). The (car + driver) system provides all three levels of engagement, with the car providing the visceral intelligence and the human providing the behavioral and reflective intelligences. Cars can usually measure most of what makes our situations safe or dangerous better than we can, because our visceral intelligence evolved under very different circumstances than the ones we now live in. But the car cannot provide the other levels of intelligence, which we have evolved as much more general mechanisms.
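If it helps to see that division of labor concretely, here is a toy sketch in Python. It is my illustration, not anything Norman showed, and every name and threshold in it is made up: the car's layer merely notices viscerally "unsafe" readings and raises warnings, while deciding what to do about them stays with the driver's behavioral skill and reflective judgment.

    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        following_distance_m: float   # gap to the car ahead, in meters
        closing_speed_mps: float      # how fast that gap is shrinking, m/s
        lane_offset_m: float          # drift from the center of the lane

    class VisceralLayer:
        """The car's contribution: notice 'unsafe' quickly, without judgment."""
        def alerts(self, r):
            warnings = []
            # a time-to-contact under two seconds feels viscerally dangerous
            if r.following_distance_m / max(r.closing_speed_mps, 0.1) < 2.0:
                warnings.append("closing too fast on the car ahead")
            if abs(r.lane_offset_m) > 0.5:
                warnings.append("drifting out of the lane")
            return warnings

    class Driver:
        """The human's contribution: behavioral skill and reflective judgment."""
        def respond(self, warnings):
            if not warnings:
                return "carry on"
            # the car only signals; deciding what to do stays with the person
            return "driver decides how to react to: " + ", ".join(warnings)

    if __name__ == "__main__":
        car, driver = VisceralLayer(), Driver()
        reading = SensorReading(following_distance_m=8.0,
                                closing_speed_mps=6.0,
                                lane_offset_m=0.1)
        print(driver.respond(car.alerts(reading)))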
Norman described several advances in automobile technology that are in the labs or even available on the road: cars with adaptive cruise control; a Lexus that brakes when its on-board camera senses that the driver isn't paying attention; a car that follows lanes automatically; a car that parks automatically, both parallel and head-in. Some of these sound like good ideas, but...

In Norman's old model of users and tasks, he spoke of the gulfs of evaluation and execution. In his thinking these days, he speaks of the knowledge gap between human and machine, especially as we more and more think of machines as intelligent. The problem, in Norman's view, is that machines automate the easy parts of a task, and they fail us when things get hard and we most need them. He illustrated his idea with a slide titled "Good Morning, Silicon Valley" that read, in part, "... at the very moment you enter a high-speed crisis, when a little help might come in handy, the system says, 'Here, you take it.'"

Those of us who used to work on expert systems and later knowledge-based systems recognize this as the brittleness problem. Expert systems were expert in their narrow niche only. When a system reached the boundary of its knowledge, its performance went from expert to horrible immediately. This differed from human experts and even humans who were not experts, whose performance tended to degrade more gracefully.

My mind wandered during the next bit of the talk... Discussion included ad hoc networks of cars on the road, flocking behavior, cooperative behavior, and swarms of cars cooperatively drafting. Then he discussed a few examples of automation failures. The first few were real, but the last two were fiction -- things he thinks may be coming, in one form or another.

Norman then came to another topic familiar to anyone who has done AI research or thought about AI for very long. The real problem here is shared assumptions, what we sometimes now call "common ground". Common ground in human-to-human communication is remarkably good, at least when the people come from cultures that share something in common. Common ground in machine-to-machine communication is also good, sometimes great, because it is designed. Much of what we design follows a well-defined protocol that makes explicit the channel of communication. Some protocols even admit a certain amount of fuzziness and negotiation, again within some prescribed bounds. But there is little common ground in communication between human and machine. Human knowledge is so much richer, deeper, and more interconnected than what we are yet able to provide our computer programs. So humans who wish to communicate with machines must follow rigid conventions, made explicit in language grammars, menu structures, and the like. And we aren't very good at following those kinds of rules.

Norman believes that the problem lies in the "middle ground". We design systems in which machines do most or a significant part of a task and in which humans handle the tough cases. This creates expectation and capability gaps. His solution: let the machine do all of a task -- or nothing. Anti-lock brakes were one of his examples. But what counts as a complete task? It seems to me that this solution is hard to implement in practice, because it's hard to draw a boundary around what is a "whole task".
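The "Here, you take it" hand-off is easy to caricature in code. The sketch below is mine, not Norman's, and purely hypothetical -- invented names and thresholds -- but it shows the shape of the brittleness problem: a lane keeper that performs expertly inside the narrow envelope its designers anticipated and has nothing at all to offer the moment conditions leave that envelope.

    from dataclasses import dataclass

    @dataclass
    class RoadState:
        speed_kph: float
        curve_radius_m: float
        visibility_m: float

    # the envelope the designers anticipated (invented numbers)
    MAX_SPEED_KPH = 120.0
    MIN_CURVE_RADIUS_M = 200.0
    MIN_VISIBILITY_M = 50.0

    def lane_keeper(state):
        """Automates the easy part of the task; quits at the hard part."""
        within_envelope = (state.speed_kph <= MAX_SPEED_KPH and
                           state.curve_radius_m >= MIN_CURVE_RADIUS_M and
                           state.visibility_m >= MIN_VISIBILITY_M)
        if within_envelope:
            return "steering automatically"          # expert performance...
        return "DISENGAGED -- here, you take it"     # ...until the boundary

    if __name__ == "__main__":
        print(lane_keeper(RoadState(100.0, 800.0, 200.0)))  # easy highway: handled
        print(lane_keeper(RoadState(110.0, 150.0, 30.0)))   # fog, sharp curve: hand-off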
Norman told a short story about visiting Delft, a city of many bicycles. As he and his guide were coming to the city square, which is filled with bicycles, many moving fast, his guide advised him, "Don't try to help them." By this, he meant not to slow down or speed up to avoid a bike, not to guess the cyclist's intention or ability. Just cross the street. Isn't this dangerous? Not as dangerous as the alternative! The cyclist has already seen you and planned how to get through without injuring you or himself. If you do something unexpected, you are likely to cause an accident! Act in the standard way so that the cyclist can solve the problem. He will.

This story led into Norman's finale, in which he argued that automation should be predictable, self-explaining, and assistive.

The Delft story illustrated that the less flexible, less powerful party should be the more predictable party in an interaction. Machines are still less flexible than humans and so should be as predictable as possible. The computer should act in the standard way so that the human user can solve the problem. She will.

Norman illustrated self-explaining with a personal performance of the beeping back-up alarm that most trucks have these days. Ever have anyone explain what the frequency of the beeps means? Ever read the manual? I don't think so.

The last item on the list -- assistive -- comes back to what Norman has been preaching forever and what many folks who see AI as impossible (or at least not far enough along) have also been saying for decades: Machines should be designed to assist humans in doing their jobs, not to do the job for them. If you believe that AI is possible, then someone has to do the research to bring it along. Norman probably disagrees that this will ever work, but he would at least say not to turn immature technology into commercial products and standards now. Wait until they are ready.

All's I know is... I could really have used a car that was smarter than its driver on Tuesday morning, when I forgot to raise my still-down garage door before putting the car into reverse! (Even 20+ years of habit sometimes fails, even under predictable conditions.)
-----