Work in teams of three or four people based on the number in the upper right-hand corner of this page.
As a team, develop a set of grading criteria for each question. Assume that each question is worth 10 points. Identify the components of a "perfect" answer and decide how many points each component is worth. Are there answer elements that you would consider above and beyond the call of duty? If so, identify them and say how you would evaluate them in assigning a grade.
As you grade these answers, develop a list of guidelines for exam takers. At the end, assign a grade (0-10) to each answer.
Submit your grading schemes and your graded exam answers.
Build a master list of guidelines for exam takers.
Why did I ask you to do this?
Criteria for grading Question 1: Does the answer have a logical form, with claim and evidence to support the claim? Is the answer relevant--that is, does it answer the question? What is the key point, for or against?
Criteria for grading Question 2: ...
Which question is easier to grade?
How can you support an answer? With examples, principles, analogies...
Issues in grading questions:
Which of these answers received the best grade on your criteria? Which answer do you think is best? (These aren't always the same...)
Issues in writing answers:
Other issues regarding the exam:
Work in teams of three or four people based on the number in the upper right-hand corner of this page.
Which of these tasks do you think would be the easiest to implement in a computer program? The hardest? Which of these do you think would be most beneficial for mankind if a software system could do it? The least beneficial?
Would the kinds of user interfaces we build today be good enough for this system? If yes, why? If no, in what ways would we want to change the form and quality of the interface? Would we want to use a different design-and-implementation life cycle to engineer these systems than the one we use for current software systems? Why or why not?
Submit your group's answers to the discussion questions.
We will answer any questions that come up during the exercise.
You are now reading stories in The Case of the Killer Robot that deal (ostensibly) with artificial intelligence. Our goal here is not to study AI in depth but rather to consider it from our perspective as creators of software systems: What are the implications for human-computer interaction? What are the implications for our professional responsibility?
A couple of years ago, AI of a sort reached the mass media when IBM's Deep Blue, a special-purpose chess playing computer, played two matches with world chess champion Garry Kasparov. Deep Blue won the second match and created a firestorm of questions about whether humans had been surpassed in the intellectual arena by a machine. What do you think?
Defining AI is difficult, because we aren't sure what we mean by "intelligence" when we talk about humans, let alone machines. Marvin Minsky, one of the founders of AI, defines AI as "making computers do what humans are good at". Others say, "programming a computer to do something that, if done by a human, we would consider intelligent". (Notice the subtle difference in these definitions.) Finally, the most tongue-in-cheek definition I've ever seen is, "if we can do it, then it's not AI". Kind of tough to make progress as a researcher in a world with such a mindset.
Here is my favorite "technical" definition: AI is the study of the computations that make it possible to perceive, reason, and act in a complex environment.
How does AI differ from psychology? Philosophy? The rest of CS? It shares questions of interest, and methods for seeking answers to those questions, with all three. But it is different from each.
What kinds of problems does AI study? Given the breadth of definitions, you should not be surprised to find that the problems we study come from a wide spectrum. The items in the exercise today give a pretty good sampling. Studying formal systems such as mathematics and game playing dominated early AI research, but now researchers tend to focus on more realistic tasks. (In many ways the formal tasks are too easy--because they are formally defined! The sketch below makes this concrete.)
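To see why a formally defined task is tractable, here is a minimal minimax search for tic-tac-toe (an illustrative Python sketch I wrote for these notes, not any particular research system). Because the rules completely specify the legal moves and the win condition, a few lines of exhaustive search play the game perfectly:

    # Minimax search for tic-tac-toe. The board is a 9-character string;
    # 'X' moves first and tries to maximize, 'O' tries to minimize.
    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

    def winner(board):
        """Return 'X' or 'O' if someone has three in a row, else None."""
        for a, b, c in LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        """Return (score, best_move): +1 if X wins, -1 if O wins, 0 draw."""
        w = winner(board)
        if w is not None:
            return (1 if w == 'X' else -1), None
        moves = [i for i, square in enumerate(board) if square == ' ']
        if not moves:
            return 0, None                      # board full: a draw
        best = None
        for m in moves:
            child = board[:m] + player + board[m + 1:]
            score, _ = minimax(child, 'O' if player == 'X' else 'X')
            if (best is None
                    or (player == 'X' and score > best[0])
                    or (player == 'O' and score < best[0])):
                best = (score, m)
        return best

    # With perfect play by both sides, tic-tac-toe is a draw:
    score, move = minimax(' ' * 9, 'X')
    print(score, move)                          # prints 0 and X's opening move

Contrast this with a "realistic" task such as understanding a sentence or recognizing a face, where no such complete formal specification of the problem exists--that is where most of the hard work in AI lies.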
What does an AI scientist do?
The key question I ask you to think about for this course is: How does AI affect the way we build interfaces and the kind of interfaces we build?
Keep in mind, too, that what AI seeks to do may seem magical today, but today's "common" software would have looked like magic to people in 1955, 1965, 1975, 1985--maybe even 1995!