Session 16

Thinking Backward and Forward


810:171 Software Systems


Exercise 23: Reconsidering the Content of Exam Questions

Goals

  1. To reconsider the issues of user-interface design and professional ethics.

  2. To understand a little better the task of exam-taking.

Tasks

Work in teams of three or four people, based on the number in the upper right-hand corner of this page.

  1. Here are two questions from Exam 1:

    As a team, develop a set of grading criteria for each question. Assume that each question is worth 10 points. Identify the components of a "perfect" answer and decide how many points each component is worth. Are there answer elements that you would consider above and beyond the call of duty? If so, identify them and say how you would evaluate them when assigning a grade.

  2. Use your grading scheme to evaluate a few sample answers to one of the questions from Exam 1. Be sure to follow the grading criteria you developed in Task 1. If you find that you need to modify the criteria, make notes about the changes and your reasons for them.

    As you grade these answers, develop a list of guidelines for exam takers. At the end, assign a grade (0-10) to each answer.

Results

Submit your grading schemes and your graded exam answers.

Build a master list of guidelines for exam takers.


Sample Exam Answers to the McMurdock Question


Summary from Exercise 23

Why did I ask you to do this?

Criteria for grading Question 1: Does the answer have a logical form, with a claim and evidence to support it? Is the answer relevant--that is, does it answer the question? What is the key point, for or against?

Criteria for grading Question 2: ...

Which question is easier to grade?

How can you support an answer? With examples, principles, analogies...

Issues in grading questions:

Which of these answers received the best grade on your criteria? Which answer do you think is best? (These aren't always the same...)

Issues in writing answers:

Other issues regarding the exam:


Exercise 24: Software Systems in the Future

Goals

  1. To understand better what artificial intelligence is.
  2. To consider the effects that AI will have on traditional software.
  3. To consider the extent to which AI will support new kinds of software.

Tasks

Work in teams of three or four people, based on the number in the upper right-hand corner of this page.

  1. Consider these things that a software system of the future might do:

    • play any board game better than any person
    • solve calculus word problems
    • recognize human faces
    • interact using the natural language of the user
    • recognize and generate spoken words
    • design a house
    • write a legal argument for use in a court of law
    • diagnose a human illness or some car trouble
    • read a company's financial statements to determine if it carries too much debt
    • plan a trip for six people over spring break
    • create the CS department's teaching schedule each semester
    • learn how to do any of the above better

    Which of these tasks do you think would be the easiest to implement in a computer program? The hardest? Which of these do you think would be most beneficial for mankind if a software system could do it? The least beneficial?

  2. Suppose that we have implemented a software system that can diagnose everyday ailments in human patients without assistance from a human doctor.

    Would the kinds of user interfaces we build today be good enough for this system? If yes, why? If no, in what ways would we want to change the form and quality of the interface? Would we want to use a different design-and-implementation life cycle to engineer these systems than the one we use for current software systems? Why or why not?

Results

Submit your group's answers to the discussion questions.

We will clarify any questions that you run into during the exercise.


Summary from Exercise 24

You are now reading stories in The Case of the Killer Robot that deal (ostensibly) with artificial intelligence. Our goal here is not to study AI in depth but rather to consider it as a creator of software systems: What are the implications for human-computer interaction? What are the implications for our professional responsibility?

A couple of years ago, AI of a sort reached the mass media when IBM's Deep Blue, a special-purpose chess-playing computer, played two matches against world chess champion Garry Kasparov. Deep Blue won the second match and set off a firestorm of questions about whether a machine had surpassed humans in the intellectual arena. What do you think?

Defining AI is difficult, because we aren't sure what we mean by "intelligence" when we talk about humans, let alone machines. Marvin Minsky, one of the founders of AI, defines AI as "making computers do what humans are good at". Others say, "programming a computer to do something that, if done by a human, we would consider intelligent". (Notice the subtle difference between these definitions.) Finally, the most tongue-in-cheek definition I've ever seen is, "if we can do it, then it's not AI". It's kind of tough to make progress as a researcher in a world with such a mindset.

Here is my favorite "technical" definition: AI is the study of the computations that make it possible to perceive, reason, and act in a complex environment.

How does AI differ from psychology? Philosophy? The rest of CS? It shares questions of interest with all three, as well as methods for seeking answers to those questions, but it differs from each.

What kinds of problems does AI study? Given the breadth of definitions, you should not be surprised to find that the problems we study come from a wide spectrum. The items in the exercise today give a pretty good sampling. Studying formal systems such as mathematics and game playing dominated early AI, but researchers now tend to focus on more realistic tasks. (In many ways the formal tasks are too easy--because they are formally defined!)
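
As an aside that is not part of the original exercise, here is a minimal sketch of why a formally defined task is so tractable: in tic-tac-toe, the legal moves, the winning condition, and the goal can all be written down completely, so a short minimax search plays the game perfectly. The names below (winner, minimax) are illustrative, not taken from any course code.

    # A minimal sketch, assuming nothing from the course materials.
    # Board: a list of 9 cells, each 'X', 'O', or None.

    WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
                 (0, 4, 8), (2, 4, 6)]               # diagonals

    def winner(board):
        """Return 'X' or 'O' if that player has three in a row, else None."""
        for a, b, c in WIN_LINES:
            if board[a] is not None and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        """Return (score, move) for `player`: +1 means X wins, -1 means O wins."""
        w = winner(board)
        if w is not None:
            return (1 if w == 'X' else -1), None
        moves = [i for i, cell in enumerate(board) if cell is None]
        if not moves:                       # full board, no winner: a draw
            return 0, None
        results = []
        for m in moves:
            board[m] = player               # try the move...
            score, _ = minimax(board, 'O' if player == 'X' else 'X')
            board[m] = None                 # ...and undo it
            results.append((score, m))
        # X maximizes the score, O minimizes it.
        return max(results) if player == 'X' else min(results)

    score, move = minimax([None] * 9, 'X')
    print(score)                            # score 0: perfect play is a draw

Contrast this with recognizing a face or diagnosing car trouble, where we have no comparably complete formal description to search over.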

What does an AI scientist do?

The key question I ask you to think about for this course is: How does AI affect the way we build interfaces and the kind of interfaces we build?

Keep in mind, too, that what AI seeks to do may seem magical today, but today's "common" software would have looked like magic to people in 1955, 1965, 1975, 1985--maybe even 1995!


Eugene Wallingford ==== wallingf@cs.uni.edu ==== March 1, 2001