Session 17: Ethics at the Fringe
810:171 Software Systems
Exercise 25: Eliza and the Three Bears
Goals
- To consider further the kinds of problems to which computing can be applied.
- To explore ethical issues that can arise when applying computing.
- To understand better what artificial intelligence is.
Tasks
Work in teams of three or four people, based on the number in the upper
right-hand corner of this page.
When we construct an "intelligent" system, of the sort we discussed last
time, we typically try to solve problems at the fringe of our technical
knowledge. As a result, thorny ethical questions can arise. Striking a
balance between the advantages and disadvantages of a technology is a task
that faces computing professionals nearly every day.
- As an individual, choose one of the following "themes" and write a
  one-paragraph scenario that involves an intelligent system
- applied in an obviously ethical way
- applied in an obviously unethical way
- applied in a manner that raises serious, unanswered ethical
  questions, the sort of questions that cause different people to
  place it in any one of these three groups
Do not tell your teammates which theme you have chosen. Make your
scenarios as simple and as realistic as possible--don't be absurd.
- As a group, tell each other your scenarios. See if your teammates can
guess what your theme was. What is it about each scenario that makes
it "obviously ethical", "obviously unethical", or neither?
- As a group, select three of your scenarios so that you have one example of
each of the themes listed above. The scenario you select should be the
"best" example of that theme developed by your group. If no one in the
group wrote a scenario for one of the themes, then write one now as a
group.
Results
- You will present your group's scenarios to the class.
- We may have a bit more fun with the scenarios.
- You will submit your group's written answers.
Summary of Exercise 25
What is the significance of the title?
- Eliza: an early "intelligent" system that raised serious ethical
  issues for its own programmer--and changed his mind, and many other
  people's, about the ethical implications of AI. (A small sketch of
  the kind of rule Eliza used appears after this list.)
- The Three Bears: one is too hot, one is too cold, and one is just
right. (Flipped on its head here.)
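For anyone who has not seen Eliza run, here is a minimal sketch, in Python, of
the kind of keyword-and-template rule behind it. Weizenbaum's original DOCTOR
script was much richer (and written in MAD-SLIP); the patterns and canned
replies below are invented purely for illustration.

    import random
    import re

    # Illustrative rules only: a keyword pattern plus reply templates.
    RULES = [
        (r"\bi am (.*)", ["Why do you say you are {0}?",
                          "How long have you been {0}?"]),
        (r"\bi feel (.*)", ["Tell me more about feeling {0}."]),
        (r"\bmy (mother|father|family)\b", ["Tell me more about your {0}."]),
    ]
    DEFAULTS = ["Please go on.", "What does that suggest to you?"]

    def respond(line):
        # Try each rule in order; echo the matched text back in a template.
        for pattern, replies in RULES:
            match = re.search(pattern, line, re.IGNORECASE)
            if match:
                return random.choice(replies).format(*match.groups())
        # No keyword matched: fall back to a content-free prompt.
        return random.choice(DEFAULTS)

    print(respond("I am unhappy"))           # e.g. "Why do you say you are unhappy?"
    print(respond("Nothing seems to help"))  # e.g. "Please go on."

Even this toy version hints at why the ethical questions arose: the program
understands nothing, yet a few echoed phrases were enough to lead some of
Weizenbaum's own users to treat it as a confidant.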
What is the value in this sort of exercise?
- Many issues aren't isolated points in the space of ideas; each lies
  somewhere on a continuum between two extremes. Understanding such
  issues and their implications comes only when you appreciate the
  continuum and what makes points on it hard to place. (Direct
  manipulation versus menus, for example.)
Is obviousness always obvious? What does that mean for how we practice
our profession?
Exercise 26: Exploring Direct Manipulation
Goals
- To understand better direct manipulation interfaces.
- To understand better test writing and test taking.
Tasks
Work in the same teams...
- Describe in detail an application of direct manipulation to some domain
not described in Chapter 6. You may choose to describe a direct
manipulation interface for an existing, non-DM software system, or you
may choose to propose a direct manipulation interface for a domain in
which no software system already exists.
- Write an exam question on material in Chapter 6 of Shneiderman. Your
question should be the kind of question that you like to see on an exam
(a question that allows you to demonstrate your knowledge) and the kind
that I like to see (a question that requires you to demonstrate your
understanding).
- As a side effect of your question writing, identify the most important
  idea in Chapter 6. If you had to boil the whole chapter down to one
  sentence (or three), what would you write?
Use this exercise as an opportunity to discuss any ideas from Chapter 6
that are not quite clear in your mind after the first reading. And ask
questions when you have the opportunity!
Results
- Your group submits its answers in writing to me.
- We discuss your answers in class and try to understand DM better. We also
try to determine what makes for a reasonable exam question.
Eugene Wallingford ..... wallingf@cs.uni.edu ..... March 6, 2001