In Exercise 1, we identified kinds of software systems and some criteria by which software can be classified. In Exercise 2, you are taking this idea forward by thinking about the kinds of criteria available to us.
Be sure to name your various groupings of the criteria. A good name can capture the essence of the grouping. Finding a good name can help you to understand why you made the grouping in the first place...
The most common distinction made was "human" versus "technical", where "technical" variously indicated administrative, developer, and purchasing concerns, and "human" usually meant "relating to the user".
Many of your groupings of criteria hint at why some of the dimensions you considered last time drew broader agreement within your teams than others, and why some dimensions made for easier classification.
Some students made another interesting distinction that I characterize as "now versus later": which of these dimensions matter now, and which will matter in the future? What is neat is that this distinction depends in part on how we define "now", and so it can be many distinctions in one!
A couple of different approaches to classification were:
Under many schemes, some criteria rightfully belong in multiple categories.
You are creating a framework for analysis, a set of standards and terms to use when analyzing software systems. Much of our work this semester, based on Shneiderman, will do the same thing--with a particular focus on human interaction with programs.
On an exercise like this, there is no "right" answer, but some answers are better than others. "Better" might mean "more complete", "better explained", "more helpful", etc. The value you get from doing the exercise is not getting the right answer, but understanding the question and understanding what you should be thinking about when trying to answer it.
How should you go about this? You could work off the top of your heads, collectively brainstorming groupings that seem meaningful to you. When you use this approach, often the most valuable part of the exercise is in trying to give a meaningful name to groupings that seem so intuitive.
Unfortunately, brainstorming and intuition can take you only so far. If nothing else, they tend to be limited by your own experiences, which are incomplete and often not representative of the whole community of software users. That's where your reading assignments can help. Study the text, and then try to apply what you've read. That's the best way I know of to learn the textbook material anyway. And the readings give you some objective basis for guiding your thoughts.
The content of this course differs somewhat in emphasis from other computer science courses. Here are the key ideas:
Now that we have thought about the criteria by which software can be judged, we want:
Work in the same teams to do the following:
Document your design with pictures, descriptions, and anything else that you think gets your ideas across to someone not in your team. Be sure to write your names at the top, as well as the type of system your interface is for.
Do not write on the other team's design. Write your critique on a separate page, but be sure to identify the team whose design you evaluated.
Questions to ask yourself:
It is hard to design "something new", because these tools seem so simple, with so obvious a solution. Besides, many of these interfaces are quite mature and standardized across the industry. But the real problem is that our personal experience with the tools puts us in a straitjacket. It is almost impossible for us to break out of these bounds. Are we doomed to nothing but incremental improvements to the existing interfaces for the tools, or can we design something truly new? If so, how?
Revolution doesn't happen very often. That is probably a good thing, too.
Designing for a novice differs from designing for an expert. How much guidance and help should the system provide?
And, to complicate matters, novices can turn into experts over time. Can experts "become" novices again? (Maybe when using an unusual part of the system, or when a long time passes between uses of the system?)
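To make the idea concrete, here is a small sketch (in Python, not part of the exercise itself) of one way a system might scale its guidance to a user's apparent expertise. The class name, the thresholds, and the idea of keying help level to per-command usage counts and time since last use are all invented for illustration.

    from datetime import datetime, timedelta

    class HelpAdvisor:
        """Tracks per-command usage and suggests how much help to show."""

        def __init__(self, rusty_after=timedelta(days=90)):
            self._counts = {}      # command -> number of uses
            self._last_used = {}   # command -> datetime of most recent use
            self._rusty_after = rusty_after

        def record_use(self, command, when=None):
            """Note that the user just invoked `command`."""
            self._counts[command] = self._counts.get(command, 0) + 1
            self._last_used[command] = when or datetime.now()

        def help_level(self, command, now=None):
            """Return 'full', 'brief', or 'minimal' guidance for `command`."""
            now = now or datetime.now()
            uses = self._counts.get(command, 0)
            last = self._last_used.get(command)
            # A long gap turns an "expert" back into something like a novice.
            if last is not None and now - last > self._rusty_after:
                return "brief"
            if uses < 3:
                return "full"      # novice: step-by-step guidance
            if uses < 20:
                return "brief"     # intermediate: short reminders
            return "minimal"       # expert: stay out of the way

    if __name__ == "__main__":
        advisor = HelpAdvisor()
        print(advisor.help_level("mail merge"))             # full: never used
        for _ in range(25):
            advisor.record_use("mail merge")
        print(advisor.help_level("mail merge"))             # minimal: heavy recent use
        six_months_later = datetime.now() + timedelta(days=180)
        print(advisor.help_level("mail merge", now=six_months_later))  # brief: rusty

The point is not these particular numbers but that "novice" and "expert" are states the same user moves between, so the interface has to be able to move with them.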
This was another chance to apply your reading, and "study" through the in-class exercise...
Did you consider any of the principles from your reading? Or did you base your design and evaluation solely on your own experience?
Your experience shapes your view of the world. But you are not the same as your software's users. One goal of this course is to give you tools for evaluating software that allow you to step outside the straitjacket of your own view of software.
A recurring theme: The fact that there is no right answer does not mean that all answers are equally good. ...
Think of this as the pre-test week ...
On a more mundane note: How you write up your results matters! Other people have to be able to read them, even if that other person is only me.