I'd like to open with a story:
I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.
The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B thus:
C: Will X please tell me the length of his or her hair?
Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification. His answer might therefore be:
A: "My hair is shingled, and the longest strands are about nine inches long."
In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as "I am the woman, don't listen to him!" to her answers, but it will avail nothing as the man can make similar remarks.
We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"
The selection that I read to you just now comes from Alan Turing's seminal paper, "Computing Machinery and Intelligence", which appeared in the journal Mind, Volume LIX, Number 236, 1950. Because the paper is now in the public domain, I can make it available to you on-line.
Many people credit this paper, along with Claude Shannon's 1950 paper on the possibility of a chess-playing program, with launching AI as an idea worthy of scientific pursuit. Credit for launching AI as a discipline usually goes to the now-titled Dartmouth Conference of 1956, which gathered most of the great pioneers of AI who created the discipline over the next forty years.
Please read Turing's paper for class Thursday. We will use it to launch our investigation into artificial intelligence.
I am sure that you all have some idea of what you think AI is. Can you state it in words?
Before we try to define AI, consider the "big question" that it asks:
How can minds work?
There are, of course, related questions: Can machines think? Can a general-purpose digital computer have a mind?
These are significant questions, ones that AI didn't invent. (Except, perhaps, when we ask if general-purpose digital computers can have minds.) Philosophers, linguists, neuroscientists, and cognitive psychologists all work on questions that are quite similar to the questions that AI scientists address. Each of these disciplines has its own focus, tools, and methodology.
Computer scientists model the world using computation, the processing of information. It turns out that computation, and its embodiment in the digital computer, gives us a tool of unparalleled flexibility. AI scientists use this flexibility as a portal into the age-old problem of minds: what they consist of, and how they work.
AI attempts to go beyond the ordinary limitations we place on the kinds of problems we try to solve and the kinds of models we try to build. It draws inspiration and motivation from the fact that humans routinely solve problems that computational theory tells us are intractable, computationally infeasible under resource limitations. Consider something as simple as playing a game of chess. In the early 1970s someone wrote the following bit of trivia:
If every man, woman, and child on earth were to spend every waking moment playing chess (16 hours per day) at the rate of one game per minute, it would take 146 billion years to use every variation of the first 10 moves.
Now, the population of our planet has grown by 50% or so since that time, but even still it would take 97 billion years. (Ah, the power of combinatorial explosion!)
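The update is simple scaling: a pool of players 50% larger divides the time by 1.5. A quick sketch of that arithmetic (the 146-billion-year figure is taken from the trivia as quoted; only the population ratio is ours):

```python
# Back-of-envelope check of the scaling above.
# 146 billion years is quoted from the 1970s trivia; the 50% growth
# figure comes from the text.
years_1970s = 146e9      # years for the 1970s population
growth = 1.5             # population grew by roughly 50%
years_now = years_1970s / growth
print(f"{years_now / 1e9:.0f} billion years")  # -> 97 billion years
```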
Yet humans play chess, sometimes quite well. The thing is, we play chess the way we do most things: We don't often arrive at the best solutions, but we usually do arrive at solutions that are good enough. This idea of satisficing rather than optimizing when confronted with an intractable problem is central to what AI is about, to what separates it from other algorithmics in computer science.
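The distinction can be sketched in a few lines: an optimizer examines every candidate to find the best score, while a satisficer stops at the first candidate that clears an aspiration level. The moves and scores below are invented purely for illustration:

```python
# A minimal illustration of satisficing vs. optimizing.
# The "moves" and their scores are made up for this example.

def optimize(candidates, score):
    """Examine every candidate and return the best-scoring one."""
    return max(candidates, key=score)

def satisfice(candidates, score, good_enough):
    """Return the first candidate whose score meets the aspiration level."""
    for c in candidates:
        if score(c) >= good_enough:
            return c
    return None  # no candidate was good enough

moves = ["a", "b", "c", "d"]
value = {"a": 3, "b": 7, "c": 9, "d": 8}.get

print(optimize(moves, value))      # examines all four moves -> 'c'
print(satisfice(moves, value, 7))  # stops at the first score >= 7 -> 'b'
```

On an intractable problem the candidate pool is astronomically large, so the satisficer's early exit is the difference between an answer and no answer at all.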
Most people would characterize AI as a subset of CS, saying that it focuses on a specific set of problems and techniques. But given my view of CS as modeling the world, one might turn this relation on its head: CS as we usually practice it is in fact a subset of AI, since it focuses primarily on problems that are tractable, ones that we already know how to solve. An intelligent agent must be able to model the world, to think about whatever is in its environment, and to handle both tractable and intractable problems well enough to achieve its goals.
AI seeks to loosen the constraints we currently place on our programs.
Now I will give you a definition of AI that I like. It is by no means perfect, but it captures the flavor of what I think AI is about.
Artificial intelligence is the computational study of how a system can perceive, reason, and act in complex environments.
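Read operationally, that definition suggests a loop: perceive the environment, reason about the percept, act on the decision. A minimal sketch, using an invented thermostat-style agent in a toy environment:

```python
# A bare-bones perceive-reason-act loop, invented for illustration.
# The agent is a toy thermostat; the environment is a dict.

def perceive(environment):
    """Sense the one feature of the environment we care about."""
    return environment["temperature"]

def reason(temperature, setpoint=20):
    """Decide on an action from the percept."""
    if temperature < setpoint:
        return "heat"
    return "idle"

def act(environment, action):
    """Carry out the chosen action, changing the environment."""
    if action == "heat":
        environment["temperature"] += 1

env = {"temperature": 17}
for _ in range(5):                  # a short run of the agent loop
    action = reason(perceive(env))
    act(env, action)
print(env["temperature"])           # -> 20 (reached the setpoint, then idled)
```

Real AI systems differ in the richness of each stage, not in the shape of the loop: perception may be vision or language, and reasoning may be search, inference, or learning.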
We will continue with our discussion of what AI is next time.
Often, I am asked by students if AI is hot, if it will get them a job, or even if it is relevant to their studies. I can't answer the first two questions very well, because they deal with fads and trends. I am interested in the longer term: what we can do for your minds.
I can tell you that AI will always be relevant to the student of CS. In one sense, it doesn't matter what AI is, or whether it's hot, or whether it will get you a job on its own. Every practitioner of computer science should be familiar with the field, its techniques, and its current interests.
Why? you may ask. Well, consider this list of topics related to computers and computer science:
They all have their roots in AI research. Why such disparate results, some of which are seemingly as unintelligent as can be? Because AI is at the frontier of CS research. It aims at the hardest problems and occasionally requires tools that do not yet exist. So AI researchers have to create them, or at least start the ball rolling.
They do that in a lot of different ways. Some focus on "real-world" applications, like scheduling satellite experiments or screening credit card applicants or writing music. Others start with more whimsical problems, like playing chess or having on-line agents with personas or telling interesting stories. But both groups encounter unexpected difficulties and advance the cause of AI and computer science. (The first group works to identify the principles that underlie its results, and the second works to demonstrate that the lessons it learned apply to real problems.)
So, I think that you are well-served to learn about what is going on at this frontier of computing.
Welcome to 810:161, Artificial Intelligence. As an AI scientist, I feel an obvious fondness for this course. It expands the view that you may have of computer science to include new problems and new techniques.
This can be an exciting course for you if you are interested in how computer systems might do things beyond the pale of what many folks think of as computer science -- solve problems that we don't know how to solve, plan, diagnose, see, process human language, play complex games, etc. It is always more fun and challenging to be on the frontier of knowledge than to be doing the same old thing.
I am Eugene Wallingford, and I'll be your instructor for Artificial Intelligence. This course will probably be different from any other CS course you have ever had, because ...
Study your syllabus carefully. It contains the policies by which we will run this course. You will need to know these policies and when they apply. You will also find a tentative schedule on the last page, including expected exam dates.
This course comprises two sections. Section 1 is a 3-credit "lecture" course on AI, with no programming. Section 2 is a 4-credit course that meets for the lecture and adds a 1-credit laboratory, with significant amounts of AI programming. The only way that we'll be able to tell these sections apart is that some of you will be doing programming outside of the lectures, and some will not.
Why two sections?
Which section should you be in? (Are you a B.S. major in computer science? If yes, you should be in Section 2...)