Session 18

Learning by Evolution


810:161

Artificial Intelligence




An Opening Exercise

Consider the following learning algorithm:

  1. Create a table out of all training examples.

  2. Determine which classification occurs most often in the table, and call it d.

  3. For an input that appears in the table, return the classification associated with it; if the table contains multiple classifications for that input, return the one that occurs most often among them.

  4. For inputs not in the table, return d.

How well does this algorithm work on the training set for our restaurant waiting problem?

Can you think of a (kind of) problem for which this algorithm works well? Not well at all?
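The four steps above can be sketched in Python. This is a minimal illustration, not code from the course; the function and variable names are my own, and training examples are assumed to be (input, classification) pairs.

```python
from collections import Counter

def train(examples):
    """Steps 1-2: build a table of all examples and find the
    most common classification d."""
    table = {}
    for x, label in examples:
        table.setdefault(x, []).append(label)
    d = Counter(label for _, label in examples).most_common(1)[0][0]
    return table, d

def classify(x, table, d):
    """Steps 3-4: for inputs in the table, return the most common
    classification recorded for them; otherwise return d."""
    if x in table:
        return Counter(table[x]).most_common(1)[0][0]
    return d
```

Note that this learner memorizes rather than generalizes: it can only echo the training data and fall back on the majority class for everything else.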


Why Does Learning Work?

Let's call this...

The Principle of Convergent Intelligence

The world manifests constraints and regularities. If an agent is to exhibit intelligence, then it must exploit these constraints and regularities, no matter the nature of its physical make-up.

This principle makes three assumptions, each of which tells us something about intelligence (and so AI):

Humans seem to have innate ability to recognize and exploit certain regularities (language, vision).

Learning works only if the world contains constraints and regularities. And if the world is regular, then learning is indispensable to recognizing and exploiting previously unknown regularities.


Motivation for a New Kind of Learning

The intelligent agents we know about in the universe are not "programmed", at least not in the sense that we write programs to handle payroll.

Instead, they are born as a product of two intelligent agents already in existence, and begin with a "program" derived from the programs of their parents.

The programs of their parents were created in a similar fashion, and the chain of such creation goes back many, many years.

Furthermore, we believe that (at least way back in time) the ability to produce offspring favored those agents that could survive long enough to do so, and that the family of agents grew better able to survive over time.

Isn't this a form of "corporate" learning?


The Genetic Metaphor

Our theories about how living organisms on Earth have changed over time led some people interested in AI to consider creating an intelligent agent in the style of the one existence proof we have that intelligent agents are possible.

What would it be like to program a computer by writing some simple programs, teaching them how to reproduce, and then letting the "family" of programs evolve?

If we adopt such a "genetic" metaphor for programming and learning, then we might learn something from the original domain of genetics, biology. In particular, what might these terms mean in a programming sense?
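One common way to make the metaphor concrete is a simple genetic algorithm. The sketch below is illustrative, not from the lecture: individuals are bit strings ("chromosomes"), fitness counts 1-bits, and a population evolves by selection, crossover, and mutation. All names and parameter values are assumptions chosen for the example.

```python
import random

def fitness(ind):
    # "Survival value" of an individual: here, just the number of 1-bits.
    return sum(ind)

def crossover(p1, p2):
    # Child inherits a prefix from one parent and a suffix from the other.
    point = random.randrange(1, len(p1))
    return p1[:point] + p2[point:]

def mutate(ind, rate=0.01):
    # Each bit flips with small probability, introducing variation.
    return [1 - bit if random.random() < rate else bit for bit in ind]

def evolve(pop_size=20, length=16, generations=50):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # the fittest survive to reproduce
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)
```

Here "gene" maps to a bit, "chromosome" to a bit string, "fitness" to an evaluation function, and "reproduction" to crossover plus mutation.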


The Origin of Genetic Algorithms

Genetic approaches to programming originated with practitioners who wanted to write programs to decide what kind of "thing" their input described.

This process is often called classification. Agents that do only classification can be implemented as reflex or table-driven agents.

One way to do classification is to use a set of production rules that map inputs onto categories.

One way to write a program that uses production rules to classify is, well, to write the rules.
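A hand-written production-rule classifier might look like the sketch below. The rules here are hypothetical, invented for illustration: each rule pairs a condition on the input features with a category, and the first rule whose condition matches fires.

```python
# Hypothetical rule base: (condition, category) pairs, tried in order.
rules = [
    (lambda x: x["legs"] == 8,                "spider"),
    (lambda x: x["legs"] == 6,                "insect"),
    (lambda x: x["legs"] == 4 and x["fur"],   "mammal"),
]

def classify_by_rules(features, rules, default="unknown"):
    """Fire the first rule whose condition matches the input."""
    for condition, category in rules:
        if condition(features):
            return category
    return default
```

The learning question, of course, is whether such rules must be written by hand at all, or whether they can be evolved.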


FILL IN THE BLANK

... as time permits.


Wrap Up


Eugene Wallingford ==== wallingf@cs.uni.edu ==== November 8, 2001