TITLE: Pleasantly Surprising Interconnections
AUTHOR: Eugene Wallingford
DATE: June 11, 2006 2:05 PM
DESC:
-----
BODY:
The most recent issue of the
Ballast Quarterly Review,
on which I've
commented before,
came out a month or so ago. I had set it aside for
the right time to read and only came back around
to it yesterday. Once again, I am pleasantly
surprised by the interconnectedness of the world.
In this issue, editor
Roy Behrens
reviews John Willats's book
Making Sense Of Children's Drawings.
(The review is available on-line at
Leonardo On-Line.)
Some researchers have claimed that children draw what
they know and that adults draw what they
see, and that what we adults think
we see interferes with our ability to create authentic
art. Willats presents evidence that young children draw
what they see, too, but that at that stage of neural
development they see in an object-centered manner, not
a viewer-centered manner. It is this subjectivity of
perspective that accounts for the freedom children have
in creating, not their bypassing of vision.
The surprising connection for me came in the form of
David Marr.
A vision researcher at MIT, Marr had proposed the
notion that we "see by processing phenomena in two
very distinct ways", which he termed viewer-centered
and object-centered. Our visual system gathers data in
a viewer-centered way and then computes from that
data more objective descriptions from which we can
reason.
Where's the connection to computer science and my
experience? Marr also wrote one of the seminal papers
in my development as an artificial intelligence
researcher, his "Artificial Intelligence: A Personal
View". You can find this paper as Chapter 4 in John
Haugeland's well-known collection Mind Design
and on-line as a PDF at Elsevier.
In this paper, Marr suggested that the human brain may
permit "no general theories except ones so unspecific
as to have only descriptive and not predictive powers".
This is, of course, not a pleasant prospect for a
scientist who wishes to understand the mind, as it
limits the advance of science as a method. To the extent
that the human mind is our best existence proof of
intelligence, such a limitation would also impinge on the
field of artificial intelligence.
I was greatly influenced by Marr's response to this
possibility. He argued strongly that we should not settle
for incomplete theories at the implementation level of
intelligence, such as neural network theory, and should
instead strive to develop theories that operate at the
computational and algorithmic levels. A theory at the
computational level captures the insight into the nature
of the information processing problem being addressed,
and a theory at the algorithmic level captures insight
into the different forms that solutions to this
information processing problem can take. Marr's
argument served as an inspiration for the work of the
knowledge-based systems lab in which I did my graduate
work, founded on the earlier work on the
generic task model
of
Chandrasekaran.
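Marr's distinction among the three levels can be made concrete with a small programming analogy. This is my own sketch, not an example Marr used: take sorting as the information processing problem. The computational level says what problem is being solved, the algorithmic level offers distinct solutions to that same problem, and the implementation level is the substrate the first two levels abstract away.

```python
# A sketch of Marr's three levels, using sorting as the information
# processing problem (an illustrative analogy, not Marr's own example).

# Computational level: WHAT is computed -- a specification of the
# problem, independent of any particular solution.
def solves_sorting(xs, ys):
    """ys is a solution iff it is xs rearranged into ascending order."""
    return ys == sorted(xs)

# Algorithmic level: HOW it is computed -- two different algorithms
# that satisfy the same computational-level specification.
def insertion_sort(xs):
    result = []
    for x in xs:
        i = len(result)
        while i > 0 and result[i - 1] > x:
            i -= 1
        result.insert(i, x)
    return result

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

# Implementation level: the physical substrate -- here, the Python
# interpreter and the hardware beneath it, which both levels above
# deliberately ignore.
data = [3, 1, 2]
assert insertion_sort(data) == merge_sort(data)
assert solves_sorting(data, insertion_sort(data))
```

On this reading, Marr's complaint about implementation-level theories is that knowing only the substrate is like knowing only the interpreter: it tells you little about what problem is being solved or what family of algorithms could solve it.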
Though I don't do research in that area any more, Marr's
ideas still guide how I think about problems, solutions,
and implementations. What a refreshing reminder of Marr
to encounter in light reading over the weekend.
Behrens was likely motivated to review Willats's book for
the potential effect that his theories might have on the
"day-to-day practice of teaching art". As you might
guess, I am now left to wonder what the implications
might be for teaching children and adults to write programs.
Direct visual perception has less to do with the programs
an adult writes, given the cultural context and levels
of abstraction that our minds impose on problems, but
children may be able to connect more closely with the
programs they write if we place them in environments
that get out of the way of their object-centered view
of the world.
-----