January 29, 2016 3:43 PM

Marvin Minsky and the Irony of AlphaGo

Semantic Information Processing on a portion of my bookshelf (CC BY 3.0 US)

Marvin Minsky, one of the founders of AI, died this week. His book Semantic Information Processing made a big impression on me when I read it in grad school, and his paper Why Programming is a Good Medium for Expressing Poorly Understood and Sloppily-Formulated Ideas remains one of my favorite classic AI essays. The list of his students contains many of the great names from decades of computer science; several of them -- Daniel Bobrow, Bertram Raphael, Eugene Charniak, Patrick Henry Winston, Gerald Jay Sussman, Benjamin Kuipers, and Luc Steels -- influenced my work. Winston wrote one of my favorite AI textbooks ever, one that captured the spirit of Minsky's interest in cognitive AI.

It seems fitting that Minsky left us the same week that Google published the paper Mastering the Game of Go with Deep Neural Networks and Tree Search, which describes the work that led to AlphaGo, a program strong enough to beat an expert human Go player. (This brief article describes the accomplishment and the program at a higher level.) One of the key techniques at the heart of AlphaGo is the neural network, an area Minsky pioneered in his mid-1950s doctoral dissertation and continued to work in throughout his career.

In 1969, he and Seymour Papert published a book, Perceptrons, which showed the limitations of a very simple kind of neural network. Stories about the book's claims were quickly exaggerated as they spread to people who had never read the book, and the resulting pessimism stifled neural network research for more than a decade. It is a great irony that, in the week he died, one of the most startling applications of neural networks to AI was announced.
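
For readers who have never seen the limitation the book made famous, a tiny example makes it concrete. The sketch below is my own illustration, not code from the book: it trains a single-layer perceptron with the classic learning rule, which masters a linearly separable function such as AND but cannot compute XOR no matter how its weights are set.

# A toy single-layer perceptron, the "very simple kind of neural network"
# whose limits Minsky and Papert analyzed. (Illustrative sketch only.)

def train_perceptron(samples, epochs=25, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with targets 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(weights, x1, x2):
    w1, w2, b = weights
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in [("AND", AND), ("XOR", XOR)]:
    w = train_perceptron(data)
    ok = all(predict(w, x1, x2) == t for (x1, x2), t in data)
    print(f"{name} learned correctly? {ok}")   # AND: True, XOR: False

Networks with hidden layers do not share this particular limit, which is where the deep networks behind AlphaGo pick up the story.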

Researchers like Minsky amazed me when I was young, and I am more amazed by them and their lifelong accomplishments as I grow older. If you'd like to learn more, check out Stephen Wolfram's personal farewell to Minsky. It gives you a peek into the wide-ranging mind that made Minsky such a force in AI for so long.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal

January 28, 2016 2:56 PM

Remarkable Paragraphs: "Everything Is Computation"

Edge.org's 2016 question for sophisticated minds is, "What do you consider the most interesting recent [scientific] news? What makes it important?" Joscha Bach's answer is: everything is computation. Read his essay, which contains some remarkable passages.

Computation changes our idea of knowledge: instead of treating it as justified true belief, knowledge describes a local minimum in capturing regularities between observables.

Epistemology was one of my two favorite courses in grad school (cognitive psych was the other), and "justified true belief" was the starting point for many interesting ideas of what constitutes knowledge. I don't see Bach's formulation as a replacement for justified true belief as a starting point, but rather as a specification of which beliefs are most justified in a given context. Still, the concrete way Bach uses computation to define "knowledge" is marvelous.

Knowledge is almost never static, but progressing on a gradient through a state space of possible world views. We will no longer aspire to teach our children the truth, because like us, they will never stop changing their minds. We will teach them how to productively change their minds, how to explore the never ending land of insight.

Knowledge is a never-ending process of refactoring. The phrase "how to productively change their minds" reminds me of Jon Udell's recent blog post on liminal thinking at scale. From the perspective that knowledge is a function, "changing one's mind intelligently" is the dynamic computational process that keeps the mind at a local minimum.
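
To make the metaphor concrete, here is a toy sketch of my own, not anything from Bach's essay: treat a world view as a point in a state space, measure how poorly it captures the observations with an error function, and revise it by small steps downhill. The process never arrives at "the truth"; it settles into a local minimum and starts moving again whenever the landscape changes.

# Toy illustration of "progressing on a gradient through a state space":
# a world view (a single number here) is nudged downhill on an error
# surface until it settles into a local minimum.

def error(view):
    # A made-up, bumpy error landscape with more than one local minimum.
    return (view ** 2 - 4) ** 2 + 0.5 * view

def gradient(f, x, h=1e-6):
    # Numerical derivative: how the error changes as the view changes.
    return (f(x + h) - f(x - h)) / (2 * h)

def revise(view, steps=200, rate=0.01):
    # "Changing one's mind productively": repeated small steps downhill.
    for _ in range(steps):
        view -= rate * gradient(error, view)
    return view

print(revise(3.0))    # settles near one local minimum
print(revise(-3.0))   # a different starting view settles into another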

A growing number of physicists understand that the universe is not mathematical, but computational, and physics is in the business of finding an algorithm that can reproduce our observations. The switch from uncomputable, mathematical notions (such as continuous space) makes progress possible. Climate science, molecular genetics, and AI are computational sciences. Sociology, psychology, and neuroscience are not: they still seem to be confused by the apparent dichotomy between mechanism (rigid, moving parts) and the objects of their study. They are looking for social, behavioral, chemical, neural regularities, where they should be looking for computational ones.

This is a strong claim, and one I'm sympathetic with. However, I think that the apparent distinction between the computational sciences and the non-computational ones is a matter of time, not a difference in kind. It wasn't that long ago that most physicists thought of the universe in mathematical terms, not computational ones. I suspect that with a little more time, the orientation in other disciplines will begin to shift. Neuroscience and psychology are positioned well for such a phase shift.

In any case, Bach's response points our attention in a direction that has the potential to re-define every problem we try to solve. This may seem unthinkable to many, though perhaps not to computer scientists, especially those of us with an AI bent.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns

January 24, 2016 10:33 AM

Learn Humility By Teaching

In The Books in My Life, Henry Miller writes about discussing books with an inquisitive friend:

I remember this short period vividly because it was an exercise in humility and self-control on my part. The desire to be absolutely truthful with my friend caused me to realize how very little I knew, how very little I could reveal, though he always maintained that I was a guide and mentor to him. In brief, the result of those communions was that I began to doubt all that I had blithely taken for granted. The more I endeavored to explain my point of view, the more I floundered. He may have thought I acquitted myself well, but not I. Often, on parting from him, I would continue the inner debate interminably.

I am guessing that most anyone who teaches knows the feeling Miller describes. I feel it all the time.

I'm feeling it again this semester while teaching my Programming Languages and Paradigms course. We're using Racket as a way to learn to talk about programming languages and as a vehicle for learning functional programming. One of my goals this semester is to be more honest. Whenever I find a claim in my lecture notes that sounds like dogma I'm asking students to accept on faith, I'm trying to explain it in a way that connects to their experience. Whenever students ask why we do something in a particular way, I'm trying to help them really see how the new way is an improvement over what they are used to. If I can't, I admit that it's convention and resolve not to be dogmatic about it with them.

This is a challenge for me. I am prone to dogma, and, having programmed functionally in Scheme for so long, much of what my students experience while learning Racket is deeply compiled in my brain. Why do we do that? I've forgotten, if I ever knew. I may have a vague memory that, when I don't do it that way, chaos ensues. Trust me! Unfortunately, that is not a convincing way to teach. Trying to give better answers and more constructive explanations gives rise to the sort of floundering Miller talks about. After class, the inner debate continues as I try to figure out what I know and why, so that I can do better.

Some natural teachers may find this easy, but for me, learning to answer questions in a way that really helps students has been a decades-long lesson in humility.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

January 17, 2016 10:07 AM

The Reluctant Mr. Darwin

Yesterday, I finished reading The Reluctant Mr. Darwin, a short biography of Charles Darwin by David Quammen, published in 2006. It covers Darwin's life from the time he returned from his voyage on the HMS Beagle to his death in 1882, with a short digression on Alfred Russel Wallace's early voyages and his independent development of ideas on evolution and its mechanisms.

Before reading this book, I knew the basics of Darwin's theories but nothing about his life and very little about the milieu in which he worked and developed his theories. After reading it, I have a better appreciation for the caution with which Darwin seemed to have worked, and the care he took to record detailed observations and to support his ideas with evidence from both nature and breeding. I also have a sense of how Wallace's work related to and affected Darwin's work. I could almost feel Darwin's apprehension upon receiving Wallace's letter from Southeast Asia, outlining ideas Darwin had been developing, refining, and postponing for twenty years.

The Reluctant Mr. Darwin is a literary essay, not scholarly history. I enjoyed reading it. The book is at its best when talking about Darwin's life and work as a scientist, his attitudes and his work habits. The writing is clear, direct, and entertaining. When talking about Darwin's theories themselves, however, and especially about their effect on the world and on culture, the book comes across as too earnest and a bit too breathless for my taste. But this is a minor quibble. It's a worthwhile read.


Posted by Eugene Wallingford | Permalink | Categories: General

January 15, 2016 4:02 PM

This Week's Edition of "Amazed by Computers"

As computer scientists get older, we all find ourselves reminiscing about the computers we knew in the past. I sometimes tell my students about using 5.25" floppies with capacities listed in kilobytes, a unit for which they have no frame of reference. It always gets a laugh.

In a recent blog entry, Daniel Lemire reminisces about the Cray 2, "the most powerful computer that money could buy" when he was in high school. It took up more space than an office desk (see some photos here), had 1 GB of memory, and provided a peak performance of 1.9 gigaflops. In contrast, a modern iPhone fits in a pocket, has 1 GB of memory, too, and contains a graphics processing unit that provides more gigaflops than the Cray 2.

I saw Lemire's post a day after someone tweeted this image of a 64 GB memory card from 2016 next to a 2 GB Western Digital hard drive from 1996:

a 64 GB memory card (2016), a 2 GB hard drive (1996)

The youngest students in my class this semester were born right around 1996. Showing them a 1996 hard drive is like my college professors showing me magnetic cores: ancient history.

This sort of story is old news, of course. Even so, I occasionally remember to be amazed by how quickly our hardware gets smaller and faster. I only wish I could improve my ability to make software just as fast. Alas, we programmers must deal with the constraints of human minds and human organizations. Hardware engineers do battle only with the laws of the physical universe.

Lemire goes a step beyond reminiscing to close his entry:

And what if, today, I were to tell you that in 40 years, we will be able to fit all the computational power of your phone into a nanobot that can live in your blood stream?

Imagine the problems we can solve and the beauty we can make with such hardware. The citizens of 2056 are counting on us.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development

January 12, 2016 3:58 PM

Peter Naur and "Datalogy"

Peter Naur died early this year at the age of 87. Many of you may know Naur as the "N" in BNF notation. His contributions to CS were much broader and deeper than BNF, though. He received the 2005 Turing Award in recognition of his contributions to programming language and compiler design, including his involvement in the definition of Algol 60. I have always been a huge fan of his essay Programming as Theory Building, which I share with anyone I think might enjoy it.

When Michael Caspersen sent a note to the SIGCSE mailing list, I learned something new about Naur: he coined the term datalogy for "the science of the nature and use of data" and suggested that it might be a suitable replacement for the term "computer science". I had to learn more...

It turns out that Naur coined this term in a letter to the Communications of the ACM, which ran in the July 1966 issue under the headline "The Science of Datalogy". The letter is available online through the ACM digital library. Unfortunately, it is behind a paywall for many of you who might be interested. For posterity, here is an excerpt from that page:

This is to advocate that the following new words, denoting various aspects of our subject, be considered for general adoption (the stress is shown by an accent):
  • datálogy, the science of the nature and use of data,
  • datamátics, that part of datalogy which deals with the processing of data by automatic means,
  • datámaton, an automatic device for processing data.

In this terminology much of what is now referred to as "data processing" would be datamatics. In many cases this will be a gain in clarity because the new word includes the important aspect of data representations, while the old one does not. Datalogy might be a suitable replacement for "computer science."

The objection that possibly one of these words has already been used as a proper name of some activity may be answered partly by saying that of course the subject of datamatics is written with a lower case d, partly by remembering that the word "electronics" is used doubly in this way without inconvenience.

What also speaks for these words is that they will transfer gracefully into many other languages. We have been using them extensively in my local environment for the last few months and have found them a great help.

Finally I wish to mention that datamatics and datamaton (Danish: datamatik and datamat) are due to Paul Lindgreen and Per Brinch Hansen, while datalogy (Danish: datalogi) is my own invention.

I also learned from Caspersen's email that Naur was named the first Professor in Datalogy in Denmark, and that he held that title at the University of Copenhagen until he retired in 1998.

Naur was a pioneer of computing. We all benefit from his work every day.


Posted by Eugene Wallingford | Permalink | Categories: Computing

January 11, 2016 10:51 AM

Some Writing by Administrators Isn't Bad; It's Just Different

Jim Garland is a physicist who eventually became president of Miami University of Ohio. In Bad Writing by Administrators, Rachel Toor asked Garland how his writing evolved as he moved up the administrative hierarchy. His response included:

Truthfully, I did my deepest thinking as a beginning assistant professor, writing obscure papers on the quantum-mechanical properties of solids at liquid-helium temperatures. Over the years, I became shallower and broader, and by the time I left academe, I was worrying about the seating arrangement of donors in the president's football box.

I have experienced this even in my short step into the department head's office. Some of the writing I do as head is better than my writing before: clear, succinct, and qualified precisely. It is written for a different audience, though, and within a much different political context. My colleagues compliment me occasionally for having written a simple, straightforward note that says something they've been struggling to articulate.

Other times, my thinking is more muddled, and that shows through in what I write. When I try to fix the writing before I fix my thinking, I produce bad writing.

Some writing by administrators really is bad, but a lot of it is simply broader and shallower than what we write as academics. The broader it becomes, the less interesting the content is to the academics still living inside of us. Yet our target audience often can appreciate the value of that less interesting writing when it serves its purpose.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading

January 07, 2016 1:52 PM

Parsimony and Obesity on the Web

Maciej Cegłowski is in fine form in his talk The Website Obesity Crisis. In it, he mentions recent projects from Facebook and Google to help people create web pages that load quickly, especially for users of mobile devices. Then he notes that their announcements do not practice what the projects preach:

These comically huge homepages for projects designed to make the web faster are the equivalent of watching a fitness video where the presenter is just standing there, eating pizza and cookies.

There is even more irony in creating special subsets of HTML "designed to be fast on mobile devices".

Why not just serve regular HTML without stuffing it full of useless crap?
William Howard Taft, a president of girth (Wikipedia photo; photographer not credited)

Indeed. Cegłowski offers a simple way to determine whether the non-text elements of your page are useless, which he dubs the Taft Test:

Does your page design improve when you replace every image with William Howard Taft?

(Taft was an American president and chief justice widely known for his girth.)
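
If you want to actually run the Taft Test on a page, a few lines of code will do it. The sketch below is my own, not a tool from Cegłowski's talk; the Taft image URL is a placeholder, and a real page might also pull images in via srcset attributes or CSS backgrounds, so treat it as the spirit of the test rather than a complete implementation.

# Taft Test, the quick-and-dirty version: point every <img> on a page at a
# picture of William Howard Taft, then eyeball the result.
# (Sketch only; TAFT_URL is a placeholder, not a real image location.)

import re
import sys

TAFT_URL = "https://example.com/william-howard-taft.jpg"  # placeholder URL

IMG_SRC = re.compile(r'(<img\b[^>]*?\bsrc\s*=\s*)(["\'])(.*?)\2', re.IGNORECASE)

def taftify(html):
    # Swap the src of each img tag for the Taft photo, leaving the rest of the tag alone.
    return IMG_SRC.sub(lambda m: m.group(1) + m.group(2) + TAFT_URL + m.group(2), html)

if __name__ == "__main__":
    page = sys.stdin.read()           # pipe a saved page through the script
    sys.stdout.write(taftify(page))   # open the output in a browser and judge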

My blog is mostly text. I should probably use more images, to spice up the visual appearance and to augment what the text says, but doing so takes more time and skill than I often have at the ready. When I do use images, they tend to be small. I am almost certainly more parsimonious than I need to be for most Internet connections in the 2010s, even wifi.

You will notice that I never embed video, though. I dug into the documentation for HTML and found a handy alternative to use in its place: the web link. It is small and loads fast.


Posted by Eugene Wallingford | Permalink | Categories: Computing