TITLE: Psychohistory, Economics, and AI
AUTHOR: Eugene Wallingford
DATE: August 03, 2011 7:55 PM
DESC:
-----
BODY:
Or, The Best Foresight Comes from a Good Model

[Image: Hari Seldon, from the novel Foundation]

In my previous entry, I mentioned re-reading Asimov's Foundation Trilogy and made a passing joke about psychohistory being a great computational challenge. I've never heard a computer scientist mention psychohistory as a primary reason for getting involved with computers and programming. Most of us were lucky to see so many wonderful and more approachable problems to solve with a program that we didn't need to be motivated by fiction, however motivating it might be.

I have, though, heard and read several economists mention that they were inspired to study economics by the ideas of psychohistory. The usual reason for the connection is that econ is the closest thing to psychohistory in modern academia. Trying to model the behavior of large groups of people, and reaping the predictive advantages of aggregation, is a big part of what macroeconomics does. (Asimov himself was most likely inspired in creating psychohistory by physics, which is far better at predicting the behavior of masses of atoms than the behavior of individual atoms.)

As you can tell from recent history, economists are nowhere near able to do what Hari Seldon did in Foundation, but then Seldon did his work more than 10,000 years in the future. Maybe 10,000 years from now economists will succeed as often and as well.

Like my economist friends, I am intrigued by economics, which shares some important features with computer science, in particular a concern with trade-offs among limited resources and the limits of rational behavior. The preface to the third book in Asimov's trilogy, Second Foundation, includes a passage that caught my eye on this reading:
He foresaw (or he solved his [system's] equations and interpreted its symbols, which amounts to the same thing)...
I could not help but be struck by how this one sentence captured so well the way science empowers us and changes the intellectual world in which we live. Before the rapid growth of science and the broadening of science education, foresight was limited to personal experience and human beings' limited ability to process that experience and generalize accurately. When someone had an insight, the primary way to convince others was to tell a good story. Foresight could be feigned and sold through stories that sounded good. With science, we have a more reliable way to assess the stories we are told, and a higher standard to which we can hold them. (We don't always use science well enough to make ourselves better listeners, or better judges of purported foresight. Almost all of us can do better, in both professional settings and personal life.)

As a young student, I was drawn to artificial intelligence as the big problem to solve. Like economics, it runs directly into problems of limited resources and limited rationality. Like Asimov's quote above, it runs directly into the relationship between foresight and accurate models of the world. During my first few years teaching AI, I was often surprised by how fiercely my students defended the idea of "intuition", a seemingly magical attribute of men and women forever unattainable by computer programs. It did me little good to try to persuade them that their beliefs in intuition and "gut instinct" were outside the province of scientific study. Not only did they not care; that was an integral part of their belief. The best thing I could do was introduce them to some of the techniques used to write AI programs and to show them such programs behaving in a seemingly intelligent manner in situations that piqued their interest, and maybe opened their minds a bit.

Over the course of teaching those early AI courses, I eventually came to see one of my fundamental attractions to the field. When I wrote an AI program, I was building a model of intelligent behavior, much as Seldon's psychohistory involved building a model of collective human behavior. My inspiration did not come from Asimov, but it was similar in spirit to the inspiration my economist friends drew from him.

I have never been discouraged or deterred by arguments against the prospect of artificial intelligence, whether my students' faith-based objections or purportedly rational arguments such as John Searle's Chinese room argument. I call Searle's argument "purportedly rational" because, as it is usually presented, it ultimately rests on the notion that human wetware, as a physical medium, is capable of representing symbols in a way that silicon or other digital means cannot. I have always believed that, given enough time and enough computational power, we could build a model that approximates human intelligence as closely as we desire. I still believe this and enjoy watching (and occasionally participating in) efforts to create ever more intelligent programs. Unlike many, I am undeterred by the slow progress of AI. We are only sixty years into an enterprise that may take a few thousand years. Asimov taught me that much.
-----