TITLE: Some Advice for How To Think, and Some Personal Memories
AUTHOR: Eugene Wallingford
DATE: August 07, 2016 10:36 AM
DESC:
-----
BODY:
I've been reading a bunch of the essays on David Chapman's Meaningness website lately, after seeing a link to one on Twitter. (Thanks, @kaledic.) This morning I read How To Think Real Good, about one of Chapman's abandoned projects: a book of advice for how to think and solve problems. He may never write this book as he once imagined it, but I'm glad he wrote this essay about the idea.

First of all, it was a fun read, at least for me. Chapman is a former AI researcher, and some of the stories he tells remind me of things I experienced when I was in AI. We were even in school at about the same time, though in different parts of the country and in different kinds of programs. His work was much more important than mine, but I think at some fundamental level most people in AI share common dreams and goals. It was fun to listen as Chapman reminisced about knowledge and AI.

He also introduced me to the dandy portmanteau anvilicious. I keep learning new words! There are so many good ones, and people make up the ones that don't exist already.

My enjoyment was heightened by the fact that the essay stimulated the parts of my brain that like to think about thinking. Chapman shares a few of the heuristics that he intended to include in his book, along with anecdotes that illustrate or motivate them. Here are three:
All problem formulations are "false", because they abstract away details of reality.
Solve a simplified version of the problem first. If you can't do even that, you're in trouble.
Probability theory is sometimes an excellent way of dealing with uncertainty, but it's not the only way, and sometimes it's a terrible way.
He elaborates on the last of these, pointing out that probability theory tends to collapse many different kinds of uncertainty into a single value. This does not work all that well in practice, because different kinds of uncertainty often need to be handled in very different ways.

Chapman has a lot to say about probability. This essay was prompted by what he sees as the rationalist community's over-reliance on a pop version of Bayesianism as its foundation for reasoning. But as an old AI researcher, he knows that an idea can sound good and still fail in practice for all sorts of reasons. He has also seen how a computer program can make clear exactly what does and doesn't work. Artificial intelligence has always played a useful role as a reality check on ideas about mind, knowledge, reasoning, and thought.

More generally, anyone who writes computer programs knows this, too. You can make ambiguous claims with English sentences, but to write a program you really have to have a precise idea. Even when you don't have a precise idea, your program itself is a precise formulation of something. Figuring out what that is can be a way of figuring out what you were really thinking about in the first place. This is one of the most important lessons college students learn from their intro CS courses. It's an experience that can benefit all students, not just CS majors.

Chapman also includes a few heuristics for approaching the problem of thinking, basically ways to put yourself in a position to become a better thinker. Two of my favorites are:
Try to figure out how people smarter than you think.
Find a teacher who is willing to go meta and explain how a field works, instead of lecturing you on its subject matter.
This really is good advice. Subject matter is much easier to come by than deep understanding of how a discipline works, especially in these days of the web.

The word meta appears frequently throughout this essay. (I love that the essay is posted on the metablog/ portion of his site!) Chapman's project is thinking about thinking, a step up the ladder of abstraction from "simply" thinking. An AI program must reason; an AI researcher must reason about how to reason. This is the great siren of artificial intelligence, the source of its power and also of its weaknesses: Anything you can do, I can do meta.

I think this gets at why I enjoyed this essay so much. AI is ultimately the discipline of applied epistemology, and most of us who are lured into AI's arms share an interest in what it means to speak of knowledge. If we really understand knowledge, then we ought to be able to write a computer program that implements that understanding. And if we do, how can we say that our computer program isn't doing essentially the same thing that makes us humans intelligent?

As much as I love computer science and programming, my favorite course in graduate school was an epistemology course I took with Prof. Rich Hall. It drove straight to the core curiosity that impelled me to study AI in the first place. In the first week of the course, Prof. Hall laid out the notion of justified true belief, and from there I was hooked. A lot of AI starts with a naive feeling of this sort, whether explicitly stated or not. Doing AI research brings that feeling into contact with reality. Then things get serious. It's all enormously stimulating.

Ultimately, Chapman left the field, disillusioned by what he saw as a fundamental limitation that AI's bag of tricks could never resolve. Even so, the questions that led him to AI still motivate him and his current work, which is good for all of us, I think.

This essay brought back a lot of pleasant memories for me. Even though I, too, am no longer in AI, the questions that led me to the field still motivate me and my interests in program design, programming languages, software development, and CS education. It is hard to escape the questions of what it means to think and how we can do it better. These questions remain central to what it means to be human.
-----