TITLE: AI's Biggest Challenges Are Still To Come
AUTHOR: Eugene Wallingford
DATE: May 27, 2018 10:20 AM
DESC:
-----
BODY:
Semantic Information Processing on my bookshelf
A lot of people I know have been discussing last week's NY Times op-ed about recent advances in neural networks and what they mean for AI. The article even sparked conversation among colleagues from my grad school research lab and among my PhD advisor's colleagues from when he was in grad school. It seems that many of us are frequently asked by non-CS folks what we think about recent advances in AI, from AlphaGo to voice recognition to self-driving cars. My answers sound similar to what some of my old friends say. Are we now afraid of AI being able to take over the world? Um, no. Do you think that the goals of AI are finally within reach? No. Much remains to be done.

I rate my personal interest in recent deep learning advances as meh. I'm not as down on the current work as the authors of the Times piece seem to be; I'm just not all that interested. It's wonderful as an exercise in engineering: building focused systems that solve a single problem. But, as the article points out, that's the key. These systems work in limited domains, to solve limited problems. When I want one of these problems to be solved, I am thankful that people have figured out how to solve it and make the solution commercially available for us to use. Self-driving cars, for instance, have the potential to make the world safer and to improve the quality of my own life.

My interest in AI, though, has always been at a higher level: understanding how intelligence works. Are there general principles that govern intelligent behavior, independent of hardware or implementation? One of the first things to attract me to AI was the idea of writing a program that could play chess. That's an engineering problem in a very narrow domain. But I soon found myself drawn to cognitive issues: problem-solving strategies, reflection, explanation, conversation, symbolic reasoning. Cognitive psychology was one of my favorite courses in grad school, in large part because it tried to connect low-level behaviors in the human brain to the symbolic level. AlphaGo is exceedingly cool as a game player, but it can't talk to me about Go, and for me that's a lot of the fun of playing.

In an email message earlier this week, my quick take on all this work was: We've forgotten the knowledge level. And for me, the knowledge level is what's most interesting about AI.

That one-liner oversimplifies things, as most one-liners do. The AI world hasn't forgotten the knowledge level so much as moved away from it for a while in order to capitalize on advances in math and processing power. The results have been some impressive computer systems. I do hope that the pendulum swings back soon, as AI researchers step back from these achievements and build some theories at the knowledge level. I understand that this may not be possible, but I'm not ready to give up on the dream yet.

-----