November 30, 2013 9:45 AM

The Magic at the Heart of AI

This paragraph from The Man Who Would Teach Machines to Think expresses a bit of my uneasiness with the world of AI these days:

As our machines get faster and ingest more data, we allow ourselves to be dumber. Instead of wrestling with our hardest problems in earnest, we can just plug in billions of examples of them. Which is a bit like using a graphing calculator to do your high-school calculus homework -- it works great until you need to actually understand calculus.

I understand the desire to solve real problems and the resulting desire to apply opaque mathematics to large data sets. Like most everyone, I revel in what Google can do for me and watch in awe when Watson defeats the best human Jeopardy! players ever. But for me, artificial intelligence was about more than just getting the job done.

Over my years teaching AI, students often wanted to study neural networks in much greater detail than my course covered. But I was more interested in approaches to AI and learning that worked at a more conceptual level. Often we could find a happy middle ground in genetic algorithms, which afforded them the magic of something-for-nothing and afforded me the chance to study ideas as they evolved over time.

(Maybe my students were simply exhibiting Astrachan's Law.)
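For readers who never got to play with one, here is a minimal sketch of a genetic algorithm, the something-for-nothing machine my students and I met in the middle. The toy problem ("OneMax": evolve a bit string toward all 1s) and every parameter here are my own illustration, not anything from my course.

    import random

    GENOME_LENGTH = 20
    POPULATION_SIZE = 30
    MUTATION_RATE = 0.01
    GENERATIONS = 50

    def fitness(genome):
        return sum(genome)                      # OneMax: count the 1s

    def random_genome():
        return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

    def select(population):
        # tournament selection: keep the fitter of two random individuals
        a, b = random.sample(population, 2)
        return a if fitness(a) >= fitness(b) else b

    def crossover(p1, p2):
        point = random.randrange(1, GENOME_LENGTH)   # one-point crossover
        return p1[:point] + p2[point:]

    def mutate(genome):
        return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

    population = [random_genome() for _ in range(POPULATION_SIZE)]
    for generation in range(GENERATIONS):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(POPULATION_SIZE)]
        print(generation, max(fitness(g) for g in population))

The appeal, for me, was never any single answer but watching the population of ideas change from one generation to the next.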

When I said goodbye to AAAI a few years ago, I mentioned Hofstadter's work as one of my early inspirations -- Gödel, Escher, Bach and the idea of self-reference, with its "intertwining worlds of music, art, mathematics, and computers". That entry said I was leaving AAAI because my own work had moved in a different direction. But it left unstated a second truth, which The Man Who Would Teach Machines to Think asserts as Hofstadter's own reason for working off the common path: the world of AI had moved in a different direction, too.

For me, as for Hofstadter, AI has always meant more than engineering a solution. It was about understanding scientifically something that seemed magical, something both deeply personal and undeniably universal to human experience: how human consciousness seems to work. My interest in AI will always lie there.

~~~~~

If you enjoy the article about Hofstadter and his work linked above, perhaps you will enjoy a couple of entries I wrote after he visited my university last year:


Posted by Eugene Wallingford | Permalink | Categories: Computing

November 26, 2013 1:38 PM

Saying Thanks, and Giving Back

When someone asked Benjamin Franklin why he had declined to seek a patent for his famous stove, he said:

I declined it from a principle which has ever weighed with me on such occasions, that as we enjoy great advantages from the inventions of others, we should be glad of an opportunity to serve others by any invention of ours.

This seems a fitting sentiment to recall as I look forward to a few days of break with my family for Thanksgiving. I know I have a lot to be thankful for, not the least of which are the inventions of so many others that confer great advantage on me. This week, I give thanks for these creations, and for the creators who shared them with me.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

November 25, 2013 2:56 PM

The Moment When Design Happens

Even when we plan ahead a bit, the design of a program tends to evolve. Gary Bernhardt gives an example in his essay on abstraction:

If I feel the need to violate the abstraction, I need to reconsider how to modify the boundaries to match that need, rather than violating the boundaries by crossing them.

This is the moment when design happens...

This is a hard design lesson to give students, because it is likely to click with them only after they have lived with the consequences of violating an abstraction. That requires working with the same large program over time, preferably one they are building along the way.

This is one of the reasons I so like our senior project courses. My students are building a compiler this term, which gives them a chance to experience a moment when design happens. Their abstract syntax trees and symbol tables are just the sort of abstractions that invite violation -- and reward a little re-design.
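To make the moment concrete, here is a hypothetical sketch in Python (my students work in another language); the class, its names, and the client scenario are invented for illustration, not taken from their projects.

    # A symbol table with a small public interface. The stack of
    # dictionaries is an internal detail clients should not see.
    class SymbolTable:
        def __init__(self):
            self._scopes = [{}]

        def enter_scope(self):
            self._scopes.append({})

        def exit_scope(self):
            self._scopes.pop()

        def define(self, name, info):
            self._scopes[-1][name] = info

        def lookup(self, name):
            for scope in reversed(self._scopes):
                if name in scope:
                    return scope[name]
            return None

        # The moment of design: a client that needs only the names defined
        # in the current scope could reach in with table._scopes[-1].keys(),
        # violating the boundary. Better to move the boundary instead and
        # give the table an operation that says what the client means.
        def names_in_current_scope(self):
            return list(self._scopes[-1].keys())

The client gets what it needs, and the stack of dictionaries remains a private decision the team is still free to change.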


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

November 24, 2013 10:54 AM

Teaching Algorithms in 2014

This spring, I will be teaching the undergraduate algorithms course for the first time in nine years, since the semester before I became department head. I enjoy this course. It gives both the students and me opportunities to do a little theory, a little design, and a little programming. I also like to have some fun, using what we learn to play games and solve puzzles.

Nine years is a long time in computing, even in an area grounded in well-developed theory. I will need to teach a different sort of course. At the end of this entry, I ask for your help in meeting this challenge.

Algorithms textbooks don't look much different now than they did in the spring of 2005. Long-time readers of this blog know that I face the existential crisis of selecting a textbook nearly every semester. Picking a textbook requires balancing several forces, including the value it gives to the instructor, the value it gives to the student during and after the course, and the increasing expense to students.

My primary focus in these decisions is always on net value to the students. I like to write my own material anyway. When time permits, I'd rather write as much as I can for students to read than outsource that responsibility (and privilege) to a textbook author. Writing my lecture notes in depth lets me weave a lot of different threads together, including pointers into primary and secondary sources. Students benefit from learning to read non-textbook material, the sort they will encounter throughout their careers.

My spring class brings a new wrinkle to the decision, though. Nearly fifty students are enrolled, with the prospect of a few more to come. This is a much larger group than I usually work with, and large classes carry a different set of risks than smaller ones. In particular, when something goes wrong in a small section, it is easier to recover through one-on-one remediation. That option is not so readily available for a fifty-person course.

There is more risk in writing new lecture material than in using a textbook that has been tested over time. A solid textbook can be a security blanket as much for the instructor as for the student. I'm not too keen on selecting a security blanket for myself, but the predictability of a text is tempting. There is one possible consolation in such a choice: perhaps subordinating my creative impulses to the design of someone else's textbook will make me more creative as a result.

But textbook selection is a fairly ordinary challenge for me. The real question is: Which algorithms should we teach in this course, circa 2014? Surely the rise of big data, multi-core processors, mobile computing, and social networking requires a fresh look at the topics we teach undergrads.

Perhaps we need only adjust the balance of topics that we currently teach. Or maybe we need to add a new algorithm or data structure to the undergraduate canon. If we teach a new algorithm, or a new class of algorithms, which standard material should be de-emphasized, or displaced altogether? (Alas, the semester is still but fifteen weeks long.)

Please send me your suggestions! I will write up a summary of the ideas you share, and I will certainly use your suggestions to design a better algorithms course for my students.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 21, 2013 3:06 PM

Agile Thoughts, Healthcare.gov Edition

Clay Shirky explains the cultural attitudes that underlie Healthcare.gov's problems in his recent essay on the gulf between planning and reality. The danger of this gulf exists in any organization, whether business or government, but especially in large organizations. As the number of levels grows between the most powerful decision makers and the workers in the trenches, there is an increasing risk of developing "a culture that prefers deluding the boss over delivering bad news".

But this is also a story of the danger inherent in so-called Big Design Up Front, especially for a new kind of product. Shirky oversimplifies this as the waterfall method, but the basic idea is the same:

By putting the most serious planning at the beginning, with subsequent work derived from the plan, the waterfall method amounts to a pledge by all parties not to learn anything while doing the actual work.

You may learn something, of course; you just aren't allowed to let it change what you build, or how.

Instead, waterfall insists that the participants will understand best how things should work before accumulating any real-world experience, and that planners will always know more than workers.

If the planners believe this, or they allow the workers to think they believe this, then workers will naturally avoid telling their managers what they have learned. In the best case, they don't want to waste anyone's time if sharing the information will have no effect. In the worst case, they might fear the results of sharing what they have learned. No one likes to admit that they can't get the assigned task done, however unrealistic it is.

As Shirky notes, many people believe that a difficult launch of Healthcare.gov was unavoidable, because political and practical factors prevented developers from testing parts of the project as they went along and adjusting their actions in response. Shirky hits this one out of the park:

That observation illustrates the gulf between planning and reality in political circles. It is hard for policy people to imagine that Healthcare.gov could have had a phased rollout, even while it is having one.

You can learn from feedback earlier, or you can learn from feedback later. Pretending that you can avoid problems you already know exist never works.

One of the things I like about agile approaches to software development is that they encourage us not to delude ourselves, or our clients. Or our bosses.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading, Software Development

November 19, 2013 4:49 PM

First Model, Then Improve

Not long ago, I read Unhappy Truckers and Other Algorithmic Problems, an article by Tom Vanderbilt that looks at efforts to optimize delivery schedules at UPS and similar companies. At the heart of the challenge lies the traveling salesman problem. In practice, though, the challenge brings companies face-to-face with a bevy of human issues, from personal to social, psychological to economic. As a result, solving the TSP on the ground is more complex than the version we see in the algorithms courses we take in our CS programs.
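For contrast, here is the kind of clean treatment the problem gets in class: a nearest-neighbor heuristic over bare (x, y) points. The instance and the code are my own sketch, for illustration only; none of the human constraints the planners juggle appears anywhere in it.

    import math

    # Nearest-neighbor heuristic for the textbook TSP: always visit the
    # closest unvisited city next. The cities below are a made-up instance.
    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def nearest_neighbor_tour(cities):
        unvisited = list(cities[1:])
        tour = [cities[0]]
        while unvisited:
            nearest = min(unvisited, key=lambda c: distance(tour[-1], c))
            unvisited.remove(nearest)
            tour.append(nearest)
        return tour

    cities = [(0, 0), (2, 3), (5, 1), (6, 4), (1, 5)]
    tour = nearest_neighbor_tour(cities)
    length = sum(distance(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
    print(tour, round(length, 2))

Everything hard about the real problem lives outside a function like this one.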

Yet, in the face of challenges both computational and human, the human planners working at these companies do a pretty good job. How? Over the course of time, researchers figured out that finding optimal routes shouldn't be their main goal:

"Our objective wasn't to get the best solution," says Ted Gifford, a longtime operations research specialist at Schneider. "Our objective was to try to simulate what the real world planners were really doing."

This is a lesson I learned the hard way, too, back in graduate school, when my advisor's lab was trying to build knowledge-based systems for real clients, in chemical engineering, aeronautics, business, and other domains. We were working with real people who were solving hard problems under serious constraints.

At the beginning I was a typically naive programmer, armed with fancy AI techniques and unbounded enthusiasm. I soon learned that, if you walk into a workplace and propose to solve all the people's problems with a program, things don't go as smoothly as the programmer might hope.

First of all, this impolitic approach generally creates immediate pushback. These are people, with personal investment in the way things work now. They tend to bristle when a 20-something grad student walks in the door promoting the wonder drug for all their ills. Some might even fear that you are right, and success for your program will mean negative consequences for them personally. We see this dynamic in Vanderbilt's article.

There's a deeper reason that things don't go so smoothly, though, and it's the real lesson of Vanderbilt's piece. Until you implement the existing solution to the problem, you don't really understand the problem yet.

These problems are complex, often with many more constraints than typical theoretical solutions have dealt with. The humans solving the problem often have many years of experience contributing to their approach. They have deep knowledge of the domain, but also repeated exposure to the exceptions and edge cases that sometimes confound theoretical solutions. They use heuristics that are hard to tease apart or articulate.

I learned that it's easy to solve a problem if you are solving the wrong one.

A better way to approach these challenges is: First, model the existing system, including the extant solution. Then, look for ways to improve on the solution.

This approach often gives everyone involved greater confidence that the programmers understand -- and so are solving -- the right problem. It also enables the team to make small, incremental changes to the system, with a correspondingly higher probability of success. Together, these two outcomes greatly increase the chance of human buy-in from the current workers. This makes it easier for the whole team to recognize the need for larger-scale changes to the process, and to support and contribute to an improved solution.

Vanderbilt tells a similarly pragmatic story. He writes:

When I suggest to Gifford that he's trying to understand the real world, mathematically, he concurs, but adds: "The word 'understand' is too strong--we are happy to get positive outcomes."

Positive outcomes are what the company wants. Fortunately for the academics who work on such problems in industry, achieving good outcomes is often an effective way to test theories, encounter their shortcomings, and work on improvements. That, too, is something I learned in grad school. It was a valuable lesson.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development

November 14, 2013 2:55 PM

Toward A New Data Science Culture in Academia

Fernando Perez has a nice write-up, An Ambitious Experiment in Data Science, describing a well-funded new project in which teams at UC Berkeley, the University of Washington, and NYU will collaborate to "change the culture of universities to create a data science culture". A lot of people have been quoting Perez's entry for its colorful assessment of academic incentives and reward structures. I like this piece for the way Perez defines and outlines the problem, in terms of both data science across disciplines and academic culture in general.

For example:

Most scientists are taught to treat computation as an afterthought. Similarly, most methodologists are taught to treat applications as an afterthought.

"Methodologists" here includes computer scientists, who are often more interested in new data structures, algorithms, and protocols.

This "mirror" disconnect is a problem for a reason many people already understand well:

Computation and data skills are all of a sudden everybody's problem.

(Here are a few past entries of mine that talk about how programming and the nebulous "computational thinking" have spread far and wide: 1 | 2 | 3 | 4.)

Perez rightly points out that open-source software, while imperfect, often embodies the principles of science and scientific collaboration better than the academy does. It will be interesting to see how well this data science project can inject OSS attitudes into big research universities.

He is concerned because, as I have noted before, universities are, as a whole, a conservative lot. Perez says this in a much more entertaining way:

There are few organizations more proud of their traditions and more resistant to change than universities (churches and armies might be worse, but that's about it).

I think he gives churches and armies more credit than they deserve.

The good news is that experiments of the sort being conducted in the Berkeley/UW/NYU project are springing up on a smaller scale around the world. There is some hope for big change in academic culture if a lot of different people at a lot of different institutions experiment, learn, and create small changes that can grow together as they bump into one another.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

November 10, 2013 9:10 AM

You May Be a Teacher If ...

... you wake groggily at 5:30 on a Sunday morning. You lie in bed, half awake, as your mind begins designing a new class session for your compiler course. You never go back to sleep.

Before you rise, you have a new reading assignment, an opening exercise asking your students to write a short assembly language program, and two larger in-class exercises aimed at helping them make a good start on their compiler's run-time system.

This is a thorny topic. It's been bothering you. Now, you have a plan.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

November 09, 2013 12:25 PM

An Unusual Day

My university is hosting an on-campus day today to recruit high school and transfer students. On a day like this, I usually visit with one or two potential majors and chat with one or two others who might be interested in a CS or programming class. All are usually men.

Today was unusual.

Eight people visited the department to learn about the major.

I spoke with three people who intend to major in other areas, such as accounting and physics, and want to take a minor in CS.

I spoke with a current English major here who is set to graduate in May but is now thinking about employability and considering picking up a second degree in CS.

I spoke with three female students who are interested in CS. These include the English major and a student who has taken several advanced math courses at a good private school nearby, really likes them, and is thinking of combining math and CS in a major here.

The third is a high school freshman who has taken all the tech courses available at her school, helps the tech teacher with the school's computers, and wants to learn more. She told me, "I just think it would be cool to write programs and make things happen."

Some recruiting days are better than others. This is one.


Posted by Eugene Wallingford | Permalink | Categories: General

November 04, 2013 2:41 PM

Those Silly Tests

I love this passage by Mark Dominus in Overlapping Intervals:

This was yet another time when I felt slightly foolish as I wrote the automated tests, assuming that the time and effort I spent on testing this trivial function would be time and effort thrown away on nothing -- and then they detected a real fault. Someday perhaps I'll stop feeling foolish writing tests for functions like this one; until then, many cases just like this one will help me remember that I must write the tests even though I feel foolish doing it.

Even excellent programmers feel silly writing tests sometimes. But they also benefit from writing them. Dominus was saved here by his test-writing habit, or by his sense of right and wrong.
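Here is a sketch in the spirit of Dominus's example, though not his code: a "trivial" predicate for whether two closed intervals overlap, plus the kind of tiny tests that feel foolish to write until one of them catches a reversed comparison or an off-by-one at the boundary.

    def overlaps(a, b):
        """Return True if closed intervals a and b share at least one point."""
        (a_lo, a_hi), (b_lo, b_hi) = a, b
        return a_lo <= b_hi and b_lo <= a_hi

    # The obvious cases and the boundary cases where bugs like to hide.
    assert overlaps((1, 5), (4, 9))        # plain overlap
    assert overlaps((1, 5), (5, 9))        # touching at an endpoint
    assert not overlaps((1, 5), (6, 9))    # disjoint
    assert overlaps((1, 9), (3, 4))        # one contains the other
    assert overlaps((3, 4), (1, 9))        # ... in either order
    print("all tests pass")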

Helping students develop that habit or that moral sense is a challenge. Even so, I rarely come across a situation where my students or I write or run too many tests. I regularly encounter cases where we write or run too few.

Dominus's blog entry also contains a great passage on a larger lesson from that coding experience. In the end, his clever solution to a tricky problem results not from "just thinking" but from deeper thought: from "applying carefully-learned and practiced technique". That's an important form of thinking, too.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning