TITLE: First Model, Then Improve
AUTHOR: Eugene Wallingford
DATE: November 19, 2013 4:49 PM
DESC:
-----
BODY:
Not long ago, I read Unhappy Truckers and Other Algorithmic Problems, an article by Tom Vanderbilt that looks at efforts to optimize delivery schedules at UPS and similar companies. At the heart of the challenge lies the traveling salesman problem. In practice, though, the challenge brings companies face-to-face with a bevy of human issues, from the personal and social to the psychological and economic. As a result, solving the TSP in these settings is more complex than the version we see in the algorithms courses of our CS programs. Yet, in the face of challenges both computational and human, the human planners at these companies do a pretty good job. How? Over time, researchers figured out that finding optimal routes shouldn't be their main goal:
"Our objective wasn't to get the best solution," says Ted Gifford, a longtime operations research specialist at Schneider. "Our objective was to try to simulate what the real world planners were really doing."
This is a lesson I learned the hard way, too, back in graduate school, when my advisor's lab was trying to build knowledge-based systems for real clients, in chemical engineering, aeronautics, business, and other domains. We were working with real people who were solving hard problems under serious constraints. At the beginning I was a typically naive programmer, armed with fancy AI techniques and unbounded enthusiasm. I soon learned that, if you walk into a workplace and propose to solve all the people's problems with a program, things don't go as smoothly as the programmer might hope.

First of all, this impolitic approach generally creates immediate pushback. These are people, with personal investment in the way things work now. They tend to bristle when a 20-something grad student walks in the door promoting the wonder drug for all their ills. Some might even fear that you are right, and that success for your program will mean negative consequences for them personally. We see this dynamic in Vanderbilt's article.

There's a deeper reason that things don't go so smoothly, though, and it's the real lesson of Vanderbilt's piece. Until you implement the existing solution to the problem, you don't really understand the problem yet. These problems are complex, often with many more constraints than typical theoretical solutions account for. The humans solving the problem often have many years of experience contributing to their approach. They have deep knowledge of the domain, but also repeated exposure to the exceptions and edge cases that sometimes confound theoretical solutions. They use heuristics that are hard to tease apart or articulate. I learned that it's easy to solve a problem if you are solving the wrong one.

A better way to approach these challenges is this: first, model the existing system, including the extant solution; then, look for ways to improve on it. This approach often gives everyone involved greater confidence that the programmers understand -- and so are solving -- the right problem. It also enables the team to make small, incremental changes to the system, with a correspondingly higher probability of success. Together, these two outcomes greatly increase the chance of buy-in from the current workers. This makes it easier for the whole team to recognize the need for larger-scale changes to the process, and to support and contribute to an improved solution.
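To make that workflow concrete, here is one hedged sketch of its shape in code. The names -- baseline_plan, candidate_plan, cost, scenarios -- are hypothetical stand-ins, not anything from the article: baseline_plan models what the planners do today, and a candidate earns trust only by beating it on the same recorded scenarios.

    def total_cost(plan, scenarios, cost):
        """Score a planning strategy against recorded, real-world scenarios."""
        return sum(cost(plan(s), s) for s in scenarios)

    def compare(baseline_plan, candidate_plan, scenarios, cost):
        """Trust the candidate only where it beats the modeled status quo."""
        base = total_cost(baseline_plan, scenarios, cost)
        cand = total_cost(candidate_plan, scenarios, cost)
        return {"baseline": base, "candidate": cand, "improved": cand < base}

Small, incremental candidates fit naturally into a loop like this, which is part of why the approach earns buy-in: everyone can see the status quo being taken seriously before anything changes.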
Vanderbilt tells a similarly pragmatic story. He writes:

When I suggest to Gifford that he's trying to understand the real world, mathematically, he concurs, but adds: "The word 'understand' is too strong--we are happy to get positive outcomes."
Positive outcomes are what the company wants. Fortunately for the academics who work on such problems in industry, achieving good outcomes is often an effective way to test theories, encounter their shortcomings, and work on improvements. That, too, is something I learned in grad school. It was a valuable lesson.
-----