When easy math turns hard
By Robert Pool
Some problems cannot be solved in reasonable time. Just which ones is a question for the field of computational complexity.
In 1959, a 24-year-old mathematician with a freshly minted Ph.D. from the Harvard Computation Laboratory took a job at IBM's Yorktown Heights Research Center. The young man, who would go on to become one of the pioneers of the emerging field of computer science and eventually be awarded the prestigious Turing Award for his contributions to the area, was assigned to a group trying to automate the design of computer switching circuits. Their goal was to write a computer program that, given a particular logic statement, would find a circuit that encoded the statement using a minimum number of transistors and other logic elements.
"The program we designed contained many elegant shortcuts and refinements," the mathematician, Richard Karp, recalled in his 1985 Turing Award lecture, "but its fundamental mechanism was simply to enumerate the possible circuits in order of increasing cost." The approach seemed reasonable enough at the time, but, as Karp and his co-workers would discover, it had a fatal flaw:
"The number of circuits that the program had to comb through grew at a furious rate as the number of input variables increased, and, as a consequence, we could never progress beyond the solution of toy problems."
Karp, who spent a decade at IBM before moving to the University of California at Berkeley, had learned firsthand the pitfalls of trying to solve a problem on the computer by simply plunging in. If the problem is small and uncomplicated, such as designing switching circuits for short, simple logic statements, it can be handled successfully in a variety of ways, and the approach one chooses is not critical. But as problems grow larger and more complex, a clever technique can make the difference between finding a solution and endlessly spinning one's wheels. And in some cases, as Karp and others were surprised to discover, it does not seem to matter how clever one is: as the problem grows in number of variables or in complexity, the time it takes to solve increases exponentially, demanding years, decades, or longer on even the most powerful computers.
Believing that they needed a good theoretical understanding of such phenomena before they could use a computer effectively, Karp and a number of his contemporaries set out to study computation in great detail. In the process, they created a collection of fast, efficient algorithms (patterned series of steps for solving a particular sort of problem) and calculated how quickly or slowly their algorithms could be expected to work. The field they invented is now called computational complexity. It is the creation of hundreds, if not thousands, of computer scientists over the past several decades, and it underlies much of modern computer work. The algorithms that theoreticians develop in their basic research are regularly put to work in software for tasks such as designing telephone networks or scheduling flights for airlines, and a theoretical understanding of when and how problems grow computationally difficult helps software developers avoid the sort of intractability that caught Karp by surprise in 1959.
In spite of this success, or perhaps because of it, a growing number of researchers have begun in recent years to look outside the traditional bounds of computational complexity in the hope that new approaches will yield new insights and, eventually, more useful computational techniques. A number of computer scientists have, for instance, studied the computation of approximate solutions to problems that are too difficult to solve exactly, whereas traditionally researchers had limited themselves to questions about the computation of exact answers.
Among these new paradigms, two of the most intriguing are being carried forward in the same halls where Richard Karp once struggled to automate circuit design. At IBM's Thomas J. Watson Research Center, Scott Kirkpatrick and Mike Shub are each working on programs of study that, if successful, could help shape the agenda for computational theory for the next several decades. Although the two have different visions about where the field should go, they are united in their sense that the future of computation demands fresh theoretical approaches.
Trained as a physicist, Kirkpatrick says he tries to apply insights and concepts from physics to problems in computer science. Although the two fields might seem to have little in common, Kirkpatrick, in collaboration with Bart Selman at Cornell University, has found surprising parallels that promise to make both fields richer.
Consider phase transitions, a standard concept in thermodynamics and other areas of physics. When water freezes or boils, physicists refer to that transformation as a phase transition. The gas, liquid, and solid phases of water or another material are relatively easy to characterize, but the behavior of the material while the phase is changing is much harder to understand. Kirkpatrick has found that many problems arising in computer science exhibit a similar pattern: they are easy to solve as long as they are in one "phase" or another, but they become exceptionally difficult on the cusp between two phases.
Among the problems that exhibit such a transition is the classic satisfiability (SAT) problem, which arises in practical situations such as airline scheduling but which is better known in its guise as a party-list puzzle. Mary wants to hold a party, but she has to be careful about whom she invites and whom she leaves out. Jack and Ted don't get along, so she has to leave one of them off the list. If she invites Jack, she must also invite his girlfriend, Jill. She doesn't want to ignore Ted's family altogether, so she must make sure to send an invitation either to Ted or to his brother, Tim. And so on. If there are only a few friends to consider, Mary can simply list all the possible combinations (Jack, yes; Ted, no; Jill, yes; and so on) and look for one that satisfies all the constraints. But the number of combinations grows quickly. With 10 friends, more than a thousand combinations are possible; with 20, more than a million; and with 30, more than a billion. So if Mary is looking to host a large party, a simpleminded search approach will settle on a guest list only after most of her friends have long forgotten her.
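To see how quickly that brute-force approach blows up, here is a minimal sketch in Python of Mary's puzzle. The encoding of the constraints is my own, but the strategy is exactly the enumerate-everything search the article describes, checking all 2^n yes/no combinations:

```python
from itertools import product

# Mary's constraints, each written as a test on a proposed guest list
# (a dict mapping each name to True for "invited").
guests = ["Jack", "Ted", "Jill", "Tim"]
constraints = [
    lambda g: not (g["Jack"] and g["Ted"]),  # Jack and Ted don't get along
    lambda g: not g["Jack"] or g["Jill"],    # inviting Jack means inviting Jill
    lambda g: g["Ted"] or g["Tim"],          # invite Ted or his brother Tim
]

def find_guest_list(guests, constraints):
    # Enumerate all 2**n invite/don't-invite combinations; with 30
    # friends that is already more than a billion lists to check.
    for choices in product([True, False], repeat=len(guests)):
        proposal = dict(zip(guests, choices))
        if all(ok(proposal) for ok in constraints):
            return proposal
    return None  # no guest list satisfies every constraint

print(find_guest_list(guests, constraints))
```

With four guests the loop inspects at most 16 combinations; each additional friend doubles that count, which is the "furious rate" of growth Karp ran into.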
Fortunately for Mary, computer scientists years ago came up with an algorithm that will crank out an acceptable guest list, or else prove that it can't be done, in reasonable time. The only catch is that Mary's constraints (such as "invite Ted or Tim") must involve only two people at once. If she wants to specify constraints with three people at a time ("invite Ted, or invite Tim, or leave Jill out"), she has moved from what computer scientists call a 2-SAT problem to a 3-SAT problem and has unwittingly crossed a boundary into an area where problems cannot always be solved in a reasonable amount of time.
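The article does not spell out the fast algorithm, but the standard linear-time method (due to Aspvall, Plass and Tarjan) turns each two-person constraint into a pair of implications and examines the strongly connected components of the resulting "implication graph." Below is a sketch; the integer encoding of literals is my own convention:

```python
def solve_2sat(n_vars, clauses):
    """Decide a 2-SAT instance in linear time via its implication graph.

    Literals are integers: 2*v is variable v, 2*v + 1 is its negation,
    so lit ^ 1 flips a literal.  Each clause (a OR b) contributes the
    implications (not a -> b) and (not b -> a).
    """
    n = 2 * n_vars
    graph = [[] for _ in range(n)]
    rgraph = [[] for _ in range(n)]
    for a, b in clauses:
        for u, w in ((a ^ 1, b), (b ^ 1, a)):
            graph[u].append(w)
            rgraph[w].append(u)

    # Kosaraju pass 1: record vertices in order of DFS finish time.
    visited, order = [False] * n, []
    for start in range(n):
        if visited[start]:
            continue
        visited[start] = True
        stack = [(start, iter(graph[start]))]
        while stack:
            node, it = stack[-1]
            for nxt in it:
                if not visited[nxt]:
                    visited[nxt] = True
                    stack.append((nxt, iter(graph[nxt])))
                    break
            else:
                order.append(node)
                stack.pop()

    # Pass 2: label strongly connected components on the reverse graph,
    # discovered in topological order of the condensation.
    comp = [-1] * n
    label = 0
    for start in reversed(order):
        if comp[start] != -1:
            continue
        comp[start] = label
        stack = [start]
        while stack:
            for nxt in rgraph[stack.pop()]:
                if comp[nxt] == -1:
                    comp[nxt] = label
                    stack.append(nxt)
        label += 1

    # A variable in the same component as its own negation means the
    # constraints are contradictory: no guest list exists.
    if any(comp[2 * v] == comp[2 * v + 1] for v in range(n_vars)):
        return None
    # Otherwise, set a variable True when its positive literal comes
    # later than its negation in the topological order.
    return [comp[2 * v] > comp[2 * v + 1] for v in range(n_vars)]

# Mary's puzzle: variables 0..3 are Jack, Ted, Jill, Tim.
clauses = [(1, 3),  # leave out Jack or leave out Ted
           (1, 4),  # leave out Jack or invite Jill
           (2, 6)]  # invite Ted or invite Tim
print(solve_2sat(4, clauses))  # one invite/skip decision per guest, or None
```

The crucial point is that the work grows only in proportion to the number of guests and constraints, with no exhaustive enumeration anywhere.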
There is no known algorithm (and, computer scientists believe, no possible algorithm) that can guarantee a solution to a 3-SAT problem in a reasonable amount of time. Here, "reasonable" means that the problem does not get too hard too quickly as the number of people under consideration grows. For the 2-SAT problem, the time it takes to get an answer is proportional to the number of people. Double the number of people under consideration, and it may take twice as long to find a guest list, but no longer. In the case of 3-SAT, however, the time needed to solve the problem can grow exponentially as the number of potential guests grows. A modest-sized guest list could require the rest of Mary's life to work through all the options, even with the best algorithm on the fastest supercomputer. (This is actually a worst-case scenario. Most 3-SAT problems can be solved fairly easily, and only a small percentage are too difficult to handle. But computer scientists have traditionally looked for algorithms that solve all instances of a given type of problem, not just some or most of them.)
Computer scientists group the 2-SAT problem and others that can always be solved in a reasonable time into a class called P. The "P" stands for polynomial time, which refers to the fact that as the number of variables in such a problem increases, the time it takes to solve the problem increases in proportion to the number of variables, perhaps raised to some power: squared, cubed, and so forth. By contrast, the 3-SAT problem, along with many other apparently intractable problems, seems to become exponentially more difficult as the number of variables increases. Computer scientists call such problems NP-complete. As far as anyone can tell, it is impossible to develop an algorithm that will always solve an NP-complete problem in reasonable time, although no one has been able to prove that such algorithms do not exist. What is known is that if an efficient algorithm (that is, one that works in an amount of time proportional to some power of the number of variables) can be found for just one NP-complete problem, such as 3-SAT, it will point the way to efficient algorithms for all the rest. Fifteen years ago in his Turing lecture, Karp pointed to this question about NP-complete problems as the major unsolved issue in computational complexity, and today it remains unsettled.
So Kirkpatrick has approached the issue from a completely different direction. Instead of looking for algorithms, he studies exactly why and how some problems become intractable as variables increase. Several years ago, for example, he and Selman analyzed how the difficulty of solving a party-list problem depended on the number of potential guests and the number of constraints. Not surprisingly, they found, when there are many potential guests and few constraints, many different lists will work and it's easy to find one. Conversely, when there are few guests and many constraints, there is generally no list that will meet all the constraints, and that, too, is usually easy to prove. The surprise is what happens between the two extremes, as the ratio of constraints to guests moves from low to high. As Kirkpatrick and Selman showed in a paper published in Science, the number of solutions does not decrease gradually, as one might expect, but instead drops off as sharply as if someone had flipped a switch. This sudden shift, Kirkpatrick says, has many similarities to a phase transition in physics.
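One can watch the switch flip in miniature by generating random 3-SAT instances at increasing constraint-to-guest ratios and asking what fraction remain solvable. The sketch below brute-forces tiny instances, so it is only suggestive; the instance sizes and trial counts are arbitrary choices of mine:

```python
import random
from itertools import product

def random_3sat(n_vars, n_clauses, rng):
    # Each clause picks three distinct variables and negates each with
    # probability 1/2; (v, True) means the plain, unnegated variable.
    return [[(v, rng.random() < 0.5) for v in rng.sample(range(n_vars), 3)]
            for _ in range(n_clauses)]

def satisfiable(n_vars, clauses):
    # Brute-force check over all assignments, so keep n_vars small.
    return any(
        all(any(bits[v] == sign for v, sign in clause) for clause in clauses)
        for bits in product([False, True], repeat=n_vars))

rng = random.Random(0)
n = 10
for ratio in (2.0, 3.0, 4.0, 4.3, 5.0, 6.0):
    trials = 100
    solved = sum(satisfiable(n, random_3sat(n, int(ratio * n), rng))
                 for _ in range(trials))
    print(f"clauses/variables = {ratio}: {solved}% satisfiable")
```

With only 10 variables the drop is smeared out, but it sharpens as instances grow; for random 3-SAT the crossover settles near a clause-to-variable ratio of roughly 4.3, and instances near that ratio are also the slowest to decide.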
Most problems within this transitional area are difficult to solve, Kirkpatrick says. "At the phase boundary there are lots of solutions, but they're far apart and hard to find." There might, for instance, be several dozen solutions scattered among many billions of combinations of variables, which must be combed through one at a time to find one that solves the problem.
The important point, Kirkpatrick explains, is that one knows where most of the difficult problems lie. As long as one stays outside the transition area, NP-complete problems are almost always easy to solve. By contrast, problems within the transition area almost always defy quick resolution. Knowing ahead of time which problems are likely to be intractable has great practical value, Kirkpatrick notes, because it allows someone to anticipate and sidestep difficulties. If a problem lies in a part of the phase diagram where solutions are easy to find, one can apply an algorithm to the problem with confidence that it will work. If the problem lies in the transitional area, however, an algorithm may well not be able to find an answer, at least not in a reasonable time. "You want to know as quickly as possible when you're trying to do the impossible," Kirkpatrick says. In such cases, a person can settle for an approximate solution instead. For a party puzzle, this would mean removing a few of the constraints (go ahead and invite both Jack and Ted, and hope they don't make a scene) and finding an invitation list that satisfies the modified set of constraints.
But the existence of this phase transition is only part of the explanation for why 3-SAT problems become intractable, Kirkpatrick says. The 2-SAT problem has a similar sort of transition, yet problems at its boundary do not become unsolvable. To understand the difference, he and several collaborators took a closer look at what happens at the phase boundary.
Their analysis, published last summer in Nature, concluded that SAT problems behave in a way strikingly similar to spin glasses, a type of magnetic material much analyzed by physicists. In a spin glass, each atom behaves as a tiny bar magnet that can point up or down and tries to align itself with other nearby atoms, pointing in either the same or the opposite direction. In particular, Kirkpatrick and his collaborators found that at the transition point in 3-SAT problems, but not in 2-SAT problems, there is an abrupt appearance of "backbone" sets: groups of conditions that must be met in every solution. Any party list that will solve the problem must include Jim, say, and exclude Tim and Jill. The same clustering appears in spin glasses. There, it is not possible for all pairs of atoms to have their spins aligned in the way they would prefer, so the material settles into a state with a minimum number of frustrated atoms; in general, many of these low-energy states exist, and they share a backbone in which certain groups of spins are always rigidly aligned with respect to one another.
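The notion is easy to make concrete on a small scale: enumerate every solution of a toy instance and record which choices never vary. The instance in this sketch is invented for illustration:

```python
from itertools import product

def solutions(n_vars, clauses):
    # All satisfying assignments of a small CNF formula, where a clause
    # is a list of (variable, sign) pairs and sign=True means unnegated.
    return [bits for bits in product([False, True], repeat=n_vars)
            if all(any(bits[v] == s for v, s in clause) for clause in clauses)]

def backbone(n_vars, sols):
    # Variables frozen to the same value in every single solution.
    return {v: sols[0][v] for v in range(n_vars)
            if all(sol[v] == sols[0][v] for sol in sols)}

# Toy instance: the one-literal clause forces variable 0 True everywhere,
# so variable 0 is backbone; variables 1 and 2 are free to vary.
clauses = [[(0, True)], [(0, False), (1, True), (2, True)]]
print(backbone(3, solutions(3, clauses)))  # {0: True}
```

In real transition-region instances, of course, no one hands you the solution set; the backbone is what makes that set so expensive to map out in the first place.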
It is this clustering of solutions that makes them much harder to find with standard search algorithms, Kirkpatrick says. Instead of a reasonable number of solutions scattered about almost randomly, so that one has some hope of stumbling across one of them, the solutions are concentrated in only a few places, and an algorithm might have to look through most of the solution space before zeroing in on the area where many solutions are clustered. The algorithm might, for instance, spend a great deal of time checking party lists that include Jill when in fact all of the solutions to the puzzle leave her off.
The sudden appearance of these backbones at the transition point seems to explain the intractability of 3-SAT problems, Kirkpatrick says, but recognizing their existence may also point the way to more efficient algorithms. By first identifying the backbone set and then working out which of the remaining guests should be invited or left out, it might be possible to settle on a party list (perhaps an approximate one, with a few conditions not met) much faster than with current methods. Since other NP-complete problems also sport backbones, this could eventually point to a general method for improving algorithms for these refractory problems. Indeed, Kirkpatrick says, these ideas are already being used in solving such things as large "traveling salesman" problems, in which the object is to find the shortest route through a number of cities or points in a network. "The results are quite encouraging," he says, "but I can't claim that we are in all the textbooks yet."
In another Watson office a stone's throw from Kirkpatrick's, Shub is tackling computation from a very different angle. Traditionally, he notes, computer scientists have focused on algorithms dealing with discrete numbers: that is, with sets of numbers that are not continuous but rather have gaps or jumps between one number and the next. The set of integers, or whole numbers, is discrete, as is the simple Boolean set consisting of two values, 0 and 1, or true and false, or yes and no. There is, of course, good reason for this focus. Many of the problems that arise in computer science are by their nature discrete, such as the satisfiability problem with its "Jack, yes; Ted, no; Jill, yes" types of solutions. Furthermore, today's digital computers are discrete machines, representing data as strings of 0s and 1s.
But as Shub points out, physicists, chemists, engineers and other physical scientists confront problems of a very different sort. Computing the trajectory of an asteroid, determining the three-dimensional structure of a protein molecule, calculating the maximum load for a bridge: all of these are continuous in nature. According to the models that scientists use to represent reality, things like space, time and mass can be divided into smaller and smaller pieces without ever losing their smooth quality. Thus physical scientists think in terms of the real numbers (all the numbers on a continuous number line) instead of sets of discrete numbers, such as the Boolean set or the set of whole numbers. Over the years, scientists and mathematicians have developed their own algorithms to find solutions to such continuous problems. One of the best known is Newton's method, a technique that every calculus student learns for finding where the value of a function is equal to zero. But there are countless others. In simple cases, the algorithms can be carried out with pencil and paper; the more complex calculations are typically carried out on a computer.
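As a reminder of what such an algorithm looks like, here is Newton's method in a few lines of Python; the example function, starting point and tolerance are my own choices:

```python
def newton(f, df, x, tol=1e-12, max_iter=50):
    # Slide down the tangent line, x <- x - f(x)/f'(x), until f(x) is
    # numerically zero.  Convergence depends on a decent starting point.
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    raise RuntimeError("Newton's method failed to converge")

# The square root of 2, found as the positive zero of f(x) = x^2 - 2.
print(newton(lambda x: x * x - 2, lambda x: 2 * x, x=1.0))  # 1.41421356...
```

Note that the update rule leans on the slope df, a thoroughly continuous notion; that dependence is exactly what Blum's remark below is about.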
This field, called numerical analysis, represents "a whole other culture of computing," says Lenore Blum, a mathematician at Carnegie Mellon University and a collaborator of Shub's. Unlike the classical discrete theory of computation, numerical analysis has historically lacked a theoretical foundation for analyzing its various algorithms to see how well they work and how much computer time they demand as the problems get larger. "Plenty of books will give you algorithms for continuous problems," Blum says, but there has been no theoretical framework for understanding such practical issues as how quickly a solution will be found or how likely a problem is to prove intractable.
It is not enough, Blum says, simply to apply results from standard computational complexity to algorithms for continuous problems, even though, as a practical matter, continuous numbers are always approximated by discrete numbers when the calculations are performed on a computer. "If you tried to reduce Newton's method to a discrete model," she says, "you would lose all the structure." The algorithm, which involves calculating the slope of the function being analyzed, depends in a fundamental way on the continuous nature of the real numbers, and no discrete algorithm could capture its essence. "We need a model of computation that can allow the underlying domain to be the real numbers."
And that's what she and Shub, with a couple of international collaborators, have set out to do. In articles and a book written with Steve Smale of the City University of Hong Kong and Felipe Cucker of Universitat Pompeu Fabra in Barcelona (building on work done by IBM Fellow Shmuel Winograd and others), they have laid the groundwork for a theory of computational complexity for continuous instead of discrete calculations. The work began in the late 1980s, when Blum and Smale were academic visitors at Watson. "Our objective has been to study problems from the point of view of analyzing the total cost of solving them," Shub says. In essence, they are setting out to do for continuous computation what Karp and his colleagues did for discrete computation 40 years ago: establish a rigorous theoretical basis for understanding how well algorithms work and what their ultimate limits might be. And just as the study of computational complexity has led to new and improved algorithms for solving discrete problems, Shub and his co-workers believe that their program too may someday underlie advances in the computation of answers to problems involving the real numbers.
They can point to some early successes that would not have been possible with traditional computational complexity. One achievement involves the Mandelbrot set, discovered at Watson by Benoit Mandelbrot, the father of fractal geometry. Perhaps the best-known example of a fractal, the set (when seen in its entirety) resembles a chubby valentine with tiny balls clinging to its sides like lint, but zooming in on the set's boundary reveals an infinitely complex outline. No matter how great the magnification (10, a thousand, a million times or more), the intricate boundary looks very much the same. This self-similarity is the hallmark of a fractal, and it also makes the Mandelbrot set difficult to analyze with discrete numbers. One of the most basic questions concerning the Mandelbrot set is whether some algorithm could determine if a point near the boundary of the set lies inside the set or outside it. Using continuous computational theory, Shub and his colleagues have proved that the answer is no: no algorithm can make that determination in a finite amount of time for every point. It is a question that had never been answered with the standard discrete computational theory. Indeed, Shub says, it had proved difficult even to know how to ask the question as long as the computer was assumed to be working with discrete numbers.
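The membership test itself is easy to state: a complex number c belongs to the set if the iteration z <- z*z + c, started at z = 0, never runs off to infinity. The catch, which the impossibility result formalizes, is the asymmetry visible in this little sketch (the iteration budget and sample points are arbitrary): a finite computation can certify that a point escapes, but no fixed budget can certify that a stubborn boundary point is truly inside.

```python
def escape_time(c, max_iter=1000):
    # Iterate z <- z*z + c from z = 0.  Once |z| > 2 the orbit must
    # diverge, so the point is certainly outside the Mandelbrot set.
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i
    return None  # never escaped within the budget: only *probably* inside

print(escape_time(0.5 + 0.5j))   # a small number: provably outside
print(escape_time(-0.1 + 0.1j))  # None: looks inside, but with no finite proof
```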
Shub's group has also developed an algorithm to find a solution to a system of polynomial equations, the sort of equations that turn up regularly in different areas of science. The algorithm, a generalization of Newton's method, is much faster than previous methods and will always find a solution in a reasonable amount of time except for a few isolated cases. Perhaps more important, the researchers have analyzed the algorithm and can say with precision how much computing time it will take to reach a solution. This is something new for algorithms that deal with continuous numbers.
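The group's actual algorithm is beyond a short sketch, but its engine is the Newton step generalized to several variables: at each iteration, solve a linear system involving the Jacobian matrix. Below is a bare-bones version of that step, with an invented two-equation example; this is plain multivariate Newton, not the researchers' method itself:

```python
import numpy as np

def newton_system(F, J, x, tol=1e-12, max_iter=50):
    # Multivariate Newton: solve J(x) dx = -F(x), then update x <- x + dx.
    x = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            return x
        x = x + np.linalg.solve(J(x), -fx)
    raise RuntimeError("no convergence from this starting point")

# Invented example system: x^2 + y^2 = 4 and x*y = 1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]],   # Jacobian of F
                        [v[1], v[0]]])
print(newton_system(F, J, [2.0, 0.5]))  # about [1.932, 0.518]
```

Plain Newton iteration like this can stall or diverge from a bad starting point; the advance the article describes is precisely in controlling where such steps are taken so that, outside a few isolated cases, a solution is reached in predictable time.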
In a more theoretical vein, the group has set out to analyze continuous algorithms in a way that parallels computational complexity theory for discrete numbers. They, too, define two classes of problems, P (problems that can be solved in a reasonable amount of time) and NP (problems whose solutions can be checked in a reasonable amount of time), and have developed a continuous version of NP-complete problems, such as solving a system of equations involving variables raised to the fourth power. As with classical computational complexity theory, Shub and colleagues believe that their own NP-complete problems will prove to be intractable; that is, it will be impossible to find an algorithm that will solve every instance of such a problem in a reasonable amount of time. On the other hand, Shub says, for many types of problem it will probably be possible to find algorithms that work on all but a few cases.
Eventually, Shub says, the work he and the others are doing might even help crack some of the problems in traditional computational complexity research. The theorems they prove for continuous numbers often correspond to theorems for discrete numbers (indeed, Shub says, they often look to the discrete realm for guidance), so their results might point to some promising new approaches in discrete computation.
For now, neither Shub's nor Kirkpatrick's work has advanced much beyond theory, so it is impossible to know to what extent it will find practical applications. Still, if the past 40 years of research on computational complexity tells us anything, it is that a better theoretical understanding of computation will eventually pay off in better algorithms, a better grasp of when they can be used profitably, and an improved ability to solve many of the problems that computers are asked to solve.
Robert Pool is a freelance science and technology writer based in Tallahassee, Florida.
His most recent book is Beyond Engineering: How Society Shapes Technology.