Strategic Directions for
Research in
Theory of Computing

September 23, 1996

Anne Condon, University of Wisconsin
Faith Fich, University of Toronto
Greg N. Frederickson, Purdue University
Andrew V. Goldberg, NEC Research Institute
David S. Johnson, AT&T Bell Laboratories
Michael C. Loui, University of Illinois at Urbana-Champaign
Steven Mahaney, DIMACS
Prabhakar Raghavan, IBM Almaden Research Center
John Savage, Brown University
Alan Selman, SUNY at Buffalo
David B. Shmoys, Cornell University

Abstract. This report focuses on two core areas of theory of computing: discrete algorithms and computational complexity theory. The report reviews the purposes and goals of theoretical research, summarizes selected past and recent achievements, explains the importance of sustaining core research, and identifies promising opportunities for future research. Some research opportunities build bridges between theory of computing and other areas of computer science, and other science and engineering disciplines.

Categories and Subject Descriptors: F.0 [Theory of Computation]: General, F.1.0 [Computation by Abstract Devices]: General, F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems.

General Terms: Algorithms, Experimentation, Theory, Verification

I. Introduction

Theory of computing is the scientific study of the fundamental nature of computation. The goal is to determine what can be computed efficiently: for specific computational problems, researchers endeavor to devise efficient algorithms or to prove that no efficient algorithm exists. Theoretical research yields concepts and principles that provide the scientific basis for the design and construction of computer software and hardware, just as linear system theory provides the basis for the design of an aircraft's control systems.

This report focuses on two core areas of theoretical computer science, namely, discrete algorithms and computational complexity theory. Other reports from the Strategic Directions workshop address theoretical research in computational geometry, in concurrency, in programming language semantics, and in formal methods.

Research in theory of computing began in the 1930s, when investigations by logicians produced computability theory. With the emergence of higher level languages in the 1950s, attention turned to the parsing and compiling of programs, leading to studies of automata and formal languages. During the 1960s and 1970s, researchers developed fundamental data structures and discrete algorithms, and the NP-completeness concept. All of these topics occupy a central place in computer science curricula, for they are essential for the education of computing professionals.

The scope of theory of computing has changed over the last sixty years, and it continues to change. Within the core areas of algorithms and complexity theory, research has branched into new directions. Section III.A summarizes several significant achievements of this research and explains the importance of sustaining core research in theory of computing.

Furthermore, concepts and techniques from theory of computing have been applied successfully to problems in other areas of computer science, and in other science and engineering disciplines. Examples appear in Section III.B. We hope that this report will encourage researchers outside theory of computing to collaborate with theoreticians on problems of mutual interest.

II. Purposes, Goals, and Methods

II.A. Scientific Goals

Theoretical computer scientists seek to understand computational phenomena, the expressibility of languages, the design and performance of algorithms, and general limits on computation. Thus, they ask what computation is, what can be computed, how it can be done, and at what cost. In this quest, they use formal models, methods of analysis, and some experimentation. They learn from and contribute to practice. Finally, they seek to expand the core activities of the field to better address hard computational problems.

II.B. Core Activities

II.B.1. Models of Computation

Models of languages and computers are developed for three purposes: to discover fundamental laws of computation, to understand specific computational phenomena, and to support the design and analysis of algorithms. An enormous variety of models has been formulated, frequently for more than one of these purposes. General models of computation, developed to study fundamental laws, include the Turing machine, the finite automaton, the logic circuit, and recently, the quantum computer. Ad hoc models, defined to understand specific phenomena, include memory hierarchy models for the study of caching. Models for algorithm design and analysis include the relational database model, the parallel random access machine (PRAM), and networks of synchronous or asynchronous, closely or remotely coupled computers.

II.B.2. Design and Analysis of Algorithms

Algorithms are designed for basic discrete computational problems, such as sorting large files and determining shortest paths in networks. Examples of basic problems that have recently yielded optimal (linear-time) algorithms are finding a minimum spanning tree, finding a polygon triangulation, and finding a shortest path tree in a planar graph. Frequently the design of a new algorithm involves new data structures, which find applications in other algorithms. For example, the heap was introduced as part of a sorting algorithm. Subsequently, heaps and their variants have been cornerstones of many sophisticated algorithms.
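As a minimal illustration of how a data structure introduced for one algorithm becomes a general tool, the following Python sketch (offered here purely for illustration, not drawn from the report) sorts a list with a binary heap; the same structure underlies priority queues in shortest-path and minimum-spanning-tree algorithms.

    import heapq

    def heap_sort(items):
        """Sort by building a binary heap and repeatedly extracting
        the minimum; O(n log n) comparisons overall."""
        heap = list(items)
        heapq.heapify(heap)          # build the heap in O(n) time
        return [heapq.heappop(heap) for _ in range(len(heap))]

    print(heap_sort([5, 3, 8, 1, 9, 2]))   # [1, 2, 3, 5, 8, 9]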

The performance of algorithms is assessed through both analysis and experimentation, which have complementary strengths and weaknesses. Asymptotic analysis, the study of algorithmic performance on large problem instances, helps explain the inherent complexity of a problem. Some simple algorithms have been shown to be efficient only through deep mathematical analysis. Algorithmic performance for a given application is often evaluated through experimentation on representative data sets.

II.B.3. Fundamental Laws of Computation

Just as engineers need to understand laws of physics, computer scientists need to understand the laws of computation to make best use of computing technology. Some fundamental laws are general simulations that show how to implement algorithms designed on abstract models. For example, PRAM algorithms can be implemented on constant-degree multiprocessor networks with modest overhead.

Some fundamental laws are impossibility results that state limits on the power of computers. For example, in general, no algorithm can examine a program and its input and determine whether the program will halt on that input. No sorting algorithm uses fewer than $\log_2 (n!) = \Theta(n \log n)$ comparisons in the worst case. Every deterministic protocol for agreement among unreliable processors that tolerates $t$ failures requires at least $t+1$ rounds. Sometimes, lower bounds on computation time or storage space take the form of quantitative tradeoffs: if the space is limited, then the time must be large, and vice versa. Hash tables exhibit this tradeoff between space and time.
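To make the sorting bound concrete, the short computation below (an illustrative sketch, not part of the report) compares the information-theoretic lower bound $\log_2 (n!)$ with $n \log_2 n$ for a few input sizes; a comparison sort must distinguish all $n!$ input orderings, so it needs at least this many comparisons in the worst case.

    import math

    def comparison_lower_bound(n):
        """Minimum worst-case number of comparisons to sort n keys:
        ceil(log2(n!))."""
        return math.ceil(sum(math.log2(k) for k in range(2, n + 1)))

    for n in (10, 100, 1000):
        print(n, comparison_lower_bound(n), round(n * math.log2(n)))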

Although lower bounds on time have been established for some computational problems, there is a large class of problems, including many ubiquitous optimization problems, that appear to be difficult to solve exactly, but for which no nontrivial lower bounds are known: the NP-complete problems [Cook, 1971; Karp, 1972]. In essence, a problem is in the class P if its solution can be found efficiently, in polynomial time, and a problem is in the class NP if its solution can be verified efficiently. The NP-complete problems are the problems in NP that are the hardest to solve, as explained in more detail in Section III.A.2. The P vs. NP question asks whether P = NP, that is, whether problems whose solutions can be verified efficiently can also be solved efficiently.

More than a thousand computational problems, arising in many fields of science and engineering, are NP-complete [Garey and Johnson, 1979]. Experience over several decades indicates that efficient solutions for NP-complete problems are unlikely to exist. Consequently, proving a problem to be NP-complete is generally accepted as a proof of its difficulty. The difficulty of finding efficient solutions to NP-complete problems suggests seeking other methods, such as approximation algorithms and probabilistic algorithms, to solve the interesting instances of the problems as best one can. For example, scheduling jobs is a critical task in manufacturing, and the NP-completeness of many scheduling problems helps practitioners justify the implementation of suboptimal solutions.
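For instance, Graham's list-scheduling rule, sketched below in Python purely as an illustration, assigns each job to the currently least-loaded machine; its makespan is provably within a factor of $2 - 1/m$ of optimal, which is the kind of worst-case guarantee that justifies accepting a suboptimal schedule in practice.

    import heapq

    def list_schedule(job_lengths, m):
        """Greedy list scheduling on m identical machines: place each job
        on the machine that currently finishes earliest.  The makespan is
        at most (2 - 1/m) times optimal (Graham's bound)."""
        machines = [(0, i) for i in range(m)]   # (current load, machine id)
        heapq.heapify(machines)
        assignment = []
        for length in job_lengths:
            load, i = heapq.heappop(machines)
            assignment.append(i)
            heapq.heappush(machines, (load + length, i))
        return max(load for load, _ in machines), assignment

    makespan, _ = list_schedule([3, 7, 2, 8, 5, 4], m=2)
    print(makespan)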

II.C. Experimentation

Computational models abstract from a real situation that is too complex to be analyzed. Models such as the PRAM model of parallel computation often make strong assumptions about the computational environment, such as the bandwidth and latency of communication networks. Models may neglect factors that turn out to be important in practice. When important conclusions are based on such assumptions, these assumptions should be tested empirically. Positive empirical results can demonstrate the relevance of a model to an application area, and negative empirical results can show limitations of a model. Empirical data can suggest new theoretical results and directions, and can facilitate practical applications of theoretical ideas.

For example, Karmarkar's theoretical results about his interior point algorithm for linear programming [1984] did not attract much attention in the practical optimization community until he produced results suggesting that his algorithm was practical. This effort led to a flourishing of research, both theoretical and empirical, on interior point algorithms. Today, all major commercial linear programming packages contain interior point routines in addition to implementations of the traditional Simplex algorithm. Because of the widespread use of linear programming, the interior-point method has significantly influenced many areas of science and engineering.

Not all theoretical results can or must undergo empirical evaluation. Conceptual results, such as lower bounds, computability proofs, and reductions between problems, cannot be verified experimentally. Other results may be practical only in combination with future technological or theoretical developments.

Nevertheless, experimental work is important. It is an entirely appropriate activity for theoreticians with interests and talents in that direction. A deep theoretical understanding of an algorithm and its background can be crucial to understanding key implementation issues and interpreting the results of experiments. Some examples of recent experimental work by theoreticians are the DIMACS Implementation Challenges [Johnson and McGeoch, 1993], the LEDA project [Näher and Uhrig, 1996], implementations of algorithms for network optimization [Goldberg, 1994], and experimental work [Goudreau et al., 1995; Miller, 1993] on the BSP model [Valiant, 1990]. The recently introduced ACM Journal on Experimental Algorithmics provides an important forum for experimental work. Better coordination of experimental research on algorithms, introduction of carefully designed data sets for many fundamental problems, and formalization of standards for evaluation of experimental work on algorithms [Barr et al., 1995; McGeoch, 1996] bespeak the growing maturity of the area.

III. Achievements and Strategic Directions

III.A. Core Activities

III.A.1. Reasons for Core Research

Basic research in computer science, like basic research in any science, focuses on intrinsically important, fundamental questions. For example, what are the limits to the power of parallelism and randomization in algorithms? How does the representation of information affect the efficiency of a computation? Progress on these questions requires freedom from demands for immediate applicability or for contributions to specific missions.

Unfettered research on fundamental questions has yielded surprising results. New notions of mathematical proof emerged from research on randomized primality-testing algorithms that are allowed a small probability of error. Later, interactive proofs, zero-knowledge proofs, and probabilistically checkable proofs (PCPs) were defined. Work on PCPs led directly to astonishing results on the approximability of optimization problems [Feige et al., 1996; Lund and Yannakakis, 1994], and to a spectacular approximation algorithm for the traveling salesman problem in the plane [Arora, 1996]. Studies of expressibility in logic led to the unexpected discovery that nondeterministic space classes are closed under complementation [Immerman, 1988; Szelepcsenyi, 1988]; this discovery was a major advance in understanding the power of nondeterminism, and a possible step toward resolving the P vs. NP question.

Theoretical research has also led to new applications and is often ahead of technology. Just as the Turing machine anticipated the first general purpose programmable computer, so, more recently, has theoretical research on security produced digital signatures for authentication and public-key encryption for privacy. These research contributions now support a growing, multi-billion dollar industry in computer security; see Section III.B.2.

It is essential to nurture and sustain research in the core areas of theory of computing. Because the goals of this research are long term, the payoff may not be apparent immediately. Thus, support for core theoretical research represents a long term investment for the common good.

III.A.2. Models and Complexity Theory

Theoretical computer science contributes universal and specialized models for machines, languages, programs, and protocols, and methods of analysis relevant to these models. Its theorems describe the expressibility of language models (formal languages and semantics), place limits on the power of computational models, and classify the inherent complexity of problems (computational complexity). We briefly highlight some important ideas developed in computational complexity theory. Textbooks by Balcazar et al. [1988, 1990], by Bovet and Crescenzi [1994], and by Papadimitriou [1994] provide comprehensive treatments of the subject.

In complexity theory, reductions between problems are used to establish the existence of computationally difficult problems. A problem $\Pi_0$ is NP-hard if every problem $\Pi$ in NP efficiently reduces to $\Pi_0$, in the sense that $\Pi$ can be solved by an algorithm that makes subroutine calls to an algorithm that solves $\Pi_0$. NP-hard problems need not belong to NP. A problem $\Pi_0$ is NP-complete if $\Pi_0$ is NP-hard and $\Pi_0$ is in NP. Theoreticians have studied other kinds of reductions between problems too. One might say that problem $\Pi_1$ reduces to problem $\Pi_2$ if each instance of $\Pi_1$ can be efficiently transformed into an instance of $\Pi_2$. Reductions of the former kind are called Turing reductions, and those of the latter kind are called many-one reductions. All problems that are known to be complete for NP under Turing reductions are already complete under many-one reductions; it remains an open problem whether this is true in general.
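As a concrete, textbook-style example of a many-one reduction (included here only as an illustration), the sketch below transforms an instance of Independent Set into an instance of Clique by complementing the graph: a set of vertices is independent in $G$ exactly when it forms a clique in the complement of $G$.

    from itertools import combinations

    def independent_set_to_clique(n, edges, k):
        """Many-one reduction: (G, k) has an independent set of size k
        iff (complement of G, k) has a clique of size k."""
        edge_set = {frozenset(e) for e in edges}
        complement = [(u, v) for u, v in combinations(range(n), 2)
                      if frozenset((u, v)) not in edge_set]
        return n, complement, k

    # A path on 4 vertices has an independent set {0, 2} of size 2,
    # so its complement has a clique of size 2.
    print(independent_set_to_clique(4, [(0, 1), (1, 2), (2, 3)], 2))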

Complexity theory substantiates the security of modern cryptographic schemes, explaining why different representations of the same information--the original plaintext and the encrypted ciphertext--are not effectively the same. That is, complexity theory characterizes the difficulty of computational problems, and hence the difficulty of breaking a cryptosystem. Cracking a public-key cryptosystem [Diffie and Hellman, 1976] or a related system would imply the existence of an efficient algorithm for a computational problem "known" to be intractable. Underlying all cryptosystems are the concepts of one-way functions and trapdoor functions [Impagliazzo et al., 1989; Yao, 1982].
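A toy version of the Diffie-Hellman key exchange, sketched below with deliberately small parameters (an illustration only; real systems use primes of thousands of bits and authenticated channels), shows how the conjectured one-wayness of modular exponentiation lets two parties agree on a shared secret over a public channel.

    import secrets

    def diffie_hellman_demo(p=2**61 - 1, g=2):
        """Toy Diffie-Hellman: computing g^x mod p is easy, but recovering
        x from g^x mod p (the discrete logarithm) is believed to be hard."""
        a = secrets.randbelow(p - 2) + 1      # Alice's secret exponent
        b = secrets.randbelow(p - 2) + 1      # Bob's secret exponent
        A = pow(g, a, p)                      # sent publicly by Alice
        B = pow(g, b, p)                      # sent publicly by Bob
        return pow(B, a, p), pow(A, b, p)     # both equal g^(ab) mod p

    k_alice, k_bob = diffie_hellman_demo()
    print(k_alice == k_bob)                   # True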

In recent years, randomization has played an increasingly important role in computation [Motwani and Raghavan, 1995]. Frequently, algorithms with random choices provide efficient solutions to computational problems that are otherwise unavailable [Rabin, 1976]. Computational heuristics that use randomization--Monte Carlo methods--have been used for many years, but theoretical studies have placed randomized algorithms on sound intellectual footings: several definitions of randomness have been compared, and various kinds of errors have been distinguished [Gill, 1977]. The power and nature of randomization remain to be fully discovered.
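The randomized primality test of Miller and Rabin is a classic example: it may, with small probability, declare a composite number prime, but repeating the test with independent random bases drives the error probability down exponentially. The sketch below is an illustration written for this discussion, not code from the cited works.

    import random

    def is_probably_prime(n, trials=20):
        """Miller-Rabin test: a composite n is reported 'prime' with
        probability at most 4**(-trials); primes are always accepted."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:
            d, s = d // 2, s + 1              # n - 1 = d * 2**s with d odd
        for _ in range(trials):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False                  # a witnesses that n is composite
        return True

    print(is_probably_prime(2**61 - 1))       # True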

The theory of average-case complexity is being developed [Gurevich, 1991; Levin, 1986; Wang, to appear] for hard computational problems. The goal is to determine whether they remain hard on average. It has been shown that some combinatorial problems are NP-complete with respect to reductions that preserve distributions. This work is laying a foundation for the classification of problems according to average-case complexity, analogous to the traditional classification according to worst-case complexity.

One of the original motivations for NP-completeness was to explain the apparent difficulty of making simple inferences in propositional logic: the sort of inferences required in artificial intelligence reasoning systems. The theory suggests that many logical tautologies have no short proof; whether this is so is a question of practical as well as profound mathematical and philosophical interest. For many proof systems used in practice, long proofs are indeed required for some tautologies [Haken, 1985].

Many concrete computational models have been introduced to study these questions. For example, progress in proof systems has been made possible through the study of logic circuits. Logic circuits, although very simple, are rich enough to express Turing machine computations [Savage, 1972; Borodin, 1977]. Exponential lower bounds have been obtained for the sizes of constant-depth circuits for parity [Yao, 1985; Hastad, 1987]. The study of circuit size may help determine whether P is equal to NP [Boppana and Sipser, 1990].

Concrete computational models have been used to study specific performance issues. The pebble game [Paterson and Hewitt, 1970] and the branching program [Borodin and Cook, 1982] models provide settings for the concrete study of space-time tradeoffs in problems such as graph connectivity and matrix multiplication [Abrahamson, 1990]. The red-blue pebble game [Hong and Kung, 1981] captures the essential features of a two-level memory hierarchy and exhibits lower bounds on tradeoffs between I/O time and primary storage capacity. The VLSI model [Thompson, 1979] takes wire width into account and facilitates the study of tradeoffs between the area of a chip and its computation time.

Research on the nature and limits of computation is still in its infancy. Many challenging questions remain to be answered, including questions about randomization, average-case complexity, and lower bounds on performance for hard combinatorial problems. With the emergence of new computing paradigms, such as quantum computing, biological computing, and speed-of-light limited computation, new computational models, together with new methods of analysis, are needed to understand their computational power. In the future, the full spectrum of research in complexity theory will continue to be important. History has shown that we cannot predict where breakthroughs will appear.

III.A.3. Fundamental Algorithms and Data Structures

A major activity of theoretical computer science is the design of efficient algorithms and data structures for discrete problems. Theoreticians have created and studied general techniques for designing algorithms, such as divide-and-conquer, prune-and-search, dynamic programming, and graph searching. Also, they have emphasized the importance of data representation and organization; for common abstract data types, they have devised efficient implementations such as balanced search trees and hash tables [Cormen et al., 1990; Tarjan, 1987].

Central contributions to computer science have included the explicit introduction of the concept of an algorithm and measures for algorithmic efficiency, such as worst-case asymptotic time complexity. The adversarial point of view implicit in worst-case analysis circumvents the pitfalls of unjustified assumptions about "typical behavior." Asymptotic analysis explains how solution techniques scale with the input size. Further, in combination, the speedups of faster algorithms and faster hardware multiply.

Combinatorial algorithms have broad impact, because combinatorial problems arise everywhere. The breathtaking growth of communications networks has reemphasized the importance of graph and network problems. In particular, network flow problems are enjoying a renaissance, exemplified by the practical scaling technique for minimum-cost flow [Goldberg and Tarjan, 1990] and by efficient combinatorial algorithms for the multicommodity flow problem [Leighton et al., 1991]. Many improved algorithms rely on ingenious data structure techniques, some of which (e.g., universal hashing methods) have been incorporated into commercial software. Techniques for incrementally updating solutions and for maintaining persistent data are being successfully developed, and the benefit of amortization and self-adjusting structures has been demonstrated [Sleator and Tarjan, 1985]. Techniques in the design of parallel and distributed algorithms are strongly influencing practical parallel computing. In addition, and most remarkably, knowledge of how to coordinate tasks in parallel has been used to design faster sequential algorithms [Megiddo, 1983].
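One of the hashing techniques mentioned above, universal hashing, is easy to state: pick the hash function at random from a small family so that any two distinct keys collide with probability at most 1/m. The Carter-Wegman style sketch below is a minimal illustration, assuming integer keys smaller than the prime p.

    import random

    class UniversalHash:
        """Family h(x) = ((a*x + b) mod p) mod m with p prime.  For any two
        distinct keys x != y in [0, p), a random choice of (a, b) makes
        them collide with probability at most 1/m."""
        def __init__(self, m, p=(1 << 61) - 1):
            self.m, self.p = m, p
            self.a = random.randrange(1, p)
            self.b = random.randrange(0, p)

        def __call__(self, x):
            return ((self.a * x + self.b) % self.p) % self.m

    h = UniversalHash(m=1024)
    print(h(42), h(43))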

For NP-hard problems, approximation algorithms provide practical solutions. Significant contributions include the characterization of worst-case ratio and the formulation of polynomial-time and fully polynomial-time approximation schemes. Recent advances include the primal-dual method for approximation algorithms [Goemans and Williamson, 1995] and polynomial-time approximation schemes for a variety of problems in the Euclidean plane [Arora, 1996].
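A standard example of a worst-case ratio guarantee (an illustration, not one of the specific techniques cited above) is the maximal-matching algorithm for vertex cover: take both endpoints of each edge of any maximal matching, and the resulting cover is at most twice the minimum size.

    def vertex_cover_2approx(edges):
        """Greedy maximal matching gives a vertex cover of size at most
        2 * OPT: each matched edge forces at least one of its endpoints
        into any cover."""
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))          # add both endpoints
        return cover

    print(vertex_cover_2approx([(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]))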

On-line algorithms adapt a known paradigm, approximation algorithms, to a new class of problems, namely, problems for which information is received incrementally. One major contribution is the notion of competitive analysis [Karlin et al., 1988], and a celebrated achievement is the design of a competitive k-server algorithm [Koutsoupias and Papadimitriou, 1994].
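Competitive analysis compares an on-line algorithm's cost against an optimal off-line algorithm that knows the whole input in advance. The classic "ski rental" rule sketched below (a folklore example, included only for illustration) rents until the rental payments would reach the purchase price and is 2-competitive.

    def ski_rental(days_skied, rent=1, buy=10):
        """On-line rule: rent until total rent paid would reach the purchase
        price, then buy.  The cost is at most (2 - rent/buy) times the
        off-line optimum min(days_skied * rent, buy)."""
        cost = 0
        for day in range(days_skied):
            if cost + rent >= buy:
                return cost + buy             # buy once rentals reach the price
            cost += rent                      # otherwise keep renting
        return cost

    for d in (3, 10, 100):
        print(d, ski_rental(d), min(d * 1, 10))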

Future directions are suggested by areas in which substantial contributions continue to be made. These areas include, but are not limited to, combinatorial algorithms, data structures, parallel and distributed algorithms, randomized algorithms, approximation algorithms, and on-line algorithms. Furthermore, just as the emergence of parallel and distributed computing systems prompted the development of parallel and distributed algorithms, new algorithmic paradigms will be inspired by emerging kinds of computing environments, such as real-time embedded systems, wireless networks, and nomadic computing. New techniques will be needed to systematically handle computational problems with noisy input data and flexible optimization criteria. New analytical concepts will help explain the effectiveness of commonly used but poorly understood computational heuristics.

III.B. Selected Bridging Opportunities

III.B.1. Reasons for Bridging Activities

As computation becomes a greater part of research in all science and engineering disciplines, theory of computing has an unprecedented opportunity to be an equal partner in this research. Collaborative projects that include researchers from both theory of computing and other disciplines, or other parts of computer science, build bridges between disciplines and benefit everyone. To recognize the importance of bridging activities, the ACM has recently established the Kanellakis Award for theoretical contributions that have had an impact on practice.

Because of space limitations, the following sections present only a sample of bridging opportunities, to indicate what might be possible.


III.B.2. Security

As teleconferencing replaces face-to-face meetings, electronic mail replaces letters, electronic payment systems replace cash transactions, and on-line information services replace written reference materials, society gains in efficiency and flexibility, but it loses traditional methods of gaining confidence in the privacy, integrity, and authenticity of business and personal transactions. Theoretical computer science has helped supply methods that are needed to demonstrate that these new electronic services are secure. Modern computer security is built on the foundation of theoretical work on cryptography [Diffie and Hellman, 1976; Rivest et al., 1978; Yao, 1982].

Recent research on security has used all the tools of theoretical computer science: development of new models and definitions, design and analysis of algorithms, proofs of lower bounds and other inherent limitations on what can be achieved, and experimentation and implementation. (Stinson [1995] gives a thorough introduction to the field.) Ideas from computational complexity have been used to formally define concepts such as fairness, hardness, signature, information, knowledge, commitment, and proof that are appropriate for electronic transactions. Efficient algorithms and protocols have been developed that use these formal concepts to solve real security problems. For example, in "discreet decision making," many parties reach a joint decision without forcing any individual to reveal all of the private information that he used to make it. Electronic voting is a special case of discreet decision making.
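A minimal sketch of the idea behind discreet decision making (written for this discussion, assuming honest-but-curious parties and private pairwise channels; the protocols in the literature add verifiability and robustness): each voter splits a 0/1 vote into random additive shares modulo a public prime, so the tally can be computed while no individual vote is revealed.

    import random

    P = 2**61 - 1   # a public prime larger than any possible tally

    def share_vote(vote, n_parties):
        """Split a 0/1 vote into n random shares that sum to the vote mod P."""
        shares = [random.randrange(P) for _ in range(n_parties - 1)]
        shares.append((vote - sum(shares)) % P)
        return shares

    def tally(all_shares):
        """Each party sums the shares it received; the sum of those partial
        sums reveals only the total number of 'yes' votes, not who cast them."""
        partial_sums = [sum(col) % P for col in zip(*all_shares)]
        return sum(partial_sums) % P

    votes = [1, 0, 1, 1]                       # hypothetical ballots
    all_shares = [share_vote(v, len(votes)) for v in votes]
    print(tally(all_shares))                   # 3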

Finally, cryptography and security present a unique opportunity for theoretical computer science: The "customers" of cryptography and security work explicitly want proofs and are willing to pay for them [Brickell, 1996]. Although security requirements (e.g., privacy or authenticity) can be specified just as functional requirements (e.g., throughput or speed) can be specified, it is more difficult to test that a product meets security requirements than to test that it meets functional requirements. Because a testing team cannot know all of the characteristics of a potential adversary, it is essentially impossible to perform realistic, comprehensive simulations of attempts to compromise security. The only way to develop confidence that a product or service is secure is to state all assumptions precisely and to prove that these assumptions, together with the functional requirements of the product or service, imply that the security requirements are met.

III.B.3. Information Retrieval

Over twenty years ago, research on information retrieval led to the development of the relational data model, and the subsequent commercialization of relational database systems. Supporting these developments, research in database theory used tools from logic, programming languages, and complexity theory to study the expressibility of query languages and the efficiency of algorithms for processing queries [Yannakakis, 1995]. Another report from the Strategic Directions workshop describes these developments and directions for future research on databases.

With the spectacular growth in volume of on-line data--on file systems, in databases, and on the Internet--and the precipitous drop in storage costs, new applications that were hitherto considered too cumbersome become feasible. As the emphasis in database research shifts from systems to applications, so too will the focus of theoretical research shift to new applications in information management and decision support.

Businesses believe that there is massive latent information in data archived from past operations. For instance, a long-distance phone company might examine its historical call data to infer calling patterns. Based on these patterns, the company might offer a custom-calling plan to a group of customers. This activity is popularly known as data mining [Fayyad et al., 1996]. Data mining computations involve statistical clustering, classification, prediction, data compression, and learning. These computations require new graph and set-theoretic algorithms, approximation algorithms, and randomization. The sheer quantity of data demands asymptotically efficient algorithms.

Another active area of research is on-line analytic processing. Here, data in a (typically sparsely populated) multidimensional cube, extracted from relational data, are aggregated along various dimensions, and in categories within dimensions. Results from the hardness of approximation have been used to show the optimality of a class of greedy aggregation algorithms [Harinarayan et al., 1996].

The traditional approach to information retrieval in textual databases requires a lexical match between a user's request and phrases in a database's documents. In contrast, latent semantic indexing uses spectral methods to find higher-order semantic associations between requests and terms in documents [Deerwester et al., 1990].
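A minimal sketch of the idea behind latent semantic indexing (written for illustration, using NumPy and a toy term-document matrix): a truncated singular value decomposition maps documents and queries into a low-dimensional "concept" space, where a document can match a query even without sharing its exact terms.

    import numpy as np

    # Toy term-document matrix: rows are terms, columns are documents.
    A = np.array([[1, 1, 0, 0],    # "graph"
                  [1, 0, 1, 0],    # "algorithm"
                  [0, 1, 1, 0],    # "network"
                  [0, 0, 0, 1]],   # "protein"
                 dtype=float)

    k = 2                                   # number of latent dimensions
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]
    doc_vectors = (np.diag(sk) @ Vtk).T     # documents in concept space

    def query_similarity(query_terms):
        """Project a query into the concept space and rank documents
        by cosine similarity."""
        q = query_terms @ Uk / sk           # fold the query into the space
        sims = doc_vectors @ q
        sims /= (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q) + 1e-12)
        return sims

    print(query_similarity(np.array([1, 0, 0, 0], dtype=float)))  # "graph" query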

Searching a library of images is not as routine as searching through text files [Faloutsos et al., 1994]. IBM's Query By Image Content project and the MIT Media Lab's Photobook are two advanced image indexing systems. A fortiori, multimedia indexing presents difficult algorithmic challenges. The creation and management of multimedia content provide new opportunities for research in geometric algorithms, data structures, pattern matching, coding, data compression, and storage management.

III.B.4. Communication Networks

The development of the "Information Superhighway" raises new computing issues related to communication networks. Because of the large scale of contemporary networking problems, asymptotically efficient algorithms are necessary. Important algorithmic questions arise both in the design of networks and in their effective utilization. In fact, it is not yet clear what the most appropriate models and performance measures are.

Supplying video on demand in ATM networks, for example, requires good routing algorithms to take full advantage of the network capacity. One can model this routing problem as a type of multicommodity flow problem, in which commodities arrive over time with different bandwidth demands. Algorithms for flow problems have long been studied within a theoretical context. Starting with Shahrokhi and Matula [1990], several papers have developed efficient algorithms for multicommodity flow problems (surveyed by Plotkin [1995]). The initial work in this area was not motivated by applications to networking, and it had little immediate practical relevance. Nonetheless, theoretical results inspired the recent development of an ATM routing protocol [Gawlick et al., 1994], which is being used commercially by AT&T. Research on routing will continue to be important, as communication hardware technology continues to evolve. Furthermore, theoretical research will not only respond to the new technology, but may potentially influence the hardware of the next generation of networks.

The design of communication networks also presents rich algorithmic questions that benefit from a theoretical treatment. Through the work of Mihail et al. [1996], theoretical investigations on performance guarantees of primal-dual approximation algorithms (surveyed by Goemans and Williamson [1997]) have led to a powerful commercial product used in the design of telephone networks at Bellcore. Good algorithms are known only for relatively simple types of network design applications, and there are particularly challenging problems in the effective design of ISDN networks and networks for multimedia traffic.

III.B.5. Statistical Physics

Theory of computing has recently contributed to statistical physics in three ways: the use of optimization algorithms for problems such as finding ground states of Ising systems and flux lines in superconductors [Barahona, 1985; Barahona et al., 1988; Middleton, 1995; Ogielski, 1986; Zeng et al., 1996], Monte Carlo simulation [Binder, 1986; Sokal, 1989], and threshold phenomena.

Optimization algorithms are used in simulations of physical systems where each simulation step is a minimization problem, with minima corresponding to low-energy states of the system. Combinatorial optimization problems relevant to these applications include the max-flow/min-cut problem, the assignment problem, and the min-cost flow problem. Efficient algorithms and codes developed by computer scientists [Goldberg, 1994] have been essential for these applications. Algorithmic efficiency is crucial because the optimization problems are large, and many problems need to be solved during a simulation.
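For orientation, a compact Edmonds-Karp max-flow sketch follows (an illustration only; the production codes cited use far more engineering, such as push-relabel methods with heuristics). It repeatedly augments along shortest residual paths found by breadth-first search.

    from collections import deque, defaultdict

    def max_flow(capacity, s, t):
        """Edmonds-Karp: augment along shortest residual s-t paths (BFS).
        `capacity` maps (u, v) pairs to nonnegative capacities."""
        residual = defaultdict(int)
        adj = defaultdict(set)
        for (u, v), c in capacity.items():
            residual[(u, v)] += c
            adj[u].add(v)
            adj[v].add(u)                     # reverse edges for the residual graph
        flow = 0
        while True:
            parent = {s: None}                # BFS tree for a shortest augmenting path
            queue = deque([s])
            while queue and t not in parent:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in parent and residual[(u, v)] > 0:
                        parent[v] = u
                        queue.append(v)
            if t not in parent:
                return flow                   # no augmenting path: flow is maximum
            path, v = [], t                   # recover the path and its bottleneck
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            bottleneck = min(residual[e] for e in path)
            for u, v in path:
                residual[(u, v)] -= bottleneck
                residual[(v, u)] += bottleneck
            flow += bottleneck

    caps = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1, ('a', 't'): 2, ('b', 't'): 3}
    print(max_flow(caps, 's', 't'))           # 5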

A classic computational method for studying models in statistical mechanics is to simulate a suitable Markov chain and observe it in equilibrium. Heretofore, the algorithms used in these simulations have lacked rigorous analyses. Recently, however, powerful tools for analyzing mixing rates of Markov chains, originally used in theoretical computer science, have led to the first rigorous analyses of simulation algorithms for several important models [Jerrum and Sinclair, 1993; Kenyon et al., 1996].
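A bare-bones single-spin-flip Metropolis simulation of the two-dimensional Ising model (a sketch written for this discussion, not the specific algorithms analyzed in the cited work) illustrates the kind of Markov chain whose mixing time such analyses address.

    import math, random

    def metropolis_ising(L=16, beta=0.4, sweeps=1000):
        """Single-spin-flip Metropolis dynamics for the 2D Ising model on an
        L x L torus; returns the mean magnetization per spin after `sweeps`."""
        spin = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
        for _ in range(sweeps * L * L):
            i, j = random.randrange(L), random.randrange(L)
            neighbors = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j] +
                         spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            delta_E = 2 * spin[i][j] * neighbors       # energy change if flipped
            if delta_E <= 0 or random.random() < math.exp(-beta * delta_E):
                spin[i][j] = -spin[i][j]               # accept the flip
        return sum(map(sum, spin)) / (L * L)

    print(metropolis_ising())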

Threshold phenomena, introduced in discrete mathematics [Erdos and Renyi, 1960], and studied in theoretical computer science, are closely related to phase transitions in statistical physics. This analogy has led to many exciting developments, such as a deeper understanding of the critical behavior of percolation models [Borgs et al., 1996] and insight into the properties of noisy circuits [Evans, 1996].
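A small experiment (again, an illustrative sketch) exhibits such a threshold: in the random graph G(n, p), connectivity appears sharply as p crosses roughly ln(n)/n.

    import math, random

    def is_connected(n, p):
        """Sample G(n, p) and test connectivity by depth-first search."""
        adj = [[] for _ in range(n)]
        for u in range(n):
            for v in range(u + 1, n):
                if random.random() < p:
                    adj[u].append(v)
                    adj[v].append(u)
        seen, stack = {0}, [0]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return len(seen) == n

    n, trials = 200, 50
    for factor in (0.5, 1.0, 1.5):
        p = factor * math.log(n) / n
        frac = sum(is_connected(n, p) for _ in range(trials)) / trials
        print(f"p = {factor:.1f} * ln(n)/n: connected in {frac:.0%} of trials")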

Conversely, applications in statistical physics raise interesting algorithmic problems. For example, current state-of-the-art codes for the min-cut and assignment problems are memory-limited. To overcome memory limitations, distributed algorithms which use the memories of many computers need to be developed.

III.B.6. Molecular Biology and Biochemistry

Molecular biology is directed toward understanding the information stored in living organisms and how it is processed. Such knowledge leads to advances in drug design, diagnosis and treatment of genetic diseases, and determining evolutionary relationships between organisms. Combinatorial algorithms are important tools in gaining this knowledge.

DNA Molecules and Proteins. DNA molecules are chains of chemical structures called nucleotides. Identifying and analyzing the sequences of nucleotides in the DNA molecules of humans (approximately 3 billion nucleotides in length) is the goal of the Human Genome Project. Sequencing machines today can handle DNA molecules of at most 1000 nucleotides. Therefore, biochemical processes are used to fragment large DNA molecules, clone the fragments, and obtain partial information about the sequences of nucleotides in these fragments.

After fragmentation, the task of obtaining useful data about the genome from this partial information is primarily computational. One example is the physical mapping problem, which is to determine the order in which "landmark" sequences appear in the original (unknown) DNA molecule, given partial information such as their presence in long, overlapping subsequences of the molecule [Karp, 1993]. Contributions of computer scientists to such tasks include NP-hardness results, approximation algorithms, development of new algorithms or heuristics, and new data structures [Pevzner and Waterman, 1995; Lander and Waterman, 1995].

As biotechnology changes, new computational problems arise. Combinatorial arrays of DNA molecules are now synthesized commercially by Affymetrix and others for use in new sequencing strategies (known as sequencing by hybridization) and disease diagnosis [Fodor et al., 1991]. Theoretical computer scientists are developing algorithmic strategies for the synthesis of such arrays [Pevzner and Lipshutz, 1994] and for using them in sequencing by hybridization [Margaritis and Skiena, 1995].

Proteins are chains of amino acids, of which there are 20 different types. In some proteins, the amino acid sequence folds to form an extremely compact three-dimensional structure, which is uniquely determined by the amino acid sequence. The protein folding problem is to predict this structure [Chan and Dill, 1993]. To date, computer scientists have proved NP-completeness results and designed approximation algorithms for restricted versions of the problem [Hart and Istrail, 1995], but the problem is far from solved.

DNA Computing. Adleman [1994] used a novel method to solve a tiny instance of the directed traveling salesman problem, encoding paths through cities in strands of DNA molecules. In principle, currently available chemical processes for synthesis and manipulation of DNA can support associative memory and general computations in which information-carrying DNA strands are processed in parallel [Baum, 1995; Lipton, 1995]. A primary thrust of current research is to determine the realizable scale of DNA-based computations, when errors are inherent in the underlying chemical processes. This research has raised new mathematical and algorithmic research problems on reliable computations despite errors [Karp et al., 1996].

Combinatorial Chemistry and Drug Design. The synthesis of molecules with special properties, such as enzymes and drugs, has traditionally been very expensive. A promising new approach, called combinatorial chemistry, is to create a library of molecules in vitro and to search this library for a molecule with the desired properties [Stemmer, 1995]. Optimization algorithms have been adapted to the problem of searching a large sequence space of proteins [Delagrave et al., 1993]. A strategy that involves successively fixing positions in a nucleotide sequence has recently yielded a nucleotide sequence that inhibits infection by HIV in vitro [Wyatt et al., 1994].

IV. Infrastructure

IV.A. Funding

Research in computer science and engineering is funded because it supports the continued development of the information technology industry, which is of vital national interest. For basic research in computing, such as theory of computing, funding should be stable, to enable progress toward long-term goals. In contrast, funding policies that continually shift emphasis from one area to another, with funding levels that fluctuate substantially from year to year, discourage high quality long-term research.

Typical grants to researchers in theory of computing are smaller than in other areas of computer science and engineering, which may require special equipment, programmers, and technicians. Especially when research grants carry large overheads, some deans and department chairs improperly use the sizes of professors' research grants as the most important measure of the quality of their research. Hiring and promotion decisions should not be based primarily on money, but on scholarly contributions, evaluated through peer review.

IV.B. Education

Theory of computing is an essential component of educational curricula in computer science and engineering, because it provides unifying intellectual frameworks for the systematic design and construction of computer systems [Gotlieb, 1991]. All undergraduate and graduate students should learn why and how to write rigorous specifications, to create abstract models, to reason about correctness, and to analyze the performance of algorithms, software, and hardware. Students need to know data structures and algorithms, and to understand the implications of impossibility results, intractability results, and lower bounds.

Graduate students whose thesis research is in theory of computing need a deep and broad training in core theory to master a variety of techniques and to understand the connections between results in different areas of the field. They also need a broad education in other areas of computer science and engineering for three reasons: to understand the context of theoretical results, to be able to apply their expertise to practical problems, and to communicate effectively with future students, colleagues, and practitioners.

IV.C. Communication Between Researchers

To support bridging activities, such as the opportunities described in Section III.B, theoretical results need to be easily accessible to researchers outside theory of computing. Some survey papers should describe the contributions of theory of computing in the terminology and notation of the target application areas and should be published in journals typically read by researchers in those areas. Conversely, some papers in theoretical journals should explain issues and open problems arising from specific application areas and suggest promising new research directions that might benefit from the participation of researchers in theory of computing. These papers should describe open problems in a clear, accessible way, with well chosen references from the application area. Prizes could be awarded for expository articles and lectures.

Conferences also support bridging activities. Plenary talks by prominent researchers publicize particular research questions. Conferences devoted to specific subject areas that include a broad spectrum of researchers encourage dialogue and interaction. Tutorials for nonspecialists and panel discussions are also helpful.

Electronic resources supplement traditional publication in conferences and journals. Within the theory of computing community, researchers regularly use electronic bibliographies of conference and journal papers, such as the Glimpse server and the Hypertext Bibliography Project. The Networked Computer Science Technical Report Library is another resource. Researchers in all areas of computing could benefit from these kinds of tools.

V. Conclusions

Research in theory of computing has produced fundamental concepts and results of lasting value. In the future, theoretical research must proceed on a broad front, sustaining the core areas of algorithms and complexity theory while pursuing bridging opportunities with other disciplines. Clearly, theory of computing will continue to be a vital part of computing research.

Acknowledgments

We greatly appreciate the advice, suggestions, and help of Paul Beame, Matt Blaze, Allan Borodin, Andrei Broder, Jin-Yi Cai, Stephen Cook, Joan Feigenbaum, Vassos Hadzilacos, Juris Hartmanis, Steve Homer, David Karger, Mike Langston, Christos Papadimitriou, Charlie Rackoff, Satish Rao, Ken Regan, Anton Setzer, Alastair Sinclair, Cliff Stein, Torsten Suel, Roberto Tamassia, Bob Tarjan, Peter Wegner, Avi Wigderson, and the members of the Steering Committee of the IEEE Conference on Computational Complexity.

References

K. Abrahamson [1990]. A time-space tradeoff for boolean matrix multiplication. Proc. 31st Ann. Symp. on Foundations of Computer Science, pp. 412-419.

L. M. Adleman [1994]. Molecular computation of solutions to combinatorial problems. Science, vol. 266, pp. 1021-1024.

I. Adler, N. Karmarkar, M. Resende, and G. Veiga [1989]. An implementation of Karmarkar's algorithm for linear programming. Math. Prog., vol. 44, pp. 297-335.

S. Arora [1996]. Polynomial time approximation schemes for euclidean TSP and other geometric problems. To appear in Proc. 37th Ann. Symp. on Foundations of Computer Science, IEEE.

J. L. Balcazar, J. Diaz, and J. Gabarro [1988]. Structural Complexity I, Springer-Verlag, Berlin.

J. L. Balcazar, J. Diaz, and J. Gabarro [1990]. Structural Complexity II, Springer-Verlag, Berlin.

F. Barahona [1985]. Finding ground states in random-field Ising ferromagnets. J. Phys. A, vol. 18, pp. L673-L675.

F. Barahona, M. Grötschel, M. Jünger, and G. Reinelt [1988]. An application of combinatorial optimization to statistical physics and circuit layout design. Combinatorial Optimization, vol. 36, pp. 493-513.

W. Barr, B. Golden, J. Kelly, M. Resende, and W. Stewart [1995]. Designing and reporting on computational experiments with heuristic methods. J. Heuristics, vol. 1, no. 1, pp. 9-32.

E. Baum [1995]. How to build an associative memory vastly larger than the brain. Science, vol. 268, pp. 583-585.

K. Binder (ed.) [1996]. Monte Carlo methods in statistical physics, (2nd ed.). Springer-Verlag, Berlin.

R. B. Boppana, and M. Sipser [1990]. The complexity of finite functions, Handbook of Theoretical Computer Science, vol. A, Jan van Leeuwen, ed., Elsevier, Amsterdam, NY, Oxford, Tokyo; The MIT Press, Cambridge, MA, pp. 757-804.

C. Borgs, J. Chayes, H. Kesten, and J. Spencer [1996]. Birth of the infinite cluster: finite-size scaling in percolation. Preprint.

A. Borodin [1977]. On relating time and space to size and depth. SIAM J. Comput., vol. 6, no. 4, pp. 733-744.

A. Borodin and S. Cook [1982]. A time-space tradeoff for sorting on a general sequential model of computation. SIAM J. Comput., vol. 11, pp. 287-297.

D. P. Bovet and P. Crescenzi [1994]. Introduction to the Theory of Complexity, Prentice Hall International (UK) Limited, Hertfordshire, U.K.

E. Brickell [1996]. Banker's trust: cryptographic applications in electronic commerce. Crypto '96, Santa Barbara CA, August 1996.

H. S. Chan and K. A. Dill [1993]. The protein folding problem, Physics Today, pp. 24-32.

S. Cook [1971]. The complexity of theorem-proving procedures. In Proc. 3rd ACM Symp. Theory of Computing, pp. 151-158.

T. H. Cormen, C. E. Leiserson, R. L. Rivest [1990]. Introduction to Algorithms, MIT Press and McGraw-Hill.

S. Deerwester, S. T. Dumais, T. K. Landauer, G. W. Furnas, and R. A. Harshman [1990]. Indexing by latent semantic analysis. J. Society for Information Science, vol. 41, no. 6, pp. 391-407.

S. Delagrave, E. R. Goldman, and D. C. Youvan [1993]. Recursive ensemble mutagenesis, Protein Engineering, vol. 6, no. 3, pp. 327-331.

W. Diffie and M. Hellman [1976]. New directions in cryptography. IEEE Trans. Inform. Theory, vol. IT-22, no. 6, pp. 644-654.

P. Erdos and A. Renyi [1960]. On the evolution of random graphs. Magyar Tud. Akad. Mat. Kut. Int. Kozl, vol. 5, pp. 17-61.

W. Evans, C. Kenyon, Y. Peres, and L. Schulman [1996]. A critical phenomenon in a broadcast process. Preprint.

C. Faloutsos, R. Barber, M. Flickner, J. Hafner, W. Niblack, D. Petkovic, W. Equitz [1994]. Efficient and effective querying by image content. J. Intelligent Information Systems, vol. 3, no. 3/4, pp. 231-262.

U. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth and R. Uthurusamy [1996]. Advances in Knowledge Discovery and Data Mining. AAAI Press.

U. Feige, S. Goldwasser, L. Lovasz, S. Safra, and M. Szegedy [1996]. Interactive proofs and the hardness of approximating cliques. J. ACM, vol. 43, pp. 268-292.

S. P. Fodor, J. L. Read, M. S. Pirrung, L. Stryer, A. T. Lu, and D. Solas [1991]. Light-Directed, Spatially Addressable Parallel Chemical Synthesis, Science, 251, 767-773.

M. Garey and D. Johnson [1979]. Computers and Intractability: a Guide to the Theory of NP-Completeness. W. H. Freeman, San Francisco.

R. Gawlick, A. Kamath, S. Plotkin, and K. Ramakrishnan [1994]. Routing and admission control of virtual circuits in general topology networks. Technical Report BL011212-940819-19TM, AT&T Bell Laboratories.

J. Gill [1977]. Computational complexity of probabilistic Turing machines. SIAM J. Comput., vol. 6, pp. 675-695.

M. X. Goemans and D. P. Williamson [1995]. A general approximation technique for constrained forest problems. SIAM J. Comput., vol. 24, pp. 296-317.

M. X. Goemans and D. P. Williamson [1997]. The primal-dual method for approximation algorithms and its application to network design problems. In Approximation algorithms, ed. D. S. Hochbaum, PWS, Boston.

A. V. Goldberg [1994]. Optimization algorithms for large networks. (Invited lecture). In Proc. 2nd Ann. European Symp. on Algorithms, pp. 1-9. Springer-Verlag.

A. V. Goldberg and R. E. Tarjan [1990]. Solving minimum cost flow problem by successive approximation. Math. Oper. Res., vol. 15, pp. 430-466.

C. Gotlieb [1991]. Fashions and fundamentals in computer science education. Education and Computing, vol. 7, pp. 97-103.

M. Goudreau, K. Lang, S. Rao, and T. Tsantilas [1995]. The green BSP library. Technical Report CR-TR-95-11, University of Central Florida.

Y. Gurevich [1991]. Average case completeness. J. Comput. System Sci., vol. 42, pp. 346-398.

A. Haken [1985]. The intractability of resolution. Theoret. Comput. Sci., vol. 39, pp. 297-308.

V. Harinarayan, A. Rajaraman, J. D. Ullman [1996]. Implementing data cubes efficiently. ACM SIGMOD Conference, pp. 205-216

W. E. Hart and S. Istrail [1995]. Fast protein folding in the hydrophobic-hydrophilic model within three-eighths of optimal. Proc. 27th ACM Symp. on Theory of Computing, pp. 157-168.

J. T. Hastad [1987]. Computational Limitations for Small-Depth Circuits, MIT Press, Cambridge, Mass.

J.-W. Hong and H. T. Kung [1981]. I/O complexity: the red-blue pebble game. Proc. 13th Ann. ACM Symp. on Theory of Computing, pp. 326-333.

N. Immerman [1988]. Nondeterministic space is closed under complementation. SIAM J. Comput., vol. 17, no. 5, pp. 935-938.

R. Impagliazzo, L. Levin, and M. Luby [1989]. Pseudo-random generation from one-way functions. In Proc. 21st Ann. ACM Symp. on Theory of Computing, pp. 12-24.

M. Jerrum and A. Sinclair [1993]. Polynomial-time approximation algorithms for the Ising model, SIAM Journal on Computing, vol. 22, pp. 1087-1116.

D. S. Johnson and C. C. McGeoch, eds. [1993]. Network Flows and Matching: First DIMACS Implementation Challenge. American Mathematical Society.

A. R. Karlin, M. S. Manasse, L. Rudolph, and D. D. Sleator [1988]. Competitive snoopy caching. Algorithmica, vol. 3, no. 1, pp. 79-119.

N. Karmarkar [1984]. A new polynomial-time algorithm for linear programming. Combinatorica, vol. 4, pp. 373-395.

R. Karp [1972]. Reducibility among combinatorial problems. In Complexity of Computer Computations, pp. 85-104. Plenum Press, New York.

R. M. Karp [1993]. Mapping the genome: some combinatorial problems arising in molecular biology. Proc. 25th Ann. ACM Symp. on Theory of Computing, pp. 278-285.

R. M. Karp, C. Kenyon, and O. Waarts [1996]. Error-resilient DNA computation. Proc. 7th Ann. ACM-SIAM Symp. on Discrete Algorithms, pp. 458-467.

C. Kenyon, D. Randall, and A. Sinclair [1996]. Approximating the number of monomer-dimer coverings of a lattice. J. Statistical Physics, vol. 83, pp. 637-659.

E. Koutsoupias and C. Papadimitriou [1994]. On the k-server conjecture. Proc. 26th Ann. ACM Symp. on Theory of Computing, pp. 507-511.

E. S. Lander and M. S. Waterman, eds. [1995]. Calculating The Secrets Of Life: Contributions Of The Mathematical Sciences To Molecular Biology. Committee on the Mathematical Sciences in Genome and Protein Structure Research, National Research Council, National Academy Press, Washington, D. C.

T. Leighton, F. Makedon, S. Plotkin, C. Stein, E. Tardos, and S. Tragoudas [1991]. Fast approximation algorithms for multicommodity flow problems, Proc. 23rd Ann. ACM Symp. on Theory of Computing, pp. 101-111.

L. Levin [1986]. Average case complete problems. SIAM J. of Comput., vol. 15, pp. 285-286.

R. J. Lipton [1995]. DNA solution of hard computational problems. Science, vol. 268, pp. 542-545.

M. C. Loui [1991]. Theory of computing: achievements, challenges, and opportunities. ACM SIGACT News, vol. 22, no. 3, pp. 41-48.

C. Lund and M. Yannakakis [1994]. On the hardness of approximating minimization problems. J. ACM, vol. 41, no. 5, pp. 960-981.

D. Margaritis and S. S. Skiena [1995]. Reconstructing strings from substrings in rounds. Proc. 36th Ann. Symp. on Foundations of Computer Science, IEEE, pp. 613-620.

C. McGeoch [1996]. Towards an experimental method for algorithm simulation. INFORMS J. Computing, vol. 8, no. 1, pp. 1-15.

N. Megiddo [1983]. Applying parallel computation algorithms in the design of serial algorithms. J. ACM, vol. 30, pp. 852-865.

A. A. Middleton [1995]. Numerical result for the ground-state interface in random medium. Phys. Rev. E, vol. 52, pp. R3337-R3340.

M. Mihail, D. Shallcross, N. Dean, and M. Mostrel [1996]. A commercial application of survivable network design: ITP/INPLANS CCS network topology analyzer. Proc. 7th Ann. ACM-SIAM Symp. on Discrete Algorithms, pp. 279-287.

R. Miller [1993]. A library for bulk-synchronous parallel programming. In Proc. of the British Comp. Soc. Par. Proc. Specialist Group Workshop on General Purpose Parallel Computing.

R. Motwani and P. Raghavan [1995]. Randomized Algorithms. Cambridge University Press, Cambridge, UK.

S. Näher and C. Uhrig [1996]. LEDA User Manual, Version R-3.3. Available via URL ftp://ftp.mpi-sb.mpg.de/pub/LEDA/MANUAL-R-3.3.ps.

A. T. Ogielski [1986]. Integer optimization and zero-temperature fixed point in Ising random field systems. Physical Review Lett., vol. 57, pp. 1251-1254

C. H. Papadimitriou [1994]. Computational Complexity, Addison-Wesley, Reading, Mass.

M. S. Paterson and C. E. Hewitt [1970]. Comparative schematology, Proc. Proj. MAC Conf. on Concurrent Systems and Parallel Computation, Woods Hole, MA, pp. 119-127.

M. P. Patterson and T. Przytycka [1996]. On the complexity of string folding. To appear in Discrete Applied Math.

P. A. Pevzner and R. Lipshutz [1994]. Towards DNA sequencing chips. Proc. 19th Intl. Conf. on Mathematical Foundations of Computer Science, Lecture Notes in Computer Science, vol. 841, Springer-Verlag, pp. 143-158.

P. A. Pevzner and M. S. Waterman [1995]. Open combinatorial problems in computational molecular biology. Proc. 3rd Israeli Symp. on Theoretical Computing and Systems.

S. Plotkin [1995]. Competitive routing of virtual circuits in ATM networks. IEEE J. Selected Areas in Comm., Special issue on Advances in the Fundamentals of Networking, pp. 1128-1136.

M. Rabin [1976]. Probabilistic algorithms. In J. Traub, editor, Algorithms and Complexity: New Directions and Recent Results, pp. 21-39. Academic Press, New York.

R. Rivest, A. Shamir and L. Adleman [1978]. A method for obtaining digital signatures and public key cryptosystems. Commun. ACM, vol. 21, no. 2, pp. 120-126.

J. E. Savage [1972]. Computational work and time on finite machines. J. ACM, vol. 19, no. 4, pp. 660-674.

F. Shahrokhi and D. W. Matula [1990]. The maximum concurrent flow problem. J. ACM, vol. 37, pp. 318-334.

D. D. Sleator and R. E. Tarjan [1985]. Self-adjusting binary search trees. J. ACM, vol. 32, pp. 652-686.

A. D. Sokal [1989]. Monte Carlo methods in statistical mechanics: foundations and new algorithms, Troisieme Cycle de la Physique en Suisse Romande.

W. P. Stemmer [1995]. Searching sequence space. Bio/Technology, vol. 13, pp. 549-553.

D. Stinson [1995]. Cryptography: Theory and Practice. CRC Press, Boca Raton, Fla.

R. Szelepcsenyi [1988]. The method of forced enumeration for nondeterministic automata. Acta Inform., vol. 26, no. 3, pp. 279-284.

R. E. Tarjan [1987]. Algorithm design. Commun. ACM, vol. 30, pp. 205-212.

C. D. Thompson [1979]. Area-time complexity for VLSI. Proc. 11th ACM. Symp. on Theory of Computing, pp. 81-88.

L. G. Valiant [1990]. A bridging model for parallel computation. Commun. ACM, vol. 33, pp. 103-111.

J. Wang [to appear]. Average-case computational complexity theory. In L. Hemaspaandra and A. Selman, editors, Complexity Theory Retrospective II. Springer-Verlag. To appear.

J. R. Wyatt, T. A. Vickers, J. L. Roberson, R. W. Buckheit Jr., T. Klimkait, E. DeBaets, P. W. Davis, B. Rayner, J. L. Imbach, and D. J. Ecker [1994]. Combinatorially selected guanosine-quartet structure is a potent inhibitor of human immunodeficiency virus envelope-mediated cell fusion, Proc. Nat. Acad. Sci., USA, vol. 91, pp. 1356-1360.

M. Yannakakis [1995]. Perspectives on database theory. Proc. 36th Ann. Symp. on Foundations of Computer Science, IEEE, pp. 224-246.

A. Yao [1982]. Theory and applications of trapdoor functions. Proc. 23rd Ann. Symp. on Foundations of Computer Science, IEEE, pp. 80-91.

A. C. C. Yao [1985]. Separating the polynomial-time hierarchy by oracles, Proc. 26th Ann. Symp. on Foundations of Computer Science, pp. 1-10, IEEE.

C. Zeng, A. A. Middleton, and Y. Shapir [1996]. Ground-state roughness of the disordered substrate and flux lines in d=2. Submitted for publication.

