Cobham's Thesis
Cobham's thesis, also known as the Cobham–Edmonds thesis (named after Alan Cobham and Jack Edmonds), asserts that computational problems can be feasibly computed on some computational device only if they can be computed in polynomial time; that is, if they lie in the complexity class P.

Formally, to say that a problem can be solved in polynomial time is to say that there exists an algorithm that, given an n-bit instance of the problem as input, can produce a solution in time O(n^c), where c is a constant that depends on the problem but not the particular instance of the problem.
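
As a concrete illustration (a hypothetical example, not from Cobham's paper), the following Python function decides whether a list of n elements contains a duplicate using at most n(n−1)/2 comparisons, so it runs in O(n^2) time:

    def has_duplicate(items):
        # Compare every pair of elements: at most n * (n - 1) / 2 comparisons,
        # so the running time is O(n^2) for an input of n elements.
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False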

Alan Cobham's 1965 paper entitled "The intrinsic computational difficulty of functions" is one of the earliest mentions of the concept of the complexity class P, consisting of problems decidable in polynomial time. Cobham theorized that this complexity class was a good way to describe the set of feasibly computable problems. Any problem not contained in P is infeasible, but if a real-world problem can be solved by an algorithm in P, such an algorithm will generally be discovered eventually.

The class P is a useful object of study because it is not sensitive to the details of the model of computation: for example, a change from a single-tape Turing machine to a multi-tape machine can lead to a quadratic speedup, but any algorithm that runs in polynomial time under one model also does so on the other. A single-tape machine can simulate T(n) steps of a multi-tape machine in O(T(n)^2) steps, so a running time of O(n^c) on the multi-tape machine becomes at worst O(n^(2c)) on the single-tape machine, which is still polynomial.

Reasoning

The thesis is widely considered to be a good rule of thumb for real-life problems. Typical input lengths that users and programmers are interested in are approximately between 100 and 1,000,000. Consider an input length of n = 100 and a polynomial algorithm whose running time is n^2. This is a typical running time for a polynomial algorithm. (See the "Objections" section for a discussion of atypical running times.) The number of steps that it will require, for n = 100, is 100^2 = 10,000. A typical CPU can perform approximately 10^9 operations per second (this is an extreme simplification). So this algorithm will finish on the order of (10,000 ÷ 10^9) = 10^-5 seconds. A running time of 10^-5 seconds is reasonable, and that is why this is called a practical algorithm. The same algorithm with an input length of 1,000,000 will take on the order of 17 minutes, which is also a reasonable time for most (non-real-time) applications.
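
The arithmetic above can be checked directly. The following Python sketch assumes the simplified model of 10^9 operations per second used in this section:

    OPS_PER_SECOND = 10**9  # simplified CPU speed assumed above

    def quadratic_runtime_seconds(n):
        # Estimated wall-clock time for an algorithm taking n^2 steps.
        return n**2 / OPS_PER_SECOND

    print(quadratic_runtime_seconds(100))         # 1e-05 seconds
    print(quadratic_runtime_seconds(10**6) / 60)  # about 16.7 minutes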

Meanwhile, an algorithm that runs in exponential time might have a running time of 2^n. The number of operations that it will require, for n = 100, is 2^100. It will take (2^100 ÷ 10^9) ≈ 1.3×10^21 seconds, which is (1.3×10^21 ÷ 31,556,926 seconds per year) ≈ 4.1×10^13 years. The largest problem this algorithm could solve in a day would have n = 46, which seems very small.
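
Under the same simplified model, the largest n such that 2^n operations fit into one day can be found with a short loop:

    OPS_PER_SECOND = 10**9
    SECONDS_PER_DAY = 86_400
    budget = OPS_PER_SECOND * SECONDS_PER_DAY  # about 8.64 * 10^13 operations

    n = 0
    while 2 ** (n + 1) <= budget:  # grow n while the next doubling still fits
        n += 1
    print(n)  # 46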

Objections

There are many lines of objection to Cobham's thesis. The thesis essentially states that "P" means "easy, fast, and practical," while "not in P" means "hard, slow, and impractical." But this is not always true. To begin with, it abstracts away some important variables that influence the runtime in practice:
  • It ignores constant factors and lower-order terms.
  • It ignores the size of the exponent. The time hierarchy theorem proves the existence of problems in P requiring arbitrarily large exponents.
  • It ignores the typical size of the input.


All three are related, and are general complaints about the analysis of algorithms, but they particularly apply to Cobham's thesis since it makes an explicit claim about practicality. Under Cobham's thesis, we are to call a problem for which the best algorithm takes 10^100 n instructions feasible, and a problem with an algorithm that takes 2^(0.00001 n) instructions infeasible, even though we could never solve an instance of size n = 1 with the former algorithm, whereas we could solve an instance of the latter problem of size n = 10^6 without difficulty. As we saw, it takes a day on a typical modern machine to process 2^n operations when n = 46; this may be the size of the inputs we have, and the amount of time we have to solve a typical problem, making the 2^n-time algorithm feasible in practice on the inputs we have. Conversely, in fields where practical problems have millions of variables (such as operations research or electronic design automation), even O(n^3) algorithms are often impractical.
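
The comparison can be made concrete with a short calculation, using the constants from the example above:

    # The "feasible" polynomial algorithm at its smallest input, n = 1:
    polynomial_steps = 10**100 * 1               # 10^100 steps: hopeless

    # The "infeasible" exponential algorithm at a large input, n = 10^6:
    exponential_steps = 2 ** (0.00001 * 10**6)   # 2^10 = 1024 steps: trivial

    print(polynomial_steps > exponential_steps)  # True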

Cobham's thesis also ignores other models of computation. A problem that requires exponential time to find the exact solution might nevertheless allow a fast approximation algorithm that returns a solution that is almost correct. Allowing the algorithm to make random choices, or to sometimes make mistakes, might allow an algorithm to run in polynomial time rather than exponential time. Though this is currently believed to be unlikely (see RP, BPP), in practice randomized algorithms are often the fastest algorithms available for a problem (quicksort, for example, or the Miller–Rabin primality test). Finally, quantum computers are able to solve in polynomial time some problems, such as integer factorization via Shor's algorithm, that have no known polynomial-time algorithm on current computers; but this is not currently a practical concern, since large-scale quantum computers are not yet available.
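
To illustrate the point about randomized algorithms, here is a minimal sketch of the Miller–Rabin primality test in Python. It runs in time polynomial in the bit length of n, and it may wrongly declare a composite number prime with probability at most 4^-rounds, but it never misclassifies a prime:

    import random

    def is_probably_prime(n, rounds=40):
        # Miller-Rabin: a randomized polynomial-time primality test.
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):  # quick trial division by small primes
            if n % p == 0:
                return n == p
        # Write n - 1 as d * 2^s with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)  # modular exponentiation: polynomial in bit length
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # a witnesses that n is composite
        return True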