Randomized algorithm
A randomized algorithm is an algorithm which employs a degree of randomness as part of its logic. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of random bits. Formally, the algorithm's performance will be a random variable determined by the random bits; thus either the running time, or the output (or both) are random variables.

One has to distinguish between algorithms that use the random input to reduce the expected running time or memory usage, but always terminate with a correct result in a bounded amount of time, and probabilistic algorithms, which, depending on the random input, have a chance of producing an incorrect result (Monte Carlo algorithms) or fail to produce a result (Las Vegas algorithms) either by signalling a failure or failing to terminate.

In the second case, random performance and random output, the term "algorithm" for a procedure is somewhat questionable. In the case of random output, it is no longer formally effective.
However, in some cases, probabilistic algorithms are the only practical means of solving a problem.

In common practice, randomized algorithms are approximated using a pseudorandom number generator in place of a true source of random bits; such an implementation may deviate from the expected theoretical behavior.

Motivation

As a motivating example, consider the problem of finding an 'a' in an array of n elements.

Input: An array of n elements, in which half are 'a's and the other half are 'b's.

Output: Find an 'a' in the array.

We give two versions of the algorithm, one Las Vegas algorithm and one Monte Carlo algorithm.

Las Vegas algorithm:


findingA_LV(array A, n)
begin
    repeat
        Randomly select one element out of n elements.
    until 'a' is found
end


This algorithm succeeds with probability 1. The running time is random (and can be arbitrarily large), but its expectation is upper-bounded by O(1): since half the elements are 'a's, each probe succeeds with probability 1/2, so the expected number of iterations is 2. (See Big O notation.)

Monte Carlo algorithm:

findingA_MC(array A, n, k)
begin
    i = 1
    repeat
        Randomly select one element out of n elements.
        i = i + 1
    until i > k or 'a' is found
end

If an 'a' is found, the algorithm succeeds, else the algorithm fails. After k iterations, the probability of finding an 'a' is

Pr[find 'a'] = 1 − (1/2)^k

This algorithm does not guarantee success, but the run time is bounded. The selection is executed at most k times, therefore the runtime is O(k).
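
To make the two versions concrete, here is a minimal Python sketch of both procedures; the function names and the use of a Python list for the array are our own illustration, not part of the original formulation.

import random

def finding_a_lv(arr):
    """Las Vegas version: always correct, running time is random."""
    while True:
        if random.choice(arr) == 'a':   # uniform random probe
            return 'a'                  # succeeds with probability 1

def finding_a_mc(arr, k):
    """Monte Carlo version: at most k probes, may fail."""
    for _ in range(k):
        if random.choice(arr) == 'a':
            return 'a'                  # success
    return None                         # failure, with probability (1/2)**k

arr = ['a', 'b'] * 8
print(finding_a_lv(arr))      # always 'a'
print(finding_a_mc(arr, 5))   # 'a' with probability 1 - (1/2)**5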

Randomized algorithms are particularly useful when faced with a malicious "adversary" or attacker who deliberately tries to feed a bad input to the algorithm (see worst-case complexity and competitive analysis (online algorithms)), such as in the Prisoner's dilemma. It is for this reason that randomness is ubiquitous in cryptography. In cryptographic applications, pseudo-random numbers cannot be used, since the adversary can predict them, making the algorithm effectively deterministic. Therefore either a source of truly random numbers or a cryptographically secure pseudo-random number generator is required. Another area in which randomness is inherent is quantum computing.

In the example above, the Las Vegas algorithm always outputs the correct answer, but its running time is a random variable. The Monte Carlo algorithm (related to the Monte Carlo method for simulation) completes in a fixed amount of time (as a function of the input size), but allows a small probability of error. Observe that any Las Vegas algorithm can be converted into a Monte Carlo algorithm (via Markov's inequality), by having it output an arbitrary, possibly incorrect answer if it fails to complete within a specified time. Conversely, if an efficient verification procedure exists to check whether an answer is correct, then a Monte Carlo algorithm can be converted into a Las Vegas algorithm by running the Monte Carlo algorithm repeatedly till a correct answer is obtained.
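
The second conversion can be sketched directly; the names monte_carlo and verify below are hypothetical stand-ins for a problem-specific solver and checker, not a fixed API.

def las_vegas_from_monte_carlo(monte_carlo, verify, x):
    """Repeat a Monte Carlo algorithm until an efficient verifier accepts.

    monte_carlo(x) may return a wrong answer with some probability < 1;
    verify(x, answer) must be fast and always correct.  The returned
    answer is then always correct; only the running time is random.
    """
    while True:
        answer = monte_carlo(x)
        if verify(x, answer):
            return answer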

Computational complexity

Computational complexity theory models randomized algorithms as probabilistic Turing machines. Both Las Vegas and Monte Carlo algorithms are considered, and several complexity classes are studied. The most basic randomized complexity class is RP, which is the class of decision problems for which there is an efficient (polynomial time) randomized algorithm (or probabilistic Turing machine) which recognizes NO-instances with absolute certainty and recognizes YES-instances with a probability of at least 1/2. The complement class for RP is co-RP. Problem classes having (possibly nonterminating) algorithms with polynomial time average case running time whose output is always correct are said to be in ZPP.

The class of problems for which both YES and NO-instances are allowed to be identified with some error is called BPP. This class acts as the randomized equivalent of P, i.e. BPP represents the class of efficient randomized algorithms.

History

Historically, the first randomized algorithm was a method developed by Michael O. Rabin for the closest pair problem in computational geometry. The study of randomized algorithms was spurred by the 1977 discovery of a randomized primality test (i.e., determining the primality of a number) by Robert M. Solovay and Volker Strassen. Soon afterwards Michael O. Rabin demonstrated that the 1976 Miller's primality test can be turned into a randomized algorithm. At that time, no practical deterministic algorithm for primality was known.

The Miller-Rabin primality test relies on a binary relation between two positive integers k and n that can be expressed by saying that k "is a witness to the compositeness of" n. It can be shown that
  • If there is a witness to the compositeness of n, then n is composite (i.e., n is not prime), and
  • If n is composite then at least three-fourths of the natural numbers less than n are witnesses to its compositeness, and
  • There is a fast algorithm that, given k and n, ascertains whether k is a witness to the compositeness of n.

Observe that this implies that the primality problem is in co-RP.

If one randomly chooses 100 numbers less than a composite number n, then the probability of failing to find such a "witness" is (1/4)^100, so that for most practical purposes, this is a good primality test. If n is big, there may be no other test that is practical. The probability of error can be reduced to an arbitrary degree by performing enough independent tests.
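
A compact Python sketch of the witness test and the repeated-trial scheme just described; the function names are ours, and pow(k, d, n) is Python's built-in modular exponentiation.

import random

def is_witness(k, n):
    """Return True if k witnesses the compositeness of odd n > 2."""
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 = 2**s * d with d odd
        d //= 2
        s += 1
    x = pow(k, d, n)
    if x == 1 or x == n - 1:
        return False           # k is not a witness
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return False
    return True                # k proves that n is composite

def probably_prime(n, trials=100):
    """Monte Carlo primality test; error probability at most (1/4)**trials."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    for _ in range(trials):
        if is_witness(random.randrange(2, n - 1), n):
            return False       # certainly composite
    return True                # probably prime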

Therefore, in practice, there is no penalty associated with accepting a small probability of error, since with a little care the probability of error can be made astronomically small. Indeed, even though a deterministic polynomial-time primality test has since been found (see AKS primality test), it has not replaced the older probabilistic tests in cryptographic software, nor is it expected to do so for the foreseeable future.

Quicksort

Quicksort is a familiar, commonly used algorithm in which randomness can be useful. Any deterministic version of this algorithm requires O(n^2) time to sort n numbers for some well-defined class of degenerate inputs (such as an already sorted array), with the specific class of inputs that generate this behavior defined by the protocol for pivot selection. However, if the algorithm selects pivot elements uniformly at random, it has a provably high probability of finishing in O(n log n) time regardless of the characteristics of the input.
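
A minimal sketch of the randomized pivot rule (a functional version for clarity, not the usual in-place partition):

import random

def quicksort(arr):
    """Randomized quicksort: expected O(n log n) time on every input."""
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)                    # uniformly random pivot
    return (quicksort([x for x in arr if x < pivot])
            + [x for x in arr if x == pivot]
            + quicksort([x for x in arr if x > pivot]))

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))       # [1, 1, 2, 3, 4, 5, 6, 9]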

Randomized incremental constructions in geometry

In computational geometry, a standard technique to build a structure like a convex hull or Delaunay triangulation is to randomly permute the input points and then insert them one by one into the existing structure. The randomization ensures that the expected number of changes to the structure caused by an insertion is small, and so the expected running time of the algorithm can be upper bounded. This technique is known as randomized incremental construction.
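
The pattern itself is generic; in the sketch below, the structure-specific pieces (an initial structure and an insert routine) are hypothetical arguments supplied by the caller.

import random

def randomized_incremental(points, structure, insert):
    """Insert points in uniformly random order into a growing structure.

    The random permutation is what makes the expected cost of each
    insert call small (this is the standard backwards analysis).
    """
    points = list(points)
    random.shuffle(points)     # random insertion order
    for p in points:
        insert(structure, p)   # e.g., update a hull or triangulation
    return structure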

Verifying matrix multiplication

Input: Matrices A ∈ R^(m×p), B ∈ R^(p×n), and C ∈ R^(m×n).

Output: True if C = A · B; false if C ≠ A · B.

We give a Monte Carlo algorithm to solve the problem.
begin
    i = 1
    repeat
        Choose r = (r1,...,rn) ∈ {0,1}^n at random.
        Compute C · r and A · (B · r)
        if C · r ≠ A · (B · r)
            return FALSE
        endif
        i = i + 1
    until i > k
    return TRUE
end
For n × n matrices, the running time of the algorithm is O(kn^2), since each of the k iterations computes only matrix-vector products.

Theorem: The algorithm is correct with probability at least 1 − (1/2)^k.

We will prove that if A · B ≠ C then Pr[A · B · r = C · r] ≤ 1/2.

If A · B ≠ C, by definition we have D = A · B − C ≠ 0. Without loss of generality,
we assume that d11 ≠ 0.

On the other hand, Pr[A · B · r = C · r] = Pr[D · r = 0].

If D · r = 0, then the first entry of D · r is 0, that is

d11 r1 + d12 r2 + ... + d1n rn = 0

Since d11 ≠ 0, we can solve for r1:

r1 = −(d12 r2 + ... + d1n rn) / d11

If we fix all rj except r1, the equality holds for at most one of the two choices for r1 ∈ {0,1}. Therefore, Pr[D · r = 0] ≤ 1/2.

We run the loop k times. If C = A · B, the algorithm is always correct; if C ≠ A · B, the probability of getting the correct answer is at least 1 − (1/2)^k.
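
A minimal Python sketch of this verifier (Freivalds' algorithm), assuming matrices as plain lists of lists; the helper names are ours.

import random

def freivalds(A, B, C, k=10):
    """Monte Carlo check of C == A·B using k random 0/1 vectors.

    If C == A·B the test always passes; otherwise each round detects
    the mismatch with probability at least 1/2, so a wrong C survives
    all k rounds with probability at most (1/2)**k.
    """
    def matvec(M, v):          # plain matrix-vector product
        return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

    n = len(B[0])
    for _ in range(k):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(C, r) != matvec(A, matvec(B, r)):   # compare C·r with A·(B·r)
            return False       # certainly C != A·B
    return True                # probably C == A·B

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(freivalds(A, B, [[19, 22], [43, 50]]))   # True
print(freivalds(A, B, [[19, 22], [43, 51]]))   # almost surely False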

Min cut

Input: A graph G(V,E)

Output: A cut partitioning the vertices into L and R, with the minimum number of edges between L and R.

Recall that the contraction of two nodes, u and v, in a (multi-)graph yields a new node u' with edges that are the union of the edges incident on either u or v, except for any edge(s) connecting u and v. Figure 1 gives an example of contraction of vertices A and B.
After contraction, the resulting graph may have parallel edges, but contains no self-loops.
Karger's basic algorithm:
begin
    i = 1
    repeat
        repeat
            Take a random edge (u,v) ∈ E in G
            replace u and v with the contraction u'
        until only 2 nodes remain
        obtain the corresponding cut result Ci
        i = i + 1
    until i > m
    output the minimum cut among C1, C2, ..., Cm.
end
In each execution of the outer loop, the algorithm repeats the inner loop until only 2 nodes remain, and the corresponding cut is obtained. The run time of one execution is O(n), where n denotes the number of vertices, since each execution performs n − 2 contractions. After m executions of the outer loop, we output the minimum cut among all the results. Figure 2 gives an example of one execution of the algorithm; after execution, we get a cut of size 1.
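
A compact Python sketch of the contraction algorithm on a connected multigraph given as an edge list; the relabelling scheme below favors clarity over matching the O(n) per-execution bound stated above.

import random

def karger_min_cut(edges, m):
    """Run m independent contraction rounds; return the smallest cut found.

    Each round finds the true min cut with probability at least
    2/(n(n-1)), so all m rounds fail with probability at most
    (1 - 2/(n(n-1)))**m.  Assumes a connected input graph.
    """
    best = float('inf')
    for _ in range(m):
        super_of = {v: v for u, w in edges for v in (u, w)}   # supernode labels
        work = list(edges)
        n_nodes = len(super_of)
        while n_nodes > 2:
            u, w = random.choice(work)       # random edge between supernodes
            a, b = super_of[u], super_of[w]
            for v in super_of:               # contract supernode b into a
                if super_of[v] == b:
                    super_of[v] = a
            # drop self-loops created by the contraction
            work = [(x, y) for x, y in work if super_of[x] != super_of[y]]
            n_nodes -= 1
        best = min(best, len(work))          # edges crossing the final cut
    return best

print(karger_min_cut([(0, 1), (1, 2), (2, 3)], m=20))   # usually 1 (the path's min cut)
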
Lemma 1: Let k be the min cut size, and let C = {e1, e2, ..., ek} be the min cut. If, during iteration i, no edge e ∈ C is selected for contraction, then Ci = C.

Proof: If G is not connected, then G can be partitioned into L and R without any edge between them, so the min cut in a disconnected graph is 0. Now, assume G is connected. Let V = L ∪ R be the partition of V induced by C: C = { {u,v} ∈ E : u ∈ L, v ∈ R } (well-defined since G is connected). Consider an edge e = {u,v} of C. Initially, u and v are distinct vertices. As long as we pick an edge f ≠ e, u and v do not get merged. Thus, at the end of the algorithm, we have two compound nodes covering the entire graph, one consisting of the vertices of L and the other consisting of the vertices of R. As in Figure 2, the size of the min cut is 1, and C = {(A,B)}. If we do not select (A,B) for contraction, we get the min cut.

Lemma 2: If G is a multigraph with p vertices whose min cut has size k, then G has at least pk/2 edges.

Proof:
Because the min cut is k, every vertex v must satisfy degree(v) ≥ k. Therefore, the sum of the degrees is at least pk. But it is well known that the sum of vertex degrees equals 2|E|. The lemma follows.

Analysis of the algorithm

The probability that the algorithm succeeds is 1 minus the probability that all attempts fail. By independence, the probability that all attempts fail is

Pr[C1 ≠ C] · Pr[C2 ≠ C] · ... · Pr[Cm ≠ C] = (1 − Pr[Ci = C])^m

By Lemma 1, the probability that Ci = C is the probability that no edge of C is selected during iteration i. Consider the inner loop and let Gj denote the graph after j edge contractions, where j ∈ {0,1,...,n−3}. Gj has n−j vertices. We use the chain rule of conditional probabilities.
The probability that the edge chosen at iteration j is not in C, given that no edge of C has been chosen before, is 1 − k/|E(Gj)|. Note that Gj still has a min cut of size k, so by Lemma 2, it still has at least (n−j)k/2 edges.

Thus, 1 − k/|E(Gj)| ≥ 1 − k/((n−j)k/2) = 1 − 2/(n−j) = (n−j−2)/(n−j).

So by the chain rule, the probability of finding the min cut C is

Pr[Ci = C] ≥ ((n−2)/n) · ((n−3)/(n−1)) · ((n−4)/(n−2)) · ... · (2/4) · (1/3)

Cancellation gives Pr[Ci = C] ≥ 2/(n(n−1)). Thus the probability that the algorithm succeeds is at least 1 − (1 − 2/(n(n−1)))^m. For m = (n(n−1)/2) ln n, this is at least 1 − 1/n (using 1 − x ≤ e^(−x)). The algorithm therefore finds the min cut with probability at least 1 − 1/n, in time O(mn) = O(n^3 log n).

Derandomization

Randomness can be viewed as a resource, like space and time. Derandomization is then the process of removing randomness (or using as little of it as possible). From the viewpoint of computational complexity, derandomizing an efficient randomized algorithm is the question: is P = BPP?

There are also specific methods that can be employed to derandomize particular randomized algorithms:
  • the method of conditional probabilities, and its generalization, pessimistic estimators
  • discrepancy theory (which is used to derandomize geometric algorithms)
  • the exploitation of limited independence in the random variables used by the algorithm, such as the pairwise independence used in universal hashing (see the sketch after this list)
  • the use of expander graphs (or dispersers in general) to amplify a limited amount of initial randomness (this last approach is also referred to as generating pseudorandom bits from a random source, and leads to the related topic of pseudorandomness)
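
As a small illustration of the limited-independence idea, here is a sketch of the standard universal family h(x) = ((a·x + b) mod p) mod m; the function name and the choice of Mersenne prime are our own.

import random

def make_universal_hash(m, p=2**61 - 1):
    """Sample h(x) = ((a*x + b) mod p) mod m from a universal family.

    p is a prime larger than any key; a and b are drawn at random once.
    Two distinct keys collide with probability about 1/m, and the usual
    hashing analysis needs only this pairwise guarantee, not full
    independence of all hash values.
    """
    a = random.randrange(1, p)    # a != 0
    b = random.randrange(0, p)
    return lambda x: ((a * x + b) % p) % m

h = make_universal_hash(1024)
print(h(42), h(43))               # two bucket indices in [0, 1024)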

Where randomness helps

When the model of computation is restricted to Turing machines, it is currently an open question whether the ability to make random choices allows some problems to be solved in polynomial time that cannot be solved in polynomial time without this ability; this is the question of whether P = BPP. However, in other contexts, there are specific examples of problems where randomization yields strict improvements.
  • Based on the initial motivating example: given an exponentially long string of 2^k characters, half a's and half b's, a random access machine requires at least 2^(k−1) lookups in the worst case to find the index of an a; if it is permitted to make random choices, it can solve this problem in an expected polynomial number of lookups.
  • In communication complexity, the equality of two strings can be verified using O(log n) bits of communication with a randomized protocol. Any deterministic protocol requires Θ(n) bits. (A fingerprint sketch appears after this list.)
  • The volume of a convex body can be estimated by a randomized algorithm to arbitrary precision in polynomial time. Bárány and Füredi showed that no deterministic algorithm can do the same. This is true unconditionally, i.e. without relying on any complexity-theoretic assumptions.
  • A more complexity-theoretic example of a place where randomness appears to help is the class IP. IP consists of all languages that can be accepted (with high probability) by a polynomially long interaction between an all-powerful prover and a verifier that implements a BPP algorithm. IP = PSPACE. However, if it is required that the verifier be deterministic, then IP = NP.
  • In a Chemical Reaction Network (a finite set of reactions like A+B → 2C + D operating on a finite number of molecules), the ability to ever reach a given target state from an initial state is decidable, while even approximating the probability of ever reaching a given target state (using the standard concentration-based probability for which reaction will occur next) is undecidable. More specifically, a Turing machine can be simulated with arbitrarily high probability of running correctly for all time only if a random CRN is used. With a simple nondeterministic CRN (any possible reaction can happen next), the computational power is limited to primitive recursive functions.
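
For the communication-complexity example above, here is a sketch of one round of a randomized fingerprint protocol; the modulus and helper names are our own choices.

import random

P = 2**61 - 1          # a fixed large prime modulus

def fingerprint(s, r):
    """Evaluate the string as a polynomial in r over the integers mod P."""
    acc = 0
    for ch in s:
        acc = (acc * r + ord(ch)) % P
    return acc

def probably_equal(x, y):
    """One protocol round: Alice sends (r, fingerprint(x, r)), which is
    O(log P) bits, and Bob compares it with his own fingerprint.
    Distinct strings of length n collide with probability at most n/P."""
    r = random.randrange(P)
    return fingerprint(x, r) == fingerprint(y, r)

print(probably_equal("hello", "hello"))   # True
print(probably_equal("hello", "hellp"))   # almost surely False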