Random self-reducibility

Random self-reducibility (RSR) is the property that a good algorithm for the average case implies a good algorithm for the worst case: a problem is randomly self-reducible if all of its instances can be solved by solving a large fraction of the instances.

Definition

If evaluating a function f on any instance x can be reduced in polynomial time to evaluating f on one or more random instances yi, then f is self-reducible (this is also known as a non-adaptive uniform self-reduction). In a random self-reduction, an arbitrary worst-case instance x in the domain of f is mapped to a random set of instances y1, ..., yk, in such a way that f(x) can be computed in polynomial time given the coin-toss sequence from the mapping, x, and f(y1), ..., f(yk). Therefore, taking the average with respect to the induced distribution on the yi, the average-case complexity of f is the same (within polynomial factors) as the worst-case randomized complexity of f.

One special case of note is when each random instance yi is distributed uniformly over the entire set of elements in the domain of f that have length |x|. In this case f is as hard on average as it is in the worst case. This approach contains two key restrictions. First, the generation of y1, ..., yk is performed non-adaptively: y2 is picked before f(y1) is known. Second, the points y1, ..., yk need not be uniformly distributed in general.
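
To make the shape of such a reduction concrete, here is a minimal Python sketch of the non-adaptive pattern just described. The names randomize_instance and recover are hypothetical placeholders for the problem-specific mapping and recombination steps; this illustrates the structure, not any particular reduction.

    import random

    def solve_worst_case(x, average_case_solver, randomize_instance, recover, k):
        """Generic shape of a non-adaptive random self-reduction.

        average_case_solver -- assumed correct on a large fraction of random instances
        randomize_instance  -- maps (x, coins) to a single random instance y_i
        recover             -- recombines x, the coins, and f(y_1), ..., f(y_k) into f(x)
        """
        coins = [random.random() for _ in range(k)]
        # Non-adaptive: every y_i is generated before any oracle answer is seen.
        ys = [randomize_instance(x, c) for c in coins]
        answers = [average_case_solver(y) for y in ys]
        return recover(x, coins, answers)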

Application in cryptographic protocols

Problems that require some privacy in the data (typically cryptographic problems) can use randomization to ensure that privacy. In fact, the only provably secure cryptographic system (the one-time pad) has its security relying totally on the randomness of the key data supplied to the system.
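
As a minimal illustration of that point, the sketch below implements a toy one-time pad: each byte of the plaintext is XORed with a byte of a uniformly random, never-reused key of the same length.

    import secrets

    def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
        # Security rests entirely on the key being uniformly random,
        # as long as the message, and used only once.
        key = secrets.token_bytes(len(plaintext))
        ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
        return ciphertext, key

    def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
        return bytes(c ^ k for c, k in zip(ciphertext, key))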

The field of cryptography utilizes the fact that certain number-theoretic functions are randomly self-reducible. This includes probabilistic encryption and cryptographically strong pseudorandom number generation. Also, instance-hiding schemes (where a weak private device uses a strong public device without revealing its data) are easily exemplified by random self-reductions.

Examples

  • Discrete logarithm

Theorem: If a deterministic polynomial-time algorithm A computes the discrete logarithm for a 1/poly(n) fraction of inputs y (where n = |p| is the bit length of p), then there is a randomized polynomial-time algorithm that computes the discrete logarithm for all inputs.

Given a generator g and an element x of a cyclic group G, the discrete log of x to the base g is the integer k (0 ≤ k < |G|) such that x = g^k. Since the order of the group G is known, pick B uniformly at random from {0, ..., p − 1} (where p = |G| is prime). Then g^B is distributed uniformly on G (and, being a power of g, it is never 0), so xg^B is also distributed uniformly on G. Therefore the instance xg^B is independent of x, and

    log_g(xg^B) = B + log_g(x)  (mod |G|),

so the discrete logarithm is self-reducible: run the average-case algorithm A on the uniformly random instance xg^B and subtract B modulo |G| to recover log_g(x), as sketched below.
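
A minimal Python sketch of this reduction, assuming a hypothetical oracle average_case_dlog that returns the correct logarithm on some noticeable fraction of uniformly random group elements (and an arbitrary value, or None, elsewhere); here g generates a cyclic subgroup of Z_q^* of the given order.

    import random

    def rerandomized_dlog(x, g, q, order, average_case_dlog, trials=100):
        """Compute log_g(x) using an oracle that is only correct on a
        noticeable fraction of random instances (a sketch, not a
        hardened implementation)."""
        for _ in range(trials):
            b = random.randrange(order)        # uniform blinding exponent
            y = (x * pow(g, b, q)) % q         # y = x * g^b is uniform over the group
            k = average_case_dlog(y)           # the oracle sees a random instance
            if k is not None and pow(g, k, q) == y:
                return (k - b) % order         # log_g(x) = log_g(y) - b  (mod order)
        return None                            # oracle failed on every re-randomization

Because a candidate logarithm can be verified with a single exponentiation, the reduction simply retries with fresh randomness until the oracle answers correctly, driving the failure probability down exponentially in the number of trials.
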
  • Permanent

Given the definition of the permanent of a matrix, it is clear that PERM(M) for any n-by-n matrix M is a multivariate polynomial of degree n over the entries in M. Calculating the permanent of a matrix is a difficult computational task: PERM has been shown to be #P-complete (see Permanent is sharp-P-complete). Moreover, the ability to compute PERM(M) for most matrices implies the existence of a random program that computes PERM(M) for all matrices. This demonstrates that PERM is random self-reducible. The discussion below considers the case where the matrix entries are drawn from a finite field Fp for some prime p, and where all arithmetic is performed in that field.

Let X be a random n-by-n matrix with entries from Fp. All the entries of the matrix M + kX are linear functions of k, so composing those linear functions with the degree-n multivariate polynomial that computes PERM yields another degree-n polynomial, this time univariate in k, which we will call p(k). Clearly, p(0) is equal to the permanent of M.

Suppose we know a program that computes the correct value of PERM(A) for most n-by-n matrices A with entries from Fp, specifically for a 1 − 1/(3n) fraction of them. For each fixed k ≠ 0 the matrix M + kX is uniformly distributed, so by a union bound the program answers correctly on all of M + kX for k = 1, 2, ..., n + 1 with probability of approximately two-thirds. Once we have those n + 1 values, we can solve for the coefficients of p(k) using polynomial interpolation (remember that p(k) has degree n, so n + 1 points determine it). Once we know p(k) exactly, we evaluate p(0), which is equal to PERM(M).

This procedure is wrong with probability up to 1/3, but by picking multiple random matrices X, repeating the above procedure for each, and returning only the majority answer, we can drive the error rate down very low (see the sketch below).
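
The following Python sketch puts these steps together over Fp. The name average_case_perm is a hypothetical stand-in for the assumed program that is correct on most matrices; the inner loop Lagrange-interpolates the degree-n polynomial p through the points k = 1, ..., n + 1 and evaluates it at 0, and the outer loop takes a majority vote.

    import random

    def perm_via_average_case(M, p, average_case_perm, rounds=15):
        """Worst-to-average-case reduction for the permanent over F_p.
        Each round is correct with probability about 2/3, so the
        majority vote over many rounds is correct with high probability."""
        n = len(M)
        assert p > n + 1, "need n + 1 distinct nonzero points in F_p"
        votes = {}
        for _ in range(rounds):
            X = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
            ks = list(range(1, n + 2))
            # Query the oracle on the uniformly random matrices M + kX.
            vals = [average_case_perm([[(M[i][j] + k * X[i][j]) % p
                                        for j in range(n)] for i in range(n)])
                    for k in ks]
            # Lagrange interpolation of p(k), evaluated at k = 0.
            total = 0
            for i, (ki, vi) in enumerate(zip(ks, vals)):
                num, den = 1, 1
                for j, kj in enumerate(ks):
                    if i != j:
                        num = (num * (-kj)) % p        # factor (0 - k_j)
                        den = (den * (ki - kj)) % p    # factor (k_i - k_j)
                total = (total + vi * num * pow(den, p - 2, p)) % p
            votes[total] = votes.get(total, 0) + 1
        return max(votes, key=votes.get)               # majority answer
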
  • Quadratic residuosity problem
  • Inverse-RSA

Consequences

  • If an NP-complete problem is random self-reducible, then the polynomial hierarchy collapses to Σ3. Since the polynomial hierarchy is believed not to collapse, this effectively rules out non-adaptive uniform random self-reductions for NP-complete problems.
  • If a coNP-hard problem is random self-reducible in O(log n / n), then Σ2 = Π2.