Stationary phase approximation
In mathematics, the stationary phase approximation is a basic principle of asymptotic analysis, applying to oscillatory integrals

I(k) = ∫_{ℝⁿ} g(x) e^{i k f(x)} dx

taken over n-dimensional space ℝⁿ, where i is the imaginary unit. Here f and g are real-valued smooth functions. The role of g is to ensure convergence; that is, g is a test function. The large real parameter k is considered in the limit as k → ∞.

Basics

The main idea of stationary phase methods relies on the cancellation of sinusoids with rapidly-varying phase. If many sinusoids have the same phase and they are added together, they will add constructively. If, however, these same sinusoids have phases which change rapidly as the frequency changes, they will add incoherently, varying between constructive and destructive addition at different times.
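This cancellation can be made concrete with a toy numerical sketch (the phase increment of 1 radian per term and the count N are arbitrary illustrative choices): summing unit sinusoids with identical phases gives magnitude N, while a steadily rotating phase leaves only an order-one remainder.

```python
import numpy as np

N = 10_000
n = np.arange(N)

# Identical phases: all N unit vectors point the same way, so |sum| = N.
coherent = np.sum(np.exp(1j * 0.0 * n))

# Rapidly varying phase (1 radian per term): successive terms rotate
# around the unit circle and almost completely cancel.
incoherent = np.sum(np.exp(1j * 1.0 * n))

print(abs(coherent), abs(incoherent))
```

The rotating-phase sum is a geometric series bounded by 2/|1 − e^{i}| ≈ 2.1 independently of N; this contrast between coherent and incoherent addition is the germ of the stationary phase principle.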

An example

Consider a function

f(x, t) = (1/2π) ∫ F(ω) e^{i(k(ω)x − ωt)} dω.

The phase term in this function, φ(ω) = k(ω)x − ωt, is "stationary" when

dφ/dω = 0,

or equivalently,

dk/dω |_{ω = ω₀} = t/x.

Solutions to this equation yield dominant frequencies ω₀ for a given x and t. If we expand φ in a Taylor series about ω₀ and neglect terms of order higher than (ω − ω₀)²,

φ(ω) ≈ k(ω₀)x − ω₀t + (1/2) x k″(ω₀) (ω − ω₀)²,

where k″ denotes the second derivative of k (the linear term vanishes by the stationarity condition). When x is relatively large, even a small difference (ω − ω₀) will generate rapid oscillations within the integral, leading to cancellation. Therefore we can extend the limits of integration beyond the limit for a Taylor expansion. If we double the real contribution from the positive frequencies of the transform to account for the negative frequencies,

f(x, t) ≈ (1/2π) · 2 Re{ e^{i(k(ω₀)x − ω₀t)} ∫ F(ω) e^{(i/2) x k″(ω₀)(ω − ω₀)²} dω }.

This integrates to

f(x, t) ≈ (|F(ω₀)|/π) √(2π / (x |k″(ω₀)|)) cos( k(ω₀)x − ω₀t ± π/4 + arg F(ω₀) ).
Reduction steps

The first major general statement of the principle involved is that the asymptotic behaviour of I(k) depends only on the critical points of f. If by choice of g the integral is localised to a region of space where f has no critical point, the resulting integral tends to 0 as the frequency of oscillations is taken to infinity. See for example the Riemann–Lebesgue lemma.
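A quick numerical contrast illustrates this localisation (with an illustrative Gaussian standing in for a compactly supported g): the phase f(x) = x has no critical point and the integral is numerically negligible, while f(x) = x² has a critical point at 0 and decays only like k^(−1/2).

```python
import numpy as np

k = 200.0
x = np.linspace(-6.0, 6.0, 2_000_001)
dx = x[1] - x[0]
g = np.exp(-x**2)  # illustrative smooth, rapidly decaying test function

# f(x) = x: no critical point, so the oscillations cancel almost completely.
no_crit = np.sum(g * np.exp(1j * k * x)) * dx

# f(x) = x**2: one critical point at 0, contribution of size ~ sqrt(pi/k).
with_crit = np.sum(g * np.exp(1j * k * x**2)) * dx

print(abs(no_crit), abs(with_crit))
```

The first magnitude is below numerical noise (for a smooth g it decays faster than any power of 1/k), while the second is close to √(π/k) ≈ 0.125.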

The second statement is that when f is a Morse function, so that the singular points of f are non-degenerate and isolated, then the question can be reduced to the case n = 1. In fact, then, a choice of g can be made to split the integral into cases with just one critical point P in each. At that point, because the Hessian determinant at P is by assumption not 0, the Morse lemma applies. By a change of co-ordinates f may be replaced by
x₁² + … + x_j² − x_{j+1}² − x_{j+2}² − … − x_n².


The value of j is given by the signature of the Hessian matrix of f at P. As for g, the essential case is that g is a product of bump functions of the xi. Assuming now without loss of generality that P is the origin, take a smooth bump function h with value 1 on the interval [−1, 1] and quickly tending to 0 outside it. Take
g(x) = Π h(xi).

Then Fubini's theorem reduces I(k) to a product of integrals over the real line like

∫ h(x) e^{i k f(x)} dx

with f(x) = x² or −x². The case with the minus sign is the complex conjugate of the case with the plus sign, so there is essentially one required asymptotic estimate.
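The conjugate relation is easy to verify numerically; in the sketch below a Gaussian stands in for the bump function h (an illustrative substitute), and because h is real the minus-sign factor comes out as the exact conjugate of the plus-sign factor.

```python
import numpy as np

k = 50.0
x = np.linspace(-6.0, 6.0, 1_000_001)
dx = x[1] - x[0]
h = np.exp(-x**2)  # real stand-in for the bump function (illustrative)

plus = np.sum(h * np.exp(1j * k * x**2)) * dx    # f(x) = +x**2
minus = np.sum(h * np.exp(-1j * k * x**2)) * dx  # f(x) = -x**2

# h is real, so conjugating the integrand swaps the two cases.
print(abs(minus - np.conj(plus)))
```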

In this way asymptotics can be found for oscillatory integrals for Morse functions. The degenerate case requires further techniques; see for example the Airy function.

One-dimensional case

The essential statement is this one:

∫_{−1}^{1} e^{i k x²} dx = √(π/k) e^{iπ/4} + O(1/k).
In fact, by contour integration it can be shown that the main term on the right-hand side of the equation is the value of the integral on the left-hand side extended over the range (−∞, ∞). Therefore the question is one of estimating away the integral over, say, [1, ∞).
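The estimate, including the size of the error term, can be checked numerically: in this sketch the deviation of the model integral from the main term √(π/k) e^{iπ/4} is computed at two values of k, and it shrinks in proportion to 1/k.

```python
import numpy as np

def model_integral(k, n=400_001):
    # Trapezoid-rule evaluation of the model integral over [-1, 1].
    x = np.linspace(-1.0, 1.0, n)
    y = np.exp(1j * k * x**2)
    dx = x[1] - x[0]
    return (np.sum(y) - 0.5 * (y[0] + y[-1])) * dx

errs = []
for k in (100.0, 400.0):
    main = np.sqrt(np.pi / k) * np.exp(1j * np.pi / 4.0)
    errs.append(abs(model_integral(k) - main))

print(errs)  # each error is of size roughly 1/k
```

A single integration by parts on the tail over [1, ∞) shows the error is 1/k to leading order, which is what the two printed values exhibit.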

This is the model for all one-dimensional integrals I(k) with f having a single non-degenerate critical point at which f has second derivative > 0. In fact the model case has second derivative 2 at 0. In order to scale using k, observe that replacing k by ck, where c is a constant, is the same as scaling x by √c. It follows that for general values of f″(0) > 0, the factor √(π/k) becomes

√(2π / (k f″(0))).
For f″(0) < 0 one uses the complex conjugate formula, as was mentioned before.
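As a numerical sanity check of this factor (with illustrative choices f(x) = 3x², so f″(0) = 6, and a Gaussian test function g), the integral agrees with g(0) √(2π/(k f″(0))) e^{iπ/4} to within O(1/k):

```python
import numpy as np

k = 100.0
a = 6.0  # f''(0) for the illustrative choice f(x) = 3*x**2
x = np.linspace(-5.0, 5.0, 2_000_001)
dx = x[1] - x[0]
g = np.exp(-x**2)  # illustrative test function, with g(0) = 1

I = np.sum(g * np.exp(1j * k * 3.0 * x**2)) * dx
approx = np.sqrt(2.0 * np.pi / (k * a)) * np.exp(1j * np.pi / 4.0)

print(abs(I - approx) / abs(approx))  # small relative error
```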
The source of this article is wikipedia, the free encyclopedia.  The text of this article is licensed under the GFDL.
 