QR algorithm
In numerical linear algebra, the QR algorithm is an eigenvalue algorithm: that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix. The QR transformation was developed in the late 1950s by John G.F. Francis (England) and by Vera N. Kublanovskaya (USSR), working independently. The basic idea is to perform a QR decomposition, writing the matrix as a product of an orthogonal matrix and an upper triangular matrix, multiply the factors in the reverse order, and iterate.

The practical QR algorithm

Formally, let A be a real matrix of which we want to compute the eigenvalues, and let A0 := A. At the k-th step (starting with k = 0), we compute the QR decomposition Ak = QkRk, where Qk is an orthogonal matrix and Rk is an upper triangular matrix. We then form Ak+1 = RkQk. Note that

Ak+1 = RkQk = Qk^-1 Qk Rk Qk = Qk^-1 Ak Qk = Qk^T Ak Qk,

so all the Ak are similar and hence they have the same eigenvalues. The algorithm is numerically stable because it proceeds by orthogonal similarity transforms.
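
A minimal sketch of this basic, unshifted iteration in Python with NumPy (the function name and the small test matrix are illustrative assumptions, not a reference implementation):

    import numpy as np

    def unshifted_qr(A, iters=1000):
        """Basic QR iteration: factor Ak = Qk Rk, then form Ak+1 = Rk Qk.
        Every step is an orthogonal similarity transform, so each iterate
        has the same eigenvalues as A."""
        Ak = np.asarray(A, dtype=float).copy()
        for _ in range(iters):
            Q, R = np.linalg.qr(Ak)
            Ak = R @ Q
        return Ak   # (near-)triangular under the convergence conditions discussed below

    A = np.array([[4.0, 1.0], [2.0, 3.0]])
    print(np.sort(np.diag(unshifted_qr(A))))   # approximately [2., 5.]
    print(np.sort(np.linalg.eigvals(A)))       # [2., 5.]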

Under certain conditions, the matrices Ak converge to a triangular matrix, the Schur form of A. The eigenvalues of a triangular matrix are listed on the diagonal, and the eigenvalue problem is solved. In testing for convergence it is impractical to require exact zeros, but the Gershgorin circle theorem provides a bound on the error.
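
A short sketch of the corresponding error bound (the helper name is hypothetical): the radius of each Gershgorin disc is the sum of the absolute off-diagonal entries of its row, so small off-diagonal rows certify that the diagonal entries of the current iterate are close to eigenvalues.

    import numpy as np

    def gershgorin_radii(A):
        """Row-wise Gershgorin radii: every eigenvalue of A lies in some disc
        centred at a diagonal entry A[i, i] with radius radii[i]."""
        A = np.asarray(A, dtype=float)
        return np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))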

In this crude form the iterations are relatively expensive. This can be mitigated by first bringing the matrix A to upper Hessenberg form (which costs O(n^3) arithmetic operations using a technique based on Householder reduction), with a finite sequence of orthogonal similarity transforms, somewhat like a two-sided QR decomposition. (For QR decomposition, the Householder reflectors are multiplied only on the left, but for the Hessenberg case they are multiplied on both left and right.) Determining the QR decomposition of an upper Hessenberg matrix costs O(n^2) arithmetic operations. Moreover, because the Hessenberg form is already nearly upper-triangular (it has just one nonzero entry below each diagonal), using it as a starting point reduces the number of steps required for convergence of the QR algorithm.
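
A sketch of this two-sided Householder reduction (simplified and unblocked, assuming a real input matrix; production code such as LAPACK's DGEHRD is considerably more careful):

    import numpy as np

    def hessenberg_reduce(A):
        """Reduce A to upper Hessenberg form by Householder similarity
        transforms: each reflector is applied on the left and on the right,
        so the result has the same eigenvalues as A."""
        H = np.asarray(A, dtype=float).copy()
        n = H.shape[0]
        for k in range(n - 2):
            x = H[k+1:, k].copy()
            v = x.copy()
            v[0] += (1.0 if x[0] >= 0 else -1.0) * np.linalg.norm(x)
            nv = np.linalg.norm(v)
            if nv == 0.0:
                continue                      # column already has the required zeros
            v /= nv
            # Left multiplication by P = I - 2 v v^T (zeros column k below the subdiagonal)
            H[k+1:, k:] -= 2.0 * np.outer(v, v @ H[k+1:, k:])
            # Right multiplication by P: the similarity part, applied to the columns
            H[:, k+1:] -= 2.0 * np.outer(H[:, k+1:] @ v, v)
        return H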

If the original matrix is symmetric, then the upper Hessenberg matrix is also symmetric and thus tridiagonal, and so are all the Ak. This procedure costs O(n^3) arithmetic operations using a technique based on Householder reduction. Determining the QR decomposition of a symmetric tridiagonal matrix costs O(n) operations.
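
For instance, with SciPy (assuming it is available; scipy.linalg.hessenberg performs a Householder-based reduction) one can check that the Hessenberg form of a symmetric matrix is tridiagonal:

    import numpy as np
    from scipy.linalg import hessenberg

    rng = np.random.default_rng(0)
    S = rng.standard_normal((5, 5))
    A = S + S.T                                     # a symmetric test matrix
    T = hessenberg(A)                               # Householder-based reduction
    tridiagonal_part = np.triu(np.tril(T, 1), -1)   # keep only the three central diagonals
    print(np.allclose(T, tridiagonal_part))         # True: T is tridiagonal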

The rate of convergence depends on the separation between eigenvalues, so a practical algorithm will use shifts, either explicit or implicit, to increase separation and accelerate convergence. A typical symmetric QR algorithm isolates each eigenvalue (then reduces the size of the matrix) with only one or two iterations, making it efficient as well as robust.
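
A sketch of a single-shift iteration with deflation, assuming real eigenvalues (for example, a symmetric matrix); the shift used here is simply the trailing diagonal entry, whereas practical codes use Wilkinson or Francis shifts:

    import numpy as np

    def qr_with_shifts(A, tol=1e-12, max_iter=500):
        """Shifted QR iteration with deflation, assuming real eigenvalues
        (e.g. a symmetric A). In practice A would first be reduced to
        Hessenberg or tridiagonal form; this sketch skips that for brevity."""
        H = np.asarray(A, dtype=float).copy()
        n = H.shape[0]
        eigenvalues = []
        while n > 1:
            for _ in range(max_iter):
                # Deflate once the last subdiagonal entry is negligible.
                if abs(H[n-1, n-2]) <= tol * (abs(H[n-2, n-2]) + abs(H[n-1, n-1])):
                    break
                mu = H[n-1, n-1]                       # simple shift
                Q, R = np.linalg.qr(H[:n, :n] - mu * np.eye(n))
                H[:n, :n] = R @ Q + mu * np.eye(n)     # similar to the previous iterate
            eigenvalues.append(H[n-1, n-1])
            n -= 1                                     # drop the isolated eigenvalue
        eigenvalues.append(H[0, 0])
        return np.array(eigenvalues)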

The implicit QR algorithm

In modern computational practice, the QR algorithm is performed in an implicit version, which makes it easier to introduce multiple shifts. The matrix is first brought to upper Hessenberg form as in the explicit version; then, at each step, the first column of Ak is transformed via a small-size Householder similarity transformation to the first column of p(Ak) (or p(Ak)e1), where p, of degree r, is the polynomial that defines the shifting strategy (often p(x) = (x − λ1)(x − λ2), where λ1 and λ2 are the two eigenvalues of the trailing 2×2 principal submatrix of Ak, the so-called implicit double-shift). Then successive Householder transformations of size r + 1 are performed in order to return the working matrix Ak to upper Hessenberg form. This operation is known as bulge chasing, due to the peculiar shape of the non-zero entries of the matrix along the steps of the algorithm. As in the first version, deflation is performed as soon as one of the sub-diagonal entries of Ak is sufficiently small.
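
The key computation in one implicit double-shift step is the first column of p(Ak)e1, which for an upper Hessenberg matrix has only three nonzero entries. A sketch of that computation (the function name is hypothetical, and the subsequent bulge chasing is only described, not implemented):

    import numpy as np

    def double_shift_first_column(H):
        """First three entries of p(H) e1 with p(x) = x^2 - s x + t, where s and t
        are the trace and determinant of the trailing 2x2 submatrix of the upper
        Hessenberg matrix H (so the roots of p are its two eigenvalues).
        All other entries of p(H) e1 vanish because H is Hessenberg."""
        n = H.shape[0]
        s = H[n-2, n-2] + H[n-1, n-1]                               # trace of trailing 2x2 block
        t = H[n-2, n-2] * H[n-1, n-1] - H[n-2, n-1] * H[n-1, n-2]   # its determinant
        x = H[0, 0] * H[0, 0] + H[0, 1] * H[1, 0] - s * H[0, 0] + t
        y = H[1, 0] * (H[0, 0] + H[1, 1] - s)
        z = H[1, 0] * H[2, 1]
        return np.array([x, y, z])

    # A 3x3 Householder reflector built from this vector, embedded in the identity and
    # applied as a similarity transform, creates a "bulge" below the subdiagonal; further
    # small Householder similarities chase it down and off the matrix, restoring
    # Hessenberg form.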

Renaming proposal

Since in the modern implicit version of the procedure no QR decompositions are explicitly performed, some authors, for instance Watkins, suggested changing its name to Francis algorithm. Golub and Van Loan use the term Francis QR step.

Interpretation and convergence

The QR algorithm can be seen as a more sophisticated variation of the basic "power" eigenvalue algorithm. Recall that the power algorithm repeatedly multiplies A times a single vector, normalizing after each iteration. The vector converges to an eigenvector of the largest eigenvalue. Instead, the QR algorithm works with a complete basis of vectors, using QR decomposition to renormalize (and orthogonalize). For a symmetric matrix A, upon convergence, AQ = QΛ, where Λ is the diagonal matrix of eigenvalues to which A converged, and where Q is a composite of all the orthogonal similarity transforms required to get there. Thus the columns of Q are the eigenvectors.
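
A sketch for the symmetric case, accumulating the orthogonal factors so that the columns of the accumulated Q approximate eigenvectors (unshifted, so convergence can be slow, and can fail when eigenvalues have equal magnitude):

    import numpy as np

    def qr_eig_symmetric(A, iters=500):
        """Unshifted QR iteration on a symmetric matrix, accumulating the
        orthogonal similarity transforms. Upon convergence A @ Q is close to
        Q @ diag(w), i.e. the columns of Q are approximate eigenvectors."""
        Ak = np.asarray(A, dtype=float).copy()
        Qacc = np.eye(Ak.shape[0])
        for _ in range(iters):
            Q, R = np.linalg.qr(Ak)
            Ak = R @ Q
            Qacc = Qacc @ Q
        return np.diag(Ak), Qacc

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    w, Q = qr_eig_symmetric(A)
    print(np.allclose(A @ Q, Q @ np.diag(w), atol=1e-8))   # True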

Other variants

One variant of the QR algorithm, the Golub-Kahan-Reinsch algorithm, starts by reducing a general matrix to a bidiagonal one; this variant computes the singular values rather than the eigenvalues. The LAPACK subroutine DBDSQR implements this iterative method, with some modifications to cover the case where the singular values are very small. Together with a first step using Householder reflections and, if appropriate, QR decomposition, this forms the DGESVD routine for the computation of the singular value decomposition.
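
With SciPy (assuming it is available), the QR-iteration-based LAPACK path can be selected explicitly when computing an SVD:

    import numpy as np
    from scipy.linalg import svd

    rng = np.random.default_rng(1)
    A = rng.standard_normal((6, 4))
    # lapack_driver="gesvd" selects the DGESVD path (Householder bidiagonalization
    # followed by the QR-type iteration of DBDSQR); the default "gesdd" uses
    # divide-and-conquer instead.
    U, s, Vt = svd(A, lapack_driver="gesvd")
    print(s)                                    # singular values, in decreasing order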
