In linear algebra, an n-by-n (square) matrix A is called invertible (some authors use nonsingular or nondegenerate) if there exists an n-by-n matrix B such that

AB = BA = I_{n},

where I_{n} denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A and is called the inverse of A, denoted by A^{−1}. It follows from the theory of matrices that if

AB = I

for finite square matrices A and B, then also

BA = I.
Non-square matrices (m-by-n matrices for which m ≠ n) do not have an inverse. However, in some cases such a matrix may have a left inverse or right inverse. If A is m-by-n and the rank of A is equal to n, then A has a left inverse: an n-by-m matrix B such that BA = I. If A has rank m, then it has a right inverse: an n-by-m matrix B such that AB = I.
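As an illustrative sketch in plain Python (the helper names and the example matrix are made up for this illustration, not taken from the text): for an m-by-n matrix A of rank n, one standard left inverse is B = (AᵀA)⁻¹Aᵀ, since then BA = (AᵀA)⁻¹(AᵀA) = I.

```python
# Sketch: a left inverse of a tall (m-by-n, rank n) matrix A via B = (A^T A)^{-1} A^T.
# Plain nested lists stand in for a matrix library; names are illustrative.

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(M):
    # Inverse of a 2x2 matrix by the cofactor formula.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 3-by-2 matrix of rank 2
At = transpose(A)
B = matmul(inv2(matmul(At, A)), At)        # 2-by-3 left inverse
BA = matmul(B, A)
print(BA)  # approximately the 2x2 identity
```

Note that B is a left inverse only: AB is a 3-by-3 projection matrix, not the identity, which is exactly why non-square matrices have no two-sided inverse.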
A square matrix that is not invertible is called singular or degenerate. A square matrix is singular if and only if its determinant is 0. Singular matrices are rare in the sense that a randomly chosen square matrix will almost surely not be singular.
While the most common case is that of matrices over the real or complex numbers, all these definitions can be given for matrices over any commutative ring. However, in this case the condition for a square matrix to be invertible is that its determinant is invertible in the ring, which in general is a much stricter requirement than being nonzero. The conditions for the existence of a left inverse or right inverse are more complicated, since a notion of rank does not exist over rings.
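A quick check of this point in plain Python (the matrices are arbitrary examples): over the ring of integers Z, whose only units are +1 and −1, a 2×2 integer matrix has an integer inverse exactly when its determinant is ±1, even though any nonzero determinant would suffice over the rationals.

```python
# Over Z, invertibility requires det(M) to be a unit of Z, i.e. +1 or -1.

def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

def integer_inverse_2x2(M):
    """Return the integer inverse if det(M) is a unit in Z, else None."""
    (a, b), (c, d) = M
    det = det2(M)
    if det not in (1, -1):
        return None  # nonzero is not enough over a ring
    # Division is exact here because det is +1 or -1.
    return [[d // det, -b // det], [-c // det, a // det]]

U = [[2, 1], [1, 1]]   # det = 1: invertible over Z
V = [[2, 0], [0, 2]]   # det = 4: invertible over Q, but not over Z
print(integer_inverse_2x2(U))  # [[1, -1], [-1, 2]]
print(integer_inverse_2x2(V))  # None
```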
Matrix inversion is the process of finding the matrix B that satisfies the prior equation for a given invertible matrix A.
Properties
Let A be a square n-by-n matrix over a field K (for example, the field R of real numbers). The following statements are equivalent:
 A is invertible.
 A is row-equivalent to the n-by-n identity matrix I_{n}.
 A is column-equivalent to the n-by-n identity matrix I_{n}.
 A has n pivot positions.
 det A ≠ 0. In general, a square matrix over a commutative ring is invertible if and only if its determinant is a unit in that ring.
 rank A = n.
 The equation Ax = 0 has only the trivial solution x = 0 (i.e., the null space of A is {0}).
 The equation Ax = b has exactly one solution for each b in K^{n}.
 The columns of A are linearly independent.
 The columns of A span K^{n} (i.e. Col A = K^{n}).
 The columns of A form a basis of K^{n}.
 The linear transformation mapping x to Ax is a bijection from K^{n} to K^{n}.
 There is an n by n matrix B such that AB = I_{n} = BA.
 The transpose A^{T} is an invertible matrix (hence the rows of A are linearly independent, span K^{n}, and form a basis of K^{n}).
 The number 0 is not an eigenvalue of A.
 The matrix A can be expressed as a finite product of elementary matrices.
Furthermore, the following properties hold for an invertible matrix A:
 (A^{–1})^{–1} = A;
 (kA)^{–1} = k^{–1}A^{–1} for nonzero scalar k;
 (A^{T})^{–1} = (A^{–1})^{T};
 For any invertible nbyn matrices A and B, (AB)^{–1} = B^{–1}A^{–1}. More generally, if A_{1},...,A_{k} are invertible nbyn matrices, then (A_{1}A_{2}⋯A_{k–1}A_{k})^{–1} = A_{k}^{–1}A_{k–1}^{–1}⋯A_{2}^{–1}A_{1}^{–1};
 det(A^{–1}) = det(A)^{–1}.
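Two of the properties above can be spot-checked numerically. The following plain-Python sketch (example matrices chosen arbitrarily) verifies (AB)^{−1} = B^{−1}A^{−1} and det(A^{−1}) = det(A)^{−1} for 2×2 matrices:

```python
# Numerical spot-check of (AB)^{-1} = B^{-1} A^{-1} and det(A^{-1}) = det(A)^{-1}.

def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 2.0], [3.0, 4.0]]   # det = -2, invertible
B = [[0.0, 1.0], [1.0, 1.0]]   # det = -1, invertible

lhs = inv2(matmul(A, B))
rhs = matmul(inv2(B), inv2(A))
close = all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
print(close)                                      # True
print(abs(det2(inv2(A)) - 1 / det2(A)) < 1e-12)   # True
```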
A matrix that is its own inverse, i.e., A = A^{−1} and A^{2} = I, is called an involution.
Density
Over the field of real numbers, the set of singular n-by-n matrices, considered as a subset of R^{n×n}, is a null set, i.e., has Lebesgue measure zero. This is true because singular matrices are the roots of the polynomial function in the entries of the matrix given by the determinant. Thus in the language of measure theory, almost all n-by-n matrices are invertible.
Furthermore, the n-by-n invertible matrices are a dense open set in the topological space of all n-by-n matrices. Equivalently, the set of singular matrices is closed and nowhere dense in the space of n-by-n matrices.
In practice, however, one may encounter non-invertible matrices. And in numerical calculations, matrices which are invertible but close to a non-invertible matrix can still be problematic; such matrices are said to be ill-conditioned.
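The following plain-Python sketch (with a made-up nearly singular matrix) shows why ill-conditioning matters: the matrix is invertible, yet a tiny change in the right-hand side b produces a wildly different solution x.

```python
# An invertible but ill-conditioned 2x2 system: det(A) is nearly zero,
# so a tiny perturbation of b changes the solution x dramatically.

def solve2(A, b):
    # Solve Ax = b for a 2x2 system via Cramer's rule.
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    return [(a22 * b[0] - a12 * b[1]) / det,
            (-a21 * b[0] + a11 * b[1]) / det]

A = [[1.0, 1.0], [1.0, 1.0001]]   # det = 1e-4: invertible, but barely
x1 = solve2(A, [2.0, 2.0])
x2 = solve2(A, [2.0, 2.0001])     # b changed by only 1e-4 in one entry
print(x1)  # roughly [2.0, 0.0]
print(x2)  # roughly [1.0, 1.0]: a large change in the solution
```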
Gaussian elimination
Gauss–Jordan elimination is an algorithm that can be used to determine whether a given matrix is invertible and to find the inverse. An alternative is the LU decomposition, which generates upper and lower triangular matrices, which are easier to invert. For special purposes, it may be convenient to invert matrices by treating mn-by-mn matrices as m-by-m matrices of n-by-n matrices, and applying one or another formula recursively (other sized matrices can be padded out with dummy rows and columns). For other purposes, a variant of Newton's method may be convenient. Newton's method is particularly useful when dealing with families of related matrices: sometimes a good starting point for refining an approximation for the new inverse can be the already obtained inverse of a previous matrix that nearly matches the current matrix. Newton's method is also useful for "touch up" corrections to the Gauss–Jordan algorithm which has been contaminated by small errors due to imperfect computer arithmetic.
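A minimal Gauss–Jordan sketch in plain Python (the function name and the use of partial pivoting are illustrative choices, not prescribed by the text): row-reduce the augmented matrix [A | I]; if A reduces to I, the right half is A^{−1}, and a vanishing pivot column signals a singular matrix.

```python
# Gauss-Jordan inversion: row-reduce [A | I]. If A becomes I, the right
# half holds A^{-1}; a (numerically) zero pivot column means singular.

def gauss_jordan_inverse(A, eps=1e-12):
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [list(map(float, A[i])) + [1.0 if i == j else 0.0 for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Partial pivoting: pick the row with the largest |entry| in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < eps:
            return None  # singular, or too close to singular to proceed
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]          # scale pivot row to 1
        for r in range(n):
            if r != col and M[r][col] != 0.0:      # clear the rest of the column
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [row[n:] for row in M]

print(gauss_jordan_inverse([[2.0, 1.0], [1.0, 1.0]]))   # approx [[1, -1], [-1, 2]]
print(gauss_jordan_inverse([[1.0, 2.0], [2.0, 4.0]]))   # None: singular
```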
Analytic solution
Writing the transpose of the matrix of cofactors, known as an adjugate matrix, can also be an efficient way to calculate the inverse of small matrices, but this recursive method is inefficient for large matrices. To determine the inverse, we calculate a matrix of cofactors:

A^{−1} = (1/|A|) C^{T},

where |A| is the determinant of A, C = (C_{ij}) is the matrix of cofactors, and C^{T} represents the matrix transpose.
For most practical applications, it is not necessary to invert a matrix to solve a system of linear equations; however, for a unique solution, it is necessary that the matrix involved be invertible. Decomposition techniques like LU decomposition are much faster than inversion, and various fast algorithms for special classes of linear systems have also been developed.
Inversion of 2×2 matrices
The cofactor equation listed above yields the following result for 2×2 matrices. Inversion of these matrices can be done as follows:

[a b; c d]^{−1} = (1/(ad − bc)) [d −b; −c a].

This is possible because 1/(ad − bc) is the reciprocal of the determinant of the matrix in question, and the same strategy can be used for other matrix sizes.
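The 2×2 formula translates directly into plain Python (the helper name and example matrix are illustrative):

```python
# Direct 2x2 inversion: A^{-1} = (1/(ad - bc)) [[d, -b], [-c, a]].

def inverse_2x2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[4.0, 7.0], [2.0, 6.0]]           # det = 10
Ainv = inverse_2x2(A)
print(Ainv)                            # [[0.6, -0.7], [-0.2, 0.4]]

# Sanity check: A * Ainv should be (numerically) the identity.
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)
```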
Inversion of 3×3 matrices
A computationally efficient 3×3 matrix inversion is given by

A^{−1} = (1/det A) C^{T},

where C^{T} is the transposed matrix of cofactors of A = [a b c; d e f; g h i], and the determinant of A can be computed by applying the rule of Sarrus as follows:

det A = aei + bfg + cdh − ceg − bdi − afh.

If the determinant is non-zero, the matrix is invertible, with the elements of the above matrix on the right side given by

C^{T} = [ (ei − fh)  (ch − bi)  (bf − ce);
          (fg − di)  (ai − cg)  (cd − af);
          (dh − eg)  (bg − ah)  (ae − bd) ].
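The cofactor recipe, with Sarrus' rule for the determinant, can be sketched in plain Python (function names and the example matrix are illustrative):

```python
# 3x3 inversion via the transposed cofactor (adjugate) matrix,
# with the determinant computed by the rule of Sarrus.

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    # Rule of Sarrus.
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

def inverse_3x3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    det = det3(M)
    if det == 0:
        raise ValueError("matrix is singular")
    # Transposed cofactor matrix, divided entrywise by det.
    return [[(e*i - f*h) / det, (c*h - b*i) / det, (b*f - c*e) / det],
            [(f*g - d*i) / det, (a*i - c*g) / det, (c*d - a*f) / det],
            [(d*h - e*g) / det, (b*g - a*h) / det, (a*e - b*d) / det]]

M = [[1.0, 2.0, 3.0], [0.0, 1.0, 4.0], [5.0, 6.0, 0.0]]   # det = 1
print(inverse_3x3(M))  # [[-24.0, 18.0, 5.0], [20.0, -15.0, -4.0], [-5.0, 4.0, 1.0]]
```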
The general 3×3 inverse can be expressed concisely in terms of the cross product and triple product. If a matrix A = [x_0 x_1 x_2] (consisting of three column vectors, x_0, x_1, and x_2) is invertible, its inverse is given by

A^{−1} = (1/det A) [ (x_1 × x_2)^{T} ; (x_2 × x_0)^{T} ; (x_0 × x_1)^{T} ],

where each x_i × x_j is a column vector and each (x_i × x_j)^{T} is a row vector of A^{−1}. Note that det A is equal to the triple product of x_0, x_1, and x_2, the volume of the parallelepiped formed by the rows or columns:

det A = x_0 · (x_1 × x_2).

The correctness of the formula can be checked by using cross- and triple-product properties and by noting that for groups, left and right inverses always coincide. Intuitively, because of the cross products, each row of A^{−1} is orthogonal to the non-corresponding two columns of A (causing the off-diagonal terms of A^{−1}A to be zero). Dividing by

det A = x_0 · (x_1 × x_2)

causes the diagonal elements of A^{−1}A to be unity. For example, the first diagonal is:

(1/det A) x_0 · (x_1 × x_2) = 1.
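The cross-product form of the 3×3 inverse can be sketched in plain Python (here c0, c1, c2 are illustrative names for the columns of A, and the example matrix is arbitrary):

```python
# 3x3 inverse from cross products: the rows of A^{-1} are (c1 x c2),
# (c2 x c0), (c0 x c1), each divided by the triple product
# det(A) = c0 . (c1 x c2), where c0, c1, c2 are the columns of A.

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def inverse_3x3_cross(A):
    c0, c1, c2 = ([A[r][j] for r in range(3)] for j in range(3))  # columns of A
    det = dot(c0, cross(c1, c2))   # triple product = det(A)
    if det == 0:
        raise ValueError("matrix is singular")
    return [[v / det for v in cross(c1, c2)],
            [v / det for v in cross(c2, c0)],
            [v / det for v in cross(c0, c1)]]

A = [[1.0, 2.0, 3.0], [0.0, 1.0, 4.0], [5.0, 6.0, 0.0]]
print(inverse_3x3_cross(A))  # [[-24.0, 18.0, 5.0], [20.0, -15.0, -4.0], [-5.0, 4.0, 1.0]]
```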
Blockwise inversion
Matrices can also be inverted blockwise by using the following analytic inversion formula:

[A B; C D]^{−1} = [ A^{−1} + A^{−1}B S^{−1}CA^{−1}, −A^{−1}B S^{−1}; −S^{−1}CA^{−1}, S^{−1} ], where S = D − CA^{−1}B,    (1)

where A, B, C and D are matrix sub-blocks of arbitrary size. (A and D must, of course, be square, so that they can be inverted. Furthermore, this is true if and only if A and D − CA^{−1}B are nonsingular.) This strategy is particularly advantageous if A is diagonal and D − CA^{−1}B (the Schur complement of A) is a small matrix, since they are the only matrices requiring inversion. This technique was reinvented several times and is due to Hans Boltz (1923), who used it for the inversion of geodetic matrices, and Tadeusz Banachiewicz (1937), who generalized it and proved its correctness.
The nullity theorem says that the nullity of A equals the nullity of the sub-block in the lower right of the inverse matrix, and that the nullity of B equals the nullity of the sub-block in the upper right of the inverse matrix.
The inversion procedure that led to Equation (1) performed matrix block operations that operated on C and D first. Instead, if A and B are operated on first, and provided D and A − BD^{−1}C are nonsingular, the result is

[A B; C D]^{−1} = [ T^{−1}, −T^{−1}BD^{−1}; −D^{−1}C T^{−1}, D^{−1} + D^{−1}C T^{−1}BD^{−1} ], where T = A − BD^{−1}C.    (2)

Equating Equations (1) and (2) leads to

(A − BD^{−1}C)^{−1} = A^{−1} + A^{−1}B(D − CA^{−1}B)^{−1}CA^{−1},    (3)

where Equation (3) is the matrix inversion lemma, which is equivalent to the binomial inverse theorem.
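With 1×1 blocks, the block formulas reduce to scalar arithmetic, which gives a quick sanity check of the matrix inversion lemma (the numeric values below are arbitrary, chosen so that a, d, and both Schur complements are nonzero):

```python
# Scalar (1x1-block) sanity check of the matrix inversion lemma:
# (a - b d^{-1} c)^{-1} = a^{-1} + a^{-1} b (d - c a^{-1} b)^{-1} c a^{-1}.

a, b, c, d = 3.0, 2.0, 1.0, 5.0

lhs = 1.0 / (a - b * (1.0 / d) * c)
rhs = 1.0 / a + (1.0 / a) * b * (1.0 / (d - c * (1.0 / a) * b)) * c * (1.0 / a)
print(abs(lhs - rhs) < 1e-12)  # True
```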
By Neumann series
If a matrix A has the property that

lim_{n→∞} (I − A)^{n} = 0,

then A is nonsingular and its inverse may be expressed by a Neumann series:

A^{−1} = Σ_{n=0}^{∞} (I − A)^{n}.

Truncating the sum results in an "approximate" inverse which may be useful as a preconditioner.
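A truncated Neumann series is easy to sketch in plain Python (the function name, number of terms, and example matrix are illustrative; the example is chosen close to the identity so that powers of I − A shrink):

```python
# Approximate inverse by a truncated Neumann series:
# A^{-1} ~= sum_{k=0}^{N} (I - A)^k, valid when (I - A)^n -> 0.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def neumann_inverse(A, terms=50):
    n = len(A)
    I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    E = [[I[i][j] - A[i][j] for j in range(n)] for i in range(n)]  # E = I - A
    total = [row[:] for row in I]   # k = 0 term: E^0 = I
    power = [row[:] for row in I]
    for _ in range(terms):
        power = matmul(power, E)    # E^k
        total = [[total[i][j] + power[i][j] for j in range(n)] for i in range(n)]
    return total

A = [[1.0, 0.2], [0.1, 0.9]]        # close to I, so the series converges
Ainv = neumann_inverse(A)
print(matmul(A, Ainv))               # approximately the identity
```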
More generally, if A is "near" the invertible matrix X in the sense that

lim_{n→∞} (I − X^{−1}A)^{n} = 0,

then A is nonsingular and its inverse is

A^{−1} = Σ_{n=0}^{∞} (X^{−1}(X − A))^{n} X^{−1}.

If it is also the case that A − X has rank 1, then this simplifies to

A^{−1} = X^{−1} − (X^{−1}(A − X)X^{−1}) / (1 + tr(X^{−1}(A − X))).
Derivative of the matrix inverse
Suppose that the invertible matrix A depends on a parameter t. Then the derivative of the inverse of A with respect to t is given by

d(A^{−1})/dt = −A^{−1} (dA/dt) A^{−1}.

To derive the above expression for the derivative of the inverse of A, one can differentiate the definition of the matrix inverse A^{−1}A = I and then solve for the derivative of the inverse of A:

(d(A^{−1})/dt) A + A^{−1} (dA/dt) = 0.

Subtracting A^{−1} (dA/dt) from both sides of the above and multiplying on the right by A^{−1} gives the correct expression for the derivative of the inverse:

d(A^{−1})/dt = −A^{−1} (dA/dt) A^{−1}.

Similarly, if ε is a small number, then

(A + εX)^{−1} ≈ A^{−1} − ε A^{−1} X A^{−1}.
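The derivative formula can be checked numerically against a finite difference in plain Python (the parameter-dependent family A(t) below is made up for the example):

```python
# Check d(A^{-1})/dt = -A^{-1} (dA/dt) A^{-1} numerically for
# A(t) = [[1+t, t], [0, 1]], an invertible family with dA/dt = [[1, 1], [0, 0]].

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def A(t):
    return [[1.0 + t, t], [0.0, 1.0]]

t, h = 0.5, 1e-6
dA = [[1.0, 1.0], [0.0, 0.0]]        # dA/dt, constant for this family

# Analytic formula: -A^{-1} (dA/dt) A^{-1}.
Ai = inv2(A(t))
analytic = [[-v for v in row] for row in matmul(matmul(Ai, dA), Ai)]

# Central finite difference of A^{-1}(t).
Ip, Im = inv2(A(t + h)), inv2(A(t - h))
numeric = [[(Ip[i][j] - Im[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

err = max(abs(analytic[i][j] - numeric[i][j]) for i in range(2) for j in range(2))
print(err < 1e-6)  # True
```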
Moore–Penrose pseudoinverse
Some of the properties of inverse matrices are shared by Moore–Penrose pseudoinverses, which can be defined for any m-by-n matrix.
Matrix inverses in realtime simulations
Matrix inversion plays a significant role in computer graphics, particularly in 3D graphics rendering and 3D simulations. Examples include screen-to-world ray casting, world-to-subspace-to-world object transformations, and physical simulations.
See also
 Binomial inverse theorem
 LU decomposition
 Matrix decomposition
 Pseudoinverse
 Moore–Penrose pseudoinverse
 Singular value decomposition
 Woodbury matrix identity