Einstein notation

In mathematics, especially in applications of linear algebra to physics, the Einstein notation or Einstein summation convention is a notational convention useful when dealing with coordinate formulae. It was introduced by Albert Einstein in 1916.

According to this convention, when an index variable appears twice in a single term it implies that we are summing over all of its possible values. In typical applications, the index values are 1, 2, 3 (representing the three dimensions of physical Euclidean space), or 0, 1, 2, 3 or 1, 2, 3, 4 (representing the four dimensions of space-time, or Minkowski space), but they can have any range, even (in some applications) an infinite set. Thus in three dimensions

$y = c_i x^i$

actually means

$y = c_1 x^1 + c_2 x^2 + c_3 x^3.$
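As an aside, NumPy's einsum function uses the same rule: repeated index letters in its subscript string are summed over. A minimal sketch with hypothetical component values:

    import numpy as np

    c = np.array([1.0, 2.0, 3.0])   # covector components c_i (hypothetical values)
    x = np.array([4.0, 5.0, 6.0])   # vector components x^i
    y = np.einsum('i,i->', c, x)    # the repeated index i is summed: y = c_i x^i
    assert y == np.sum(c * x)       # same result as writing the sum explicitly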


The upper indices are not exponents, but instead different axes. Thus, for example, $x^2$ should be read as "x-two", not "x squared"; it corresponds to the traditional y-axis. Abstract index notation is a way of presenting the summation convention so that it is made clear that it is independent of coordinates.

In general relativity, a common convention is that the Greek alphabet and the Roman alphabet are used to distinguish whether one is summing over 1, 2, 3 or over 0, 1, 2, 3 (usually Roman letters i, j, ... for 1, 2, 3 and Greek letters μ, ν, ... for 0, 1, 2, 3).

Einstein notation can be applied in slightly different ways. Often, each index must be repeated once in an upper (superscript) and once in a lower (subscript) position; however, the convention can be applied more generally to any repeated indices. When dealing with covariant and contravariant vectors, where the indices also indicate the type of vector, the first notation must be used; a covariant vector can only be contracted (summed) with a contravariant vector. On the other hand, when there is a fixed coordinate basis (or when not considering coordinate vectors), one can work with only subscripts; see below.

Introduction

Example of Einstein notation for a vector:

$v = v^i e_i = v^1 e_1 + v^2 e_2 + \cdots + v^n e_n.$

In Einstein notation, vector indices are superscripts (e.g. $v^i$) and covector indices are subscripts (e.g. $w_i$). The position of the index has a specific meaning. It is important, of course, not to confuse a superscript with an exponent: all the relations with superscripts and subscripts are linear, involving no power higher than the first. Here, the superscript i on the symbol v represents an integer-valued index running from 1 to n.

The virtue of Einstein notation is that it represents the invariant quantities with a simple notation.

The basic idea of Einstein notation is that a covector and a vector can form a scalar:

$y = c_1 x^1 + c_2 x^2 + \cdots + c_n x^n.$

This is typically written as an explicit sum:

$y = \sum_{i=1}^{n} c_i x^i.$

This sum is invariant under changes of basis, but the individual terms in the sum are not. This led Einstein to propose the convention that repeated indices imply the sum:

$y = c_i x^i.$

This, and any, scalar is invariant under transformations of basis. When the basis is changed, the components of a vector change by a linear transformation described by a matrix.

As for covectors, they change by the inverse matrix. This is designed to guarantee that the linear function associated with the covector, the sum above, is the same no matter what the basis is.
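A short check of this invariance, written out under the assumption that the vector components transform by a matrix $M$ (so the covector components transform by $M^{-1}$):

$\tilde{x}^j = M^j{}_i\, x^i, \qquad \tilde{c}_j = c_i\, (M^{-1})^i{}_j,$

so that

$\tilde{c}_j\, \tilde{x}^j = c_i\, (M^{-1})^i{}_j\, M^j{}_k\, x^k = c_i\, \delta^i{}_k\, x^k = c_i\, x^i.$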

Vector representations

In linear algebra, Einstein notation can be used to distinguish between vectors and covectors.

Given a vector space $V$ and its dual space $V^*$:

Vectors have lower indices on their basis elements, $e_i$, while components of vectors (i.e. coordinates of vector endpoints) have upper indices, $v^i$. So a vector $v$ with an index $i$ is expressed as

$v = v^i e_i,$

where $(e_i)$ is a basis for $V$.

Covectors have upper indices on their basis elements, $e^i$, while components of covectors have lower indices, $w_i$. So a covector $w$ with an index $i$ is expressed as

$w = w_i e^i,$

where $(e^i)$ is the dual basis for $V^*$.

Note that $e_i$ is a vector, $e^i$ is a covector, and $v^i$ and $w_i$ are scalars. The products $v^i e_i$ and $w_i e^i$ return a vector or a covector, respectively. Since basis vectors are given lower indices ($e_i$) and coordinates are labeled with upper indices ($v^i$), summation notation suggests pairing them (in the obvious way) to express the vector.

In a given basis, the coefficient of $e_i$ (which is $v^i$) is the value taken on $v$ by the covector $e^i$ of the corresponding dual basis: $v^i = e^i(v)$.

In terms of covariance and contravariance of vectors, upper indices represent components of contravariant vectors (vectors), while lower indices represent components of covariant vectors (covectors): they transform contravariantly (resp., covariantly) with respect to change of basis. In recognition of this fact, the following notation uses the same letter both for a (co)vector and its components, as in:

$v = v^i e_i, \qquad w = w_i e^i.$

Here $v^i$ means the components of the vector $v$, but it does not mean "the covector $v$". It is $w$ which is the covector, and $w_i$ are its components.

Mnemonics

In the above example, vectors are represented as n × 1 matrices (column vectors), while covectors are represented as 1 × n matrices (row covectors). The opposite convention is also used; for example, the DirectX API uses row vectors.

When using the column vector convention:
  • "Upper indices go up to down; lower indices go left to right."
  • You can stack vectors (column matrices) side-by-side:

    $\begin{bmatrix} v_1 & v_2 & \cdots & v_k \end{bmatrix}$

    Hence the lower index indicates which column you are in.
  • You can stack covectors (row matrices) top-to-bottom:

    $\begin{bmatrix} w^1 \\ w^2 \\ \vdots \\ w^k \end{bmatrix}$

    Hence the upper index indicates which row you are in.
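A small numerical sketch of the column-vector convention, with a hypothetical basis and components (NumPy's einsum again stands in for the summation convention):

    import numpy as np

    e1, e2, e3 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])
    E = np.column_stack([e1, e2, e3])     # basis vectors stacked as columns: lower index = column
    v_comp = np.array([2.0, 3.0, 5.0])    # components v^i (upper index)
    v = np.einsum('ij,j->i', E, v_comp)   # v = v^i e_i: pair each column with its component
    assert np.allclose(v, E @ v_comp)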

Superscripts and subscripts vs. only subscripts

In the presence of a non-degenerate form (an isomorphism $V \to V^*$, for instance a Riemannian metric or Minkowski metric), one can raise and lower indices.
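For example (a standard identity, assuming a metric with components $g_{ij}$ and inverse components $g^{ij}$), indices are lowered and raised as

$v_i = g_{ij}\, v^j, \qquad v^i = g^{ij}\, v_j, \qquad g^{ij}\, g_{jk} = \delta^i{}_k.$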

A basis gives such a form (via the dual basis), hence when working on $\mathbb{R}^n$ with a fixed basis, one can work with just subscripts.

However, if one changes coordinates, the way that coefficients change depends on the variance of the object, and one cannot ignore the distinction;
see covariance and contravariance of vectors.

Common operations in this notation

In Einstein notation, the usual element reference $A_{mn}$ for the $m$th row and $n$th column of a matrix $A$ becomes $A^m{}_n$. We can then write the following operations in Einstein notation.

Inner product

Given a row vector $v$ and a column vector $u$ of the same size, we can take the inner product $v_i\, u^i$, which is a scalar: it is evaluating the covector on the vector.
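A minimal NumPy sketch of this evaluation, with hypothetical components:

    import numpy as np

    v = np.array([1.0, 2.0, 3.0])     # row covector components v_i
    u = np.array([4.0, 5.0, 6.0])     # column vector components u^i
    s = np.einsum('i,i->', v, u)      # v_i u^i: the covector evaluated on the vector
    assert s == np.dot(v, u)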

Multiplication of a vector by a matrix

Given a matrix $A^i{}_j$ and a (column) vector $v^j$, the coefficients of the product $Av$ are given by $(Av)^i = A^i{}_j\, v^j$.

Similarly, multiplying by a row covector $w_i$ on the left, $wA$, is equivalent to $(wA)_j = w_i\, A^i{}_j$.

But be aware that notations like $A_{ij}$ are somewhat misleading; they are then refined to

$A^i{}_j$

to keep track of which index is the row and which is the column. In the notation $A^i{}_j$, the index i (the first index) is the row, and the index j (the second index) is the column.
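A sketch of the matrix-vector product in this index form, with hypothetical entries:

    import numpy as np

    A = np.arange(6.0).reshape(2, 3)      # A^i_j, i = row, j = column
    v = np.array([1.0, 2.0, 3.0])         # v^j
    Av = np.einsum('ij,j->i', A, v)       # (Av)^i = A^i_j v^j: j is summed, i is free
    assert np.allclose(Av, A @ v)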

Matrix multiplication

We can represent matrix multiplication as:

$C^i{}_k = A^i{}_j\, B^j{}_k.$

This expression is equivalent to the more conventional (and less compact) notation:

$C_{ik} = \sum_{j=1}^{N} A_{ij} B_{jk}.$
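The same product written with einsum, as a quick check (hypothetical matrices):

    import numpy as np

    A = np.random.rand(2, 3)              # A^i_j
    B = np.random.rand(3, 4)              # B^j_k
    C = np.einsum('ij,jk->ik', A, B)      # C^i_k = A^i_j B^j_k: the shared index j is summed
    assert np.allclose(C, A @ B)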

Trace

Given a square matrix $A^i{}_j$, summing over a common index, $A^i{}_i$, yields the trace.
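In code, the same contraction of the two indices of a square matrix (a sketch):

    import numpy as np

    A = np.random.rand(4, 4)              # a hypothetical square matrix A^i_j
    t = np.einsum('ii->', A)              # A^i_i: the repeated index is summed, giving the trace
    assert np.isclose(t, np.trace(A))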

Outer product

The outer product of the column vector u by the row vector v yields an M × N matrix A:

$\mathbf{A} = \mathbf{u}\,\mathbf{v}.$

In Einstein notation, we have:

$A^i{}_j = u^i\, v_j.$

Since i and j represent two different indices, and in this case over two different ranges M and N respectively, the indices are not eliminated by the multiplication. Both indices survive the multiplication to become the two indices of the newly created matrix A of rank 1.
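A sketch of the outer product with hypothetical vectors of different lengths (M = 3, N = 2):

    import numpy as np

    u = np.array([1.0, 2.0, 3.0])         # column vector components u^i, range M = 3
    v = np.array([4.0, 5.0])              # row covector components v_j, range N = 2
    A = np.einsum('i,j->ij', u, v)        # A^i_j = u^i v_j: no index repeats, so none is summed
    assert A.shape == (3, 2) and np.allclose(A, np.outer(u, v))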

Coefficients on tensors and related

Given a tensor field and a basis of linearly independent vector fields, the coefficients of the tensor field in that basis can be computed by evaluating the tensor on a suitable combination of the basis and dual basis; the coefficients inherit the correct indexing. We list notable examples.

Throughout, let $e_i$ ($i = 1, \dots, n$) be a basis of vector fields (a moving frame).
  • (covariant) metric tensor: $g_{ij} = g(e_i, e_j)$
  • (contravariant) metric tensor: $g^{ij} = g(e^i, e^j)$
  • Torsion tensor: $T^k{}_{ij} = \Gamma^k{}_{ij} - \Gamma^k{}_{ji} - \gamma^k{}_{ij}$ (using the $\Gamma^k{}_{ij}$ and $\gamma^k{}_{ij}$ defined below),
    which follows from the formula $T(u, v) = \nabla_u v - \nabla_v u - [u, v]$.
  • Riemann curvature tensor: $R^\rho{}_{\sigma\mu\nu} = e^\rho\big( R(e_\mu, e_\nu)\, e_\sigma \big)$

This also applies for some operations that are not tensorial, for instance:
  • Christoffel symbols: $\nabla_{e_i} e_j = \Gamma^k{}_{ij}\, e_k$,
    where $\nabla$ is the covariant derivative. Equivalently, $\Gamma^k{}_{ij} = e^k(\nabla_{e_i} e_j)$.
  • commutator coefficients: $[e_i, e_j] = \gamma^k{}_{ij}\, e_k$,
    where $[\cdot, \cdot]$ is the Lie bracket. Equivalently, $\gamma^k{}_{ij} = e^k([e_i, e_j])$.

Vector dot product

In mechanics and engineering, vectors in 3D space are often described in relation to orthogonal unit vectors i, j and k:

$\mathbf{u} = u_x \mathbf{i} + u_y \mathbf{j} + u_z \mathbf{k}.$

If the basis vectors i, j, and k are instead expressed as $\mathbf{e}_1$, $\mathbf{e}_2$, and $\mathbf{e}_3$, a vector can be expressed in terms of a summation:

$\mathbf{u} = u^1 \mathbf{e}_1 + u^2 \mathbf{e}_2 + u^3 \mathbf{e}_3 = \sum_{i=1}^{3} u^i \mathbf{e}_i.$

In Einstein notation, the summation symbol is omitted since the index i is repeated once as an upper index and once as a lower index, and we simply write

$\mathbf{u} = u^i \mathbf{e}_i.$
Using $\mathbf{e}_1$, $\mathbf{e}_2$, and $\mathbf{e}_3$ instead of i, j, and k, together with Einstein notation, we obtain a concise algebraic presentation of vector and tensor equations. For example,

$\mathbf{u} \cdot \mathbf{v} = (u^i \mathbf{e}_i) \cdot (v^j \mathbf{e}_j) = u^i v^j\, (\mathbf{e}_i \cdot \mathbf{e}_j).$

Since

$\mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij},$

where $\delta_{ij}$ is the Kronecker delta, which is equal to 1 when i = j and 0 otherwise, we find

$\mathbf{u} \cdot \mathbf{v} = u^i v^j\, \delta_{ij}.$

One can use $\delta_{ij}$ to lower indices of the vectors; namely, $u_i = \delta_{ij} u^j$ and $v_i = \delta_{ij} v^j$. Then

$\mathbf{u} \cdot \mathbf{v} = u^i v_i = u_i v^i.$

Note that, despite $u^i = u_i$ for any fixed i, it is incorrect to write

$\mathbf{u} \cdot \mathbf{v} = u^i v^i,$

since on the right-hand side the index i is repeated both times as an upper index, and so there is no summation over i according to the Einstein convention. Rather, one should explicitly write the summation:

$\mathbf{u} \cdot \mathbf{v} = \sum_{i=1}^{3} u^i v^i.$
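The same computation carried out numerically, with the Kronecker delta written out as an explicit array (hypothetical components):

    import numpy as np

    u = np.array([1.0, 2.0, 3.0])              # u^i
    v = np.array([4.0, 5.0, 6.0])              # v^j
    delta = np.eye(3)                          # Kronecker delta δ_ij
    dot1 = np.einsum('i,j,ij->', u, v, delta)  # u^i v^j δ_ij
    u_low = np.einsum('ij,j->i', delta, u)     # u_i = δ_ij u^j (index lowering)
    dot2 = np.einsum('i,i->', u_low, v)        # u_i v^i
    assert np.isclose(dot1, dot2) and np.isclose(dot1, np.dot(u, v))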

Vector cross product

For the cross product,

$\mathbf{u} \times \mathbf{v} = \varepsilon^i{}_{jk}\, u^j v^k\, \mathbf{e}_i,$

where $\mathbf{u} = u^j \mathbf{e}_j$ and $\mathbf{v} = v^k \mathbf{e}_k$, with the Levi-Civita symbol defined by:

$\varepsilon_{ijk} = \begin{cases} +1 & \text{if } (i,j,k) \text{ is } (1,2,3),\ (2,3,1) \text{ or } (3,1,2), \\ -1 & \text{if } (i,j,k) \text{ is } (3,2,1),\ (1,3,2) \text{ or } (2,1,3), \\ 0 & \text{otherwise (a repeated index).} \end{cases}$

One then recovers

$\mathbf{u} \times \mathbf{v} = (u^2 v^3 - u^3 v^2)\,\mathbf{e}_1 + (u^3 v^1 - u^1 v^3)\,\mathbf{e}_2 + (u^1 v^2 - u^2 v^1)\,\mathbf{e}_3$

by expanding the sums.
In other words, if $\mathbf{w} = \mathbf{u} \times \mathbf{v}$, then $w^i = \varepsilon^i{}_{jk}\, u^j v^k$, so that $\mathbf{w} = w^i \mathbf{e}_i$.
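A sketch of the same formula in code, building the Levi-Civita symbol as a 3 × 3 × 3 array (indices shifted to 0, 1, 2):

    import numpy as np

    eps = np.zeros((3, 3, 3))                  # Levi-Civita symbol ε_ijk
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[2, 1, 0] = eps[0, 2, 1] = eps[1, 0, 2] = -1.0

    u = np.array([1.0, 2.0, 3.0])              # u^j (hypothetical values)
    v = np.array([4.0, 5.0, 6.0])              # v^k
    w = np.einsum('ijk,j,k->i', eps, u, v)     # w^i = ε^i_jk u^j v^k
    assert np.allclose(w, np.cross(u, v))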

Abstract definitions

In the traditional usage, one has in mind a vector space $V$ with finite dimension n, and a specific basis of $V$. We can write the basis vectors as $e_1, e_2, \dots, e_n$. Then if $v$ is a vector in $V$, it has coordinates $v^i$ relative to this basis.

The basic rule is:

$v = v^i e_i.$

In this expression, it is assumed that the term on the right side is summed as i goes from 1 to n, because the index i does not appear on both sides of the expression. (Or, using Einstein's convention, because the index i appears twice.)

An index that is summed over is a summation index, in this case i. It is also known as a dummy index since the result is not dependent on it; thus we could also write, for example:

$v = v^j e_j.$
An index that is not summed over is a free index and should be found in each term of the equation or formula if it appears in any term. Compare dummy indices and free indices with free variables and bound variables.

The value of the Einstein convention is that it applies to other vector spaces built from $V$ using the tensor product and duality. For example, $V \otimes V$, the tensor product of $V$ with itself, has a basis consisting of tensors of the form $e_{ij} = e_i \otimes e_j$. Any tensor $T$ in $V \otimes V$ can be written as:

$T = T^{ij} e_{ij}.$

$V^*$, the dual of $V$, has a basis $e^1, e^2, \dots, e^n$ which obeys the rule

$e^i(e_j) = \delta^i_j.$

Here δ is the Kronecker delta, so $\delta^i_j$ is 1 if i = j and 0 otherwise.

As a linear map can be regarded as an element of $V \otimes V^*$, written $A = A^i{}_j\, e_i \otimes e^j$,
the row-column coordinates on a matrix correspond to the upper-lower indices on the tensor product.

Examples

Einstein summation is clarified with the help of a few simple examples. Consider four-dimensional spacetime, where indices run from 0 to 3:

$a^\mu b_\mu = a^0 b_0 + a^1 b_1 + a^2 b_2 + a^3 b_3,$

$T^{\mu\nu}{}_\mu = T^{0\nu}{}_0 + T^{1\nu}{}_1 + T^{2\nu}{}_2 + T^{3\nu}{}_3.$

The above example is one of contraction, a common tensor operation. The tensor $T^{\mu\nu}{}_\sigma$ becomes a new tensor by summing over the first upper index and the lower index. Typically the resulting tensor is renamed with the contracted indices removed:

$T^\nu = T^{\mu\nu}{}_\mu.$
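A numerical sketch of this contraction for a hypothetical tensor with random components:

    import numpy as np

    T = np.random.rand(4, 4, 4)              # components T^{mu nu}_sigma, indices 0..3
    T_contracted = np.einsum('aba->b', T)    # T^nu = T^{mu nu}_mu: first and third slots contracted
    assert T_contracted.shape == (4,)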


For a familiar example, consider the dot product of two vectors a and b. The dot product is defined simply as summation over the indices of a and b:

$\mathbf{a} \cdot \mathbf{b} = a^i b_i = a^1 b_1 + a^2 b_2 + a^3 b_3,$

which is our familiar formula for the vector dot product. Remember that it is sometimes necessary to change the components of a in order to lower its index; however, this is not necessary in Euclidean space, or any space with a metric equal to its inverse metric (e.g., flat spacetime).

See also

  • Abstract index notation
  • Bra-ket notation
  • Penrose graphical notation
  • Kronecker delta
  • Levi-Civita symbol