Orthonormality
In linear algebra, two vectors in an inner product space are orthonormal if they are orthogonal and both of unit length. A set of vectors forms an orthonormal set if all vectors in the set are mutually orthogonal and all of unit length. An orthonormal set which forms a basis is called an orthonormal basis.

Intuitive overview

The construction of orthogonality of vectors is motivated by a desire to extend the intuitive notion of perpendicular vectors to higher-dimensional spaces. In the Cartesian plane, two vectors are said to be perpendicular if the angle between them is 90° (i.e. if they form a right angle). This definition can be formalized in Cartesian space by defining the dot product and specifying that two vectors in the plane are orthogonal if their dot product is zero.

Similarly, the construction of the norm of a vector is motivated by a desire to extend the intuitive notion of the length of a vector to higher-dimensional spaces. In Cartesian space, the norm of a vector is the square root of the vector dotted with itself. That is,

$$\|\mathbf{x}\| = \sqrt{\mathbf{x} \cdot \mathbf{x}}.$$

Many important results in linear algebra deal with collections of two or more orthogonal vectors. But often, it is easier to deal with vectors of unit length. That is, it often simplifies things to consider only vectors whose norm equals 1. The notion of restricting orthogonal pairs of vectors to only those of unit length is important enough to be given a special name. Two vectors which are orthogonal and of length 1 are said to be orthonormal.

Simple example

What does a pair of orthonormal vectors in 2-D Euclidean space look like?

Let u = (x1, y1) and v = (x2, y2).
Consider the restrictions on x1, x2, y1, y2 required to make u and v form an orthonormal pair.
  • From the orthogonality restriction, u · v = 0.
  • From the unit length restriction on u, ||u|| = 1.
  • From the unit length restriction on v, ||v|| = 1.


Expanding these terms gives 3 equations:

$$x_1 x_2 + y_1 y_2 = 0 \qquad (1)$$
$$\sqrt{x_1^2 + y_1^2} = 1 \qquad (2)$$
$$\sqrt{x_2^2 + y_2^2} = 1 \qquad (3)$$

Converting from Cartesian to polar coordinates, and considering Equations (2) and (3), immediately gives the result r1 = r2 = 1. In other words, requiring the vectors be of unit length restricts the vectors to lie on the unit circle.

After substitution, Equation (1) becomes $\cos\theta_1 \cos\theta_2 + \sin\theta_1 \sin\theta_2 = 0$. Rearranging gives $\tan\theta_1 = -\cot\theta_2$. Using a trigonometric identity to convert the cotangent term gives

$$\tan(\theta_1) = \tan\!\left(\theta_2 + \tfrac{\pi}{2}\right) \quad\Longrightarrow\quad \theta_1 = \theta_2 + \tfrac{\pi}{2}.$$

It is clear that in the plane, orthonormal vectors are simply radii of the unit circle whose difference in angles equals 90°.
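
As a quick numerical check of this conclusion (a sketch using NumPy, which the original article does not include), two radii of the unit circle whose angles differ by 90° can be verified to have zero dot product and unit norm:

```python
import numpy as np

# Pick an arbitrary angle; any value of theta works.
theta = 0.7

# Two radii of the unit circle whose angles differ by 90 degrees.
u = np.array([np.cos(theta), np.sin(theta)])
v = np.array([np.cos(theta + np.pi / 2), np.sin(theta + np.pi / 2)])

print(np.dot(u, v))          # ~0: orthogonal
print(np.linalg.norm(u))     # 1.0: unit length
print(np.linalg.norm(v))     # 1.0: unit length
```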

Definition

Let $\mathcal{V}$ be an inner-product space. A set of vectors
$$\{u_1, u_2, \ldots, u_n, \ldots\}$$
is called orthonormal if and only if
$$\langle u_i, u_j \rangle = \delta_{ij},$$
where $\delta_{ij}$ is the Kronecker delta and $\langle \cdot\,, \cdot \rangle$ is the inner product defined over $\mathcal{V}$.
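
The Kronecker-delta condition can be checked numerically: the Gram matrix of an orthonormal set (the matrix of all pairwise inner products) must be the identity. Below is a minimal sketch in NumPy, assuming the standard dot product as the inner product; the helper name is_orthonormal is purely illustrative:

```python
import numpy as np

def is_orthonormal(vectors, tol=1e-10):
    """Return True if the rows of `vectors` satisfy <u_i, u_j> = delta_ij."""
    V = np.asarray(vectors, dtype=float)
    gram = V @ V.T                      # pairwise inner products
    return np.allclose(gram, np.eye(len(V)), atol=tol)

print(is_orthonormal([[1, 0], [0, 1]]))                          # True
print(is_orthonormal([[1, 1], [1, -1]]))                         # False (not unit length)
print(is_orthonormal(np.array([[1, 1], [1, -1]]) / np.sqrt(2)))  # True
```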

Significance

Orthonormal sets are not especially significant on their own. However, they display certain features that make them fundamental in exploring the notion of diagonalizability of certain operators on vector spaces.

Properties

Orthonormal sets have certain very appealing properties, which make them particularly easy to work with.
  • Theorem. If {e1, e2,...,en} is an orthonormal list of vectors, then, for all scalars a1, a2,...,an,
$$\|a_1 e_1 + a_2 e_2 + \cdots + a_n e_n\|^2 = |a_1|^2 + |a_2|^2 + \cdots + |a_n|^2.$$

  • Theorem. Every orthonormal list of vectors is linearly independent.
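
Both theorems are easy to observe numerically. The following sketch (using NumPy, and not part of the original article) checks the norm identity and the linear independence for a small orthonormal list in R^3:

```python
import numpy as np

# An orthonormal list in R^3 (here, two of the standard basis vectors).
e = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
a = np.array([3.0, -4.0])            # arbitrary coefficients

combo = a @ e                        # a1*e1 + a2*e2
print(np.linalg.norm(combo)**2)      # 25.0
print(np.sum(np.abs(a)**2))          # 25.0 -> the two agree

# Linear independence: the matrix of orthonormal rows has full row rank.
print(np.linalg.matrix_rank(e) == len(e))   # True
```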

Existence

  • Gram-Schmidt theorem. If {v1, v2,...,vn} is a linearly independent list of vectors in an inner-product space $\mathcal{V}$, then there exists an orthonormal list {e1, e2,...,en} of vectors in $\mathcal{V}$ such that span(e1, e2,...,en) = span(v1, v2,...,vn).


Proof of the Gram-Schmidt theorem is constructive, and discussed at length elsewhere. The Gram-Schmidt theorem, together with the axiom of choice, guarantees that every inner-product space admits an orthonormal basis. This is possibly the most significant use of orthonormality, as this fact permits operators on inner-product spaces to be discussed in terms of their action on the space's orthonormal basis vectors. What results is a deep relationship between the diagonalizability of an operator and how it acts on the orthonormal basis vectors. This relationship is characterized by the Spectral Theorem.
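
For concreteness, here is a minimal sketch of the classical Gram-Schmidt procedure in NumPy, assuming the dot product as the inner product. The function name gram_schmidt is illustrative; this version is meant to show the idea rather than be numerically robust (practical code would use a QR factorization or modified Gram-Schmidt):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors (rows)."""
    ortho = []
    for v in np.asarray(vectors, dtype=float):
        # Subtract the projection of v onto each orthonormal vector found so far.
        for e in ortho:
            v = v - np.dot(v, e) * e
        ortho.append(v / np.linalg.norm(v))
    return np.array(ortho)

V = np.array([[3.0, 1.0], [2.0, 2.0]])
E = gram_schmidt(V)
print(E @ E.T)   # ~identity: the rows of E are orthonormal
```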

Standard basis

The standard basis for the coordinate space Fn is
{e1, e2,...,en}   where   e1 = (1, 0, ..., 0)
                          e2 = (0, 1, ..., 0)
                          ...
                          en = (0, 0, ..., 1)


Any two vectors ei, ej where i≠j are orthogonal, and all vectors are clearly of unit length. So {e1, e2,...,en} forms an orthonormal basis.
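
In NumPy (a sketch, not part of the original text), the standard basis of R^n is simply the rows of the identity matrix, and its Gram matrix is again the identity:

```python
import numpy as np

n = 4
E = np.eye(n)                             # rows are e1, ..., en
print(np.allclose(E @ E.T, np.eye(n)))    # True: the standard basis is orthonormal
```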

Real-valued functions

When referring to real-valued functions, usually the L² inner product is assumed unless otherwise stated. Two functions φm and φn are orthonormal over the interval [a, b] if

$$\langle \phi_m, \phi_n \rangle = \int_a^b \phi_m(x)\,\phi_n(x)\,dx = \delta_{mn}.$$
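
This integral condition can be approximated numerically. A sketch using scipy.integrate.quad, with the interval [0, 1] and the normalized functions φ1(x) = √2 sin(πx) and φ2(x) = √2 sin(2πx) chosen purely for illustration:

```python
import numpy as np
from scipy.integrate import quad

phi1 = lambda x: np.sqrt(2) * np.sin(np.pi * x)
phi2 = lambda x: np.sqrt(2) * np.sin(2 * np.pi * x)

# <phi1, phi2> should be 0; <phi1, phi1> and <phi2, phi2> should be 1.
print(quad(lambda x: phi1(x) * phi2(x), 0, 1)[0])   # ~0
print(quad(lambda x: phi1(x) * phi1(x), 0, 1)[0])   # ~1
print(quad(lambda x: phi2(x) * phi2(x), 0, 1)[0])   # ~1
```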

Fourier series

The Fourier series is a method of expressing a periodic function in terms of sinusoidal basis functions.
Taking C[-π,π] to be the space of all real-valued functions continuous on the interval [-π,π] and taking the inner product to be
$$\langle f, g \rangle = \int_{-\pi}^{\pi} f(x)\,g(x)\,dx,$$
it can be shown that
$$\left\{ \frac{1}{\sqrt{2\pi}},\ \frac{\sin(x)}{\sqrt{\pi}},\ \frac{\sin(2x)}{\sqrt{\pi}},\ \ldots,\ \frac{\sin(nx)}{\sqrt{\pi}},\ \frac{\cos(x)}{\sqrt{\pi}},\ \frac{\cos(2x)}{\sqrt{\pi}},\ \ldots,\ \frac{\cos(nx)}{\sqrt{\pi}} \right\}$$
forms an orthonormal set.

However, this is of little consequence, because C[-π,π] is infinite-dimensional and a finite set of vectors cannot span it. But removing the restriction that n be finite makes the set dense in C[-π,π] and therefore an orthonormal basis of C[-π,π].
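
As a numerical illustration (a sketch using SciPy, not from the original article), the Gram matrix of the first few of these functions under the stated inner product on [-π, π] is approximately the identity:

```python
import numpy as np
from scipy.integrate import quad

# First few elements of the orthonormal set on [-pi, pi].
funcs = [lambda x: 1 / np.sqrt(2 * np.pi)]
for k in range(1, 3):
    funcs.append(lambda x, k=k: np.sin(k * x) / np.sqrt(np.pi))
    funcs.append(lambda x, k=k: np.cos(k * x) / np.sqrt(np.pi))

inner = lambda f, g: quad(lambda x: f(x) * g(x), -np.pi, np.pi)[0]
gram = np.array([[inner(f, g) for g in funcs] for f in funcs])
print(np.allclose(gram, np.eye(len(funcs)), atol=1e-8))   # True
```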