
In mathematics, particularly linear algebra and functional analysis, the spectral theorem is any of a number of results about linear operators or about matrices. In broad terms the spectral theorem provides conditions under which an operator or a matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This concept of diagonalization is relatively straightforward for operators on finite-dimensional spaces, but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modelled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective.

Examples of operators to which the spectral theorem applies are self-adjoint operators or more generally normal operators on Hilbert spaces.

The spectral theorem also provides a canonical decomposition, called the spectral decomposition, eigenvalue decomposition, or eigendecomposition, of the underlying vector space on which the operator acts.

Augustin-Louis Cauchy proved the spectral theorem for self-adjoint matrices, i.e., that every real symmetric matrix is diagonalizable. The spectral theorem as generalized by John von Neumann is today the most important result of operator theory. In addition, Cauchy was the first to be systematic about determinants.[1][2]

In this article we consider mainly the simplest kind of spectral theorem, that for a self-adjoint operator on a Hilbert space. However, as noted above, the spectral theorem also holds for normal operators on a Hilbert space.

Finite-dimensional case

Hermitian maps and Hermitian matrices

We begin by considering a Hermitian matrix on Cn or Rn. More generally we consider a Hermitian map A on a finite-dimensional real or complex inner product space V endowed with a positive definite Hermitian inner product. The Hermitian condition means

 (\forall x,y\in V): \langle A x ,\, y \rangle =  \langle x ,\, A y \rangle .

An equivalent condition is that A* = A, where A* is the Hermitian conjugate of A. In the case that A is identified with a Hermitian matrix, the matrix of A* can be identified with its conjugate transpose. If A is a real matrix, this is equivalent to AT = A (that is, A is a symmetric matrix).
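The two conditions can be checked against each other numerically. Below is a minimal sketch using NumPy; the random complex matrix and test vectors are illustrative choices, not taken from the text.

```python
import numpy as np

# Build a Hermitian matrix by construction: (B + B*) satisfies A* = A.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = B + B.conj().T

# Test vectors for the inner-product identity <Ax, y> = <x, Ay>.
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# np.vdot conjugates its first argument, matching <u, v> = u* v.
lhs = np.vdot(A @ x, y)
rhs = np.vdot(x, A @ y)

print(np.allclose(A, A.conj().T))  # A equals its conjugate transpose
print(np.allclose(lhs, rhs))       # the Hermitian condition holds
```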

This condition easily implies that all eigenvalues of a Hermitian map are real: it is enough to apply it to the case when x = y is an eigenvector. (Recall that an eigenvector of a linear map A is a (non-zero) vector x such that Ax = λx for some scalar λ. The value λ is the corresponding eigenvalue. Moreover, the eigenvalues are solutions to the characteristic polynomial.)

Theorem. There exists an orthonormal basis of V consisting of eigenvectors of A. Each eigenvalue is real.

We provide a sketch of a proof for the case where the underlying field of scalars is the complex numbers.

By the fundamental theorem of algebra, applied to the characteristic polynomial of A, there is at least one eigenvalue λ1 and eigenvector e1. Then since

\lambda_1 \langle e_1, e_1 \rangle = \langle A (e_1), e_1 \rangle = \langle e_1, A(e_1) \rangle = \bar\lambda_1 \langle e_1, e_1 \rangle

we find that λ1 is real. Now consider the space K = span{e1}⊥, the orthogonal complement of e1. By Hermiticity, K is an invariant subspace of A. Applying the same argument to K shows that A has an eigenvector e2 ∈ K. Finite induction then finishes the proof.

The spectral theorem holds also for symmetric maps on finite-dimensional real inner product spaces, but the existence of an eigenvector does not follow immediately from the fundamental theorem of algebra. The easiest way to prove it is probably to consider A as a Hermitian matrix and use the fact that all eigenvalues of a Hermitian matrix are real.

If one chooses the eigenvectors of A as an orthonormal basis, the matrix representation of A in this basis is diagonal. Equivalently, A can be written as a linear combination of pairwise orthogonal projections, called its spectral decomposition. Let

 V_\lambda = \{\,v \in V: A v = \lambda v\,\}

be the eigenspace corresponding to an eigenvalue λ. Note that the definition does not depend on any choice of specific eigenvectors. V is the orthogonal direct sum of the spaces Vλ, where the index ranges over the eigenvalues. Let Pλ be the orthogonal projection onto Vλ, and let λ1, ..., λm be the eigenvalues of A; the spectral decomposition of A can then be written as:

A =\lambda_1 P_{\lambda_1} +\cdots+\lambda_m P_{\lambda_m}. \,

The spectral decomposition is a special case of both the Schur decomposition and the singular value decomposition.
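The decomposition A = Σ λi Pλi can be verified numerically. The following sketch uses NumPy's eigh, which returns an orthonormal eigenbasis of a symmetric matrix; the matrix is an illustrative choice with distinct eigenvalues (for repeated eigenvalues, the rank-one projections below would be grouped by eigenspace).

```python
import numpy as np

# An illustrative real symmetric matrix.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

# Columns of Q form an orthonormal basis of eigenvectors.
eigvals, Q = np.linalg.eigh(A)

# Orthogonal projection onto each (one-dimensional) eigenspace.
projections = [np.outer(Q[:, i], Q[:, i]) for i in range(len(eigvals))]

# Reconstruct A as a linear combination of pairwise orthogonal projections.
A_rebuilt = sum(lam * P for lam, P in zip(eigvals, projections))

print(np.allclose(A, A_rebuilt))        # spectral decomposition recovers A
print(np.allclose(Q.T @ Q, np.eye(3)))  # the eigenbasis is orthonormal
```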

Normal matrices

Main article: Normal matrix

The spectral theorem extends to a more general class of matrices. Let A be an operator on a finite-dimensional inner product space. A is said to be normal if A* A = A A*. One can show that A is normal if and only if it is unitarily diagonalizable: by the Schur decomposition, we have A = U T U*, where U is unitary and T is upper-triangular. Since A is normal, so is T = U* A U, i.e., T T* = T* T. A normal upper-triangular matrix must be diagonal, so T is diagonal. The converse is obvious.

In other words, A is normal if and only if there exists a unitary matrix U such that

A=U D U^* \;

where D is a diagonal matrix. Then, the entries of the diagonal of D are the eigenvalues of A. The column vectors of U are the eigenvectors of A and they are orthonormal. Unlike the Hermitian case, the entries of D need not be real.
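A small numerical sketch: the rotation matrix below is a standard illustrative example of a normal matrix that is not Hermitian, so its diagonalization is unitary but its eigenvalues are not real.

```python
import numpy as np

# A real rotation matrix: normal (A A* = A* A) but not symmetric.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

print(np.allclose(A @ A.conj().T, A.conj().T @ A))  # A is normal

# Its eigenvalues are +i and -i: complex, not real.
eigvals, U = np.linalg.eig(A)
D = np.diag(eigvals)

# For a normal matrix with distinct eigenvalues, the unit eigenvectors
# returned by eig form a unitary matrix, and A = U D U*.
print(np.allclose(U @ U.conj().T, np.eye(2)))  # U is unitary
print(np.allclose(A, U @ D @ U.conj().T))      # A = U D U*
```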

Compact self-adjoint operators

In Hilbert spaces in general, the statement of the spectral theorem for compact self-adjoint operators is virtually the same as in the finite-dimensional case.

Theorem. Suppose A is a compact self-adjoint operator on a Hilbert space V. There is an orthonormal basis of V consisting of eigenvectors of A. Each eigenvalue is real.

As for Hermitian matrices, the key point is to prove the existence of at least one nonzero eigenvector. To prove this, we cannot rely on determinants to show existence of eigenvalues, but instead one can use a maximization argument analogous to the variational characterization of eigenvalues. The above spectral theorem holds for real or complex Hilbert spaces.
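The maximization idea has a simple finite-dimensional analogue: for a symmetric matrix, the eigenvalue of largest absolute value is the maximum of |⟨Ax, x⟩| over unit vectors, and power iteration converges to a corresponding eigenvector. The sketch below (matrix and iteration count are illustrative choices) mirrors that variational characterization numerically.

```python
import numpy as np

# An illustrative symmetric matrix with a dominant eigenvalue.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Power iteration: repeatedly apply A and renormalize. The iterates
# converge to an eigenvector for the eigenvalue of largest magnitude.
x = np.array([1.0, 0.0])
for _ in range(100):
    x = A @ x
    x /= np.linalg.norm(x)

# The Rayleigh quotient <Ax, x> at the limit is the top eigenvalue.
rayleigh = x @ A @ x
print(np.allclose(A @ x, rayleigh * x))  # x is (numerically) an eigenvector
```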

If the compactness assumption is removed, it is no longer true that every self-adjoint operator has eigenvectors.

Bounded self-adjoint operators

The next generalization we consider is that of bounded self-adjoint operators on a Hilbert space. Such operators may have no eigenvalues: for instance let A be the operator of multiplication by t on L2[0, 1], that is

 [A \varphi](t) = t \varphi(t). \;
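A finite-dimensional illustration of why this operator has no eigenvectors: discretizing multiplication by t on n sample points gives the diagonal matrix diag(t1, ..., tn), whose eigenvalues fill out [0, 1] as n grows but whose eigenvectors are single-point "spikes", the discrete shadows of delta functions, which are not elements of L2[0, 1]. This sketch is only a heuristic, not part of the theorem.

```python
import numpy as np

# Discretize multiplication by t on [0, 1] at n sample points.
n = 5
t = np.linspace(0.0, 1.0, n)
A_n = np.diag(t)  # discretized multiplication operator

eigvals, vecs = np.linalg.eigh(A_n)
print(np.allclose(eigvals, t))               # eigenvalues = sample points
print(np.allclose(np.abs(vecs), np.eye(n)))  # eigenvectors are spikes e_i
```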

Theorem:[3] Let A be a bounded self-adjoint operator on a Hilbert space H. Then there is a measure space (X, Σ, μ), a real-valued essentially bounded measurable function f on X, and a unitary operator U : H → L2μ(X) such that

 U^* T U = A \;

where T is the multiplication operator:

 [T \varphi](x) = f(x) \varphi(x). \;

and \|T\| = \|f\|_\infty

This is the beginning of the vast research area of functional analysis called operator theory; see also the spectral measure.

There is also an analogous spectral theorem for bounded normal operators on Hilbert spaces. The only difference in the conclusion is that now f may be complex-valued.

An alternative formulation of the spectral theorem expresses the operator A as an integral of the coordinate function over the operator's spectrum with respect to a projection-valued measure.

 A = \int_{\sigma(A)} \lambda \, d E_{\lambda}

When the normal operator in question is compact, this version of the spectral theorem reduces to the finite-dimensional spectral theorem above, except that the operator is expressed as a linear combination of possibly infinitely many projections.

General self-adjoint operators

Many important linear operators which occur in analysis, such as differential operators, are unbounded. There is also a spectral theorem for self-adjoint operators that applies in these cases. To give an example, any constant coefficient differential operator is unitarily equivalent to a multiplication operator. Indeed the unitary operator that implements this equivalence is the Fourier transform; the multiplication operator is a type of Fourier multiplier.
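This unitary equivalence can be seen numerically on a periodic grid, where the discrete Fourier transform conjugates the derivative d/dx into multiplication by ik, a Fourier multiplier. Grid size and test function below are illustrative choices.

```python
import numpy as np

# Periodic grid on [0, 2*pi) and a smooth test function.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = np.sin(x)

# Integer wavenumbers 0, 1, ..., n/2-1, -n/2, ..., -1.
k = np.fft.fftfreq(n, d=1.0 / n)

# Differentiate by conjugating with the FFT: d/dx becomes the
# multiplication operator "multiply by i*k" in Fourier space.
df = np.fft.ifft(1j * k * np.fft.fft(f)).real

print(np.allclose(df, np.cos(x)))  # derivative of sin is cos
```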

In general, the spectral theorem for self-adjoint operators may take several equivalent forms.

Spectral theorem in the form of a multiplication operator. For each self-adjoint operator T acting on a Hilbert space H, there exists a unitary operator mapping H isometrically onto a space L2(M, μ) in which the operator T is represented as a multiplication operator.

The Hilbert space H on which a self-adjoint operator T acts may be decomposed into a direct sum of Hilbert spaces Hi in such a way that T, restricted to each Hi, has simple spectrum. Such a decomposition is unique up to unitary equivalence and is called an ordered spectral representation.


References

  1. ^ Cauchy and the spectral theory of matrices by Thomas Hawkins
  2. ^ A Short History of Operator Theory by Evans M. Harrell II
  3. ^ Hall, B.C. (2013), Quantum Theory for Mathematicians, Springer, p. 147 

Original courtesy of Wikipedia: http://en.wikipedia.org/wiki/Spectral_theorem — Please support Wikipedia.