10.2 Self-Adjoint Operators and Hermitian Matrices
5 min read • July 30, 2024
Self-adjoint operators are linear operators equal to their adjoint, with real eigenvalues and orthogonal eigenspaces. They're crucial in quantum mechanics and data analysis. Their properties make them ideal for representing physical observables and analyzing complex datasets.
Hermitian matrices are the matrix representation of self-adjoint operators in finite-dimensional spaces. They share similar properties, including real eigenvalues and orthogonal eigenvectors. The spectral theorem allows for diagonalization, enabling efficient computation of matrix functions and applications in various fields.
Self-adjoint operators in inner product spaces
Definition and properties
A self-adjoint operator is a linear operator equal to its own adjoint
For a linear operator T on an inner product space V, T is self-adjoint if ⟨Tx, y⟩ = ⟨x, Ty⟩ for all x, y ∈ V
Self-adjoint operators are bounded and have real eigenvalues
The eigenspaces corresponding to distinct eigenvalues are orthogonal
If T is a self-adjoint operator on a finite-dimensional inner product space V, there exists an orthonormal basis for V consisting of eigenvectors of T
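These properties can be checked numerically. A minimal NumPy sketch, using an illustrative 3×3 Hermitian matrix as the finite-dimensional stand-in for a self-adjoint operator:

```python
import numpy as np

# Illustrative Hermitian matrix (equal to its conjugate transpose),
# representing a self-adjoint operator on C^3.
A = np.array([[2.0,    1 - 1j, 0  ],
              [1 + 1j, 3.0,    1j ],
              [0,     -1j,     1.0]])
assert np.allclose(A, A.conj().T)  # A is Hermitian

# eigh is specialized for Hermitian matrices: it returns real eigenvalues
# and an orthonormal set of eigenvectors (the columns of V).
eigvals, V = np.linalg.eigh(A)
print(eigvals)                                  # all real
print(np.allclose(V.conj().T @ V, np.eye(3)))   # columns are orthonormal
```

The columns of `V` form exactly the orthonormal eigenbasis the theorem promises.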
Algebraic properties
The set of self-adjoint operators on an inner product space forms a real vector space under the usual addition and scalar multiplication of operators
The composition of two self-adjoint operators is self-adjoint if and only if the operators commute
For self-adjoint operators S and T, ST=TS is a necessary and sufficient condition for ST to be self-adjoint
The sum of two self-adjoint operators is always self-adjoint
If S and T are self-adjoint, then S+T is also self-adjoint
Scalar multiples of self-adjoint operators are self-adjoint
If T is self-adjoint and c∈R, then cT is also self-adjoint
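The commutativity condition for products is easy to see with small real symmetric matrices (a sketch with two hypothetical operators on R²):

```python
import numpy as np

# Two real symmetric matrices (self-adjoint operators on R^2).
S = np.array([[1.0, 2.0], [2.0, 3.0]])
T = np.array([[0.0, 1.0], [1.0, 0.0]])

# S and T do not commute, so the product ST fails to be symmetric:
print(np.allclose(S @ T, T @ S))         # False
print(np.allclose(S @ T, (S @ T).T))     # False

# Any operator commutes with itself, so S @ S is symmetric,
# and sums and real scalar multiples stay self-adjoint:
print(np.allclose(S @ S, (S @ S).T))     # True
print(np.allclose(S + T, (S + T).T))     # True
print(np.allclose(2.5 * S, (2.5 * S).T)) # True
```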
Eigenvalues and eigenvectors of self-adjoint operators
Eigenvalue properties
Eigenvalues of a self-adjoint operator are always real
If λ is an eigenvalue of a self-adjoint operator T, then λ∈R
Eigenvectors corresponding to distinct eigenvalues of a self-adjoint operator are orthogonal
If v_1 and v_2 are eigenvectors of a self-adjoint operator T with distinct eigenvalues λ_1 and λ_2, then ⟨v_1, v_2⟩ = 0
The algebraic and geometric multiplicities of each eigenvalue of a self-adjoint operator are equal
For any eigenvalue λ of a self-adjoint operator T, the dimension of the eigenspace corresponding to λ equals the multiplicity of λ as a root of the characteristic polynomial of T
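A quick numerical illustration of equal algebraic and geometric multiplicities, built from a hypothetical matrix constructed to have a repeated eigenvalue:

```python
import numpy as np

# Build a symmetric matrix with eigenvalue 1 of algebraic multiplicity 2:
# rotate diag(1, 1, 4) by an orthogonal matrix Q (from a QR factorization).
D = np.diag([1.0, 1.0, 4.0])
Q, _ = np.linalg.qr(np.array([[1, 2, 0], [0, 1, 3], [1, 0, 1]], dtype=float))
A = Q @ D @ Q.T                        # symmetric, eigenvalues {1, 1, 4}

eigvals, V = np.linalg.eigh(A)
# Two orthonormal eigenvectors share the eigenvalue 1, so the eigenspace
# for lambda = 1 is 2-dimensional, matching the algebraic multiplicity.
print(np.round(eigvals, 6))
dim_eigenspace = np.sum(np.isclose(eigvals, 1.0))
print(dim_eigenspace)                  # 2
```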
Spectral properties
A self-adjoint operator on a finite-dimensional inner product space has a complete set of orthonormal eigenvectors that form a basis for the space
This set of eigenvectors is called an orthonormal eigenbasis
Any vector in the inner product space can be expressed as a linear combination of the orthonormal eigenvectors
For a vector v in an inner product space V with orthonormal eigenbasis {u_1, u_2, …, u_n}, v = ∑_{i=1}^{n} ⟨v, u_i⟩ u_i
The eigenvalues of a self-adjoint operator can be used to calculate the operator's trace and determinant
For a self-adjoint operator T with eigenvalues λ_1, λ_2, …, λ_n, tr(T) = ∑_{i=1}^{n} λ_i and det(T) = ∏_{i=1}^{n} λ_i
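Both facts are easy to verify numerically. A sketch with an illustrative 2×2 symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])    # symmetric, eigenvalues 1 and 3
eigvals, U = np.linalg.eigh(A)            # columns of U: orthonormal eigenbasis

# Expand an arbitrary vector in the eigenbasis: v = sum_i <v, u_i> u_i
v = np.array([3.0, -1.0])
coeffs = U.T @ v                          # the coefficients <v, u_i>
v_rebuilt = U @ coeffs
print(np.allclose(v, v_rebuilt))          # True

# Trace and determinant from the eigenvalues
print(np.isclose(np.trace(A), eigvals.sum()))        # True: 4 = 1 + 3
print(np.isclose(np.linalg.det(A), eigvals.prod()))  # True: 3 = 1 * 3
```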
Self-adjoint operators vs Hermitian matrices
Hermitian matrices
A matrix A is Hermitian if A=A∗, where A∗ denotes the conjugate transpose of A
Hermitian matrices are the matrix representation of self-adjoint operators on finite-dimensional inner product spaces
The eigenvalues of a Hermitian matrix are always real, and the eigenvectors corresponding to distinct eigenvalues are orthogonal
Every Hermitian matrix is unitarily diagonalizable
There exists a unitary matrix U such that U∗AU is a diagonal matrix with the eigenvalues of A on the diagonal
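Unitary diagonalization can be demonstrated directly. A sketch with an illustrative complex Hermitian matrix:

```python
import numpy as np

# An illustrative Hermitian matrix with complex off-diagonal entries
A = np.array([[1.0,    2 + 1j],
              [2 - 1j, 3.0   ]])
assert np.allclose(A, A.conj().T)

eigvals, U = np.linalg.eigh(A)      # U is unitary; its columns are eigenvectors
D = U.conj().T @ A @ U              # U* A U is diagonal

print(np.allclose(U.conj().T @ U, np.eye(2)))  # U is unitary
print(np.allclose(D, np.diag(eigvals)))        # eigenvalues on the diagonal
```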
Algebraic properties
The set of Hermitian matrices forms a real vector space under the usual matrix addition and scalar multiplication
The product of two Hermitian matrices is Hermitian if and only if the matrices commute
For Hermitian matrices A and B, AB=BA is a necessary and sufficient condition for AB to be Hermitian
The sum of two Hermitian matrices is always Hermitian
If A and B are Hermitian, then A+B is also Hermitian
Scalar multiples of Hermitian matrices are Hermitian
If A is Hermitian and c∈R, then cA is also Hermitian
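The restriction to real scalars matters: Hermitian matrices form a real vector space but not a complex one. A minimal sketch with a hypothetical 2×2 example:

```python
import numpy as np

A = np.array([[1.0, 1j], [-1j, 2.0]])   # Hermitian
assert np.allclose(A, A.conj().T)

# Real scalar multiples stay Hermitian...
print(np.allclose(3.0 * A, (3.0 * A).conj().T))   # True
# ...but complex ones do not: (iA)* = -iA* = -iA, which differs from iA.
print(np.allclose(1j * A, (1j * A).conj().T))     # False
```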
Spectral theorem for self-adjoint operators
Diagonalization of Hermitian matrices
The spectral theorem states that if T is a self-adjoint operator on a finite-dimensional inner product space V, then there exists an orthonormal basis for V consisting of eigenvectors of T, and T can be represented as a diagonal matrix with respect to this basis
To diagonalize a Hermitian matrix A, find an orthonormal basis of eigenvectors {v_1, v_2, …, v_n} and form a unitary matrix U with these eigenvectors as columns
Then, U∗AU=D, where D is a diagonal matrix with the eigenvalues of A on the diagonal
The spectral decomposition of a Hermitian matrix A is given by A=UDU∗, where U is a unitary matrix whose columns are eigenvectors of A, and D is a diagonal matrix with the eigenvalues of A on the diagonal
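The decomposition can be verified, and rewritten as a sum of rank-one projections, in a few lines (illustrative matrix):

```python
import numpy as np

A = np.array([[2.0,    1 - 1j],
              [1 + 1j, 3.0   ]])          # illustrative Hermitian matrix
eigvals, U = np.linalg.eigh(A)

# Reassemble A from its spectral decomposition A = U D U*
A_rebuilt = U @ np.diag(eigvals) @ U.conj().T
print(np.allclose(A, A_rebuilt))          # True

# Equivalently, as a sum of rank-one projections: A = sum_i lambda_i u_i u_i*
A_sum = sum(lam * np.outer(u, u.conj())
            for lam, u in zip(eigvals, U.T))
print(np.allclose(A, A_sum))              # True
```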
Applications of the spectral theorem
The spectral theorem allows for the computation of matrix functions of Hermitian matrices
If f is a function defined on the eigenvalues of a Hermitian matrix A, then f(A)=Uf(D)U∗, where f(D) is the diagonal matrix obtained by applying f to each diagonal entry of D
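For example, a matrix square root of a positive-definite Hermitian matrix follows this recipe with f = √. A minimal sketch:

```python
import numpy as np

# Matrix square root via f(A) = U f(D) U*, with f = sqrt.
A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3, both positive
eigvals, U = np.linalg.eigh(A)

sqrt_A = U @ np.diag(np.sqrt(eigvals)) @ U.conj().T
print(np.allclose(sqrt_A @ sqrt_A, A))   # True: squaring sqrt_A recovers A
```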
The spectral theorem is used in quantum mechanics to represent observables as self-adjoint operators and to calculate their expectation values and probabilities
The eigenvalues of the observable correspond to the possible measurement outcomes, and the eigenvectors represent the states in which the system is found after the measurement
The spectral theorem is also applied in signal processing and data analysis to perform principal component analysis (PCA) and singular value decomposition (SVD)
These techniques help in dimensionality reduction, feature extraction, and noise reduction by identifying the most significant eigenvectors and eigenvalues of the data covariance matrix
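A minimal PCA sketch along these lines, using synthetic data (the data shape and scaling are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D data, stretched strongly along one axis
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
X -= X.mean(axis=0)                      # center the data

# PCA: eigendecompose the (symmetric) covariance matrix
cov = X.T @ X / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

# The eigenvector with the largest eigenvalue is the first principal
# component; projecting onto it gives a 1-D reduction of the data.
pc1 = eigvecs[:, -1]
X_reduced = X @ pc1
print(eigvals)   # the larger eigenvalue captures most of the variance
```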