# Eigendecomposition of a matrix

In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.

## Fundamental theory of matrix eigenvectors and eigenvalues

A (nonzero) vector v of dimension N is an eigenvector of a square N × N matrix A if it satisfies a linear equation of the form

$$\mathbf{A}\mathbf{v} = \lambda\mathbf{v}$$

for some scalar λ. Then λ is called the eigenvalue corresponding to v. Geometrically speaking, the eigenvectors of A are the vectors that A merely elongates or shrinks, and the amount that they elongate/shrink by is the eigenvalue. The above equation is called the eigenvalue equation or the eigenvalue problem.

The eigenvalue equation can be rewritten as (A − λI)v = 0, which has a nonzero solution v exactly when the determinant of A − λI vanishes. This yields an equation for the eigenvalues

$$p\left(\lambda\right) = \det\left(\mathbf{A} - \lambda\mathbf{I}\right) = 0.$$

We call p(λ) the characteristic polynomial, and the equation, called the characteristic equation, is an Nth-order polynomial equation in the unknown λ. This equation will have Nλ distinct solutions, where 1 ≤ Nλ ≤ N. The set of solutions, that is, the eigenvalues, is called the spectrum of A.[1][2][3]

If the field of scalars is algebraically closed, then we can factor p as

$$p\left(\lambda\right) = \left(\lambda - \lambda_1\right)^{n_1}\left(\lambda - \lambda_2\right)^{n_2} \cdots \left(\lambda - \lambda_{N_\lambda}\right)^{n_{N_\lambda}} = 0.$$

The integer ni is termed the algebraic multiplicity of eigenvalue λi. The algebraic multiplicities sum to N:

$$\sum_{i=1}^{N_\lambda} n_i = N.$$

For each eigenvalue λi, we have a specific eigenvalue equation

$$\left(\mathbf{A} - \lambda_i\mathbf{I}\right)\mathbf{v} = 0.$$

There will be 1 ≤ mi ≤ ni linearly independent solutions to each eigenvalue equation. The linear combinations of the mi solutions (except the one which gives the zero vector) are the eigenvectors associated with the eigenvalue λi. The integer mi is termed the geometric multiplicity of λi. It is important to keep in mind that the algebraic multiplicity ni and geometric multiplicity mi may or may not be equal, but we always have mi ≤ ni. The simplest case is of course when mi = ni = 1. The total number of linearly independent eigenvectors, Nv, can be calculated by summing the geometric multiplicities:

$$\sum_{i=1}^{N_\lambda} m_i = N_{\mathbf{v}}.$$

The eigenvectors can be indexed by eigenvalues, using a double index, with vij being the jth eigenvector for the ith eigenvalue. The eigenvectors can also be indexed using the simpler notation of a single index vk, with k = 1, 2, ..., Nv.
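A short Python/NumPy sketch can make the two multiplicities concrete. The example matrix is an assumption chosen so that λ = 2 has algebraic multiplicity 2 but geometric multiplicity 1; `np.poly` returns the characteristic-polynomial coefficients of a square array.

```python
import numpy as np

# Assumed example: lambda = 2 has algebraic multiplicity 2 but
# geometric multiplicity 1 (the matrix is defective).
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

coeffs = np.poly(A)               # det(A - lambda*I): lambda^2 - 4*lambda + 4
roots = np.roots(coeffs)          # roots of the characteristic polynomial
eigvals = np.linalg.eigvals(A)    # eigenvalues computed directly

# Geometric multiplicity = dimension of the nullspace of A - 2I.
geometric = A.shape[0] - np.linalg.matrix_rank(A - 2.0 * np.eye(2))
print(sorted(roots.real), sorted(eigvals.real), geometric)
```

Both routes recover the double root λ = 2, while the rank computation shows only one independent eigenvector.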

## Eigendecomposition of a matrix

Let A be a square n × n matrix with n linearly independent eigenvectors qi (where i = 1, ..., n). Then A can be factorized as

$$\mathbf{A} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}$$

where Q is the square n × n matrix whose ith column is the eigenvector qi of A, and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, Λii = λi. Note that only diagonalizable matrices can be factorized in this way. For example, the defective matrix $\left[\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right]$ (which is a shear matrix) cannot be diagonalized.

The n eigenvectors qi are usually normalized, but they need not be. A non-normalized set of n eigenvectors, vi, can also be used as the columns of Q. That can be understood by noting that the magnitude of the eigenvectors in Q gets canceled in the decomposition by the presence of Q−1. If one of the eigenvalues λi has more than one linearly independent eigenvector (that is, the geometric multiplicity of λi is greater than 1), then these eigenvectors for this eigenvalue λi can be chosen to be mutually orthogonal; however, if two eigenvectors belong to two different eigenvalues, it may be impossible for them to be orthogonal to each other (see Example below). One special case is that if A is a normal matrix, then by the spectral theorem, it is always possible to diagonalize A in an orthonormal basis {qi}.

The decomposition can be derived from the fundamental property of eigenvectors:

$$\begin{aligned}\mathbf{A}\mathbf{v} &= \lambda\mathbf{v} \\ \mathbf{A}\mathbf{Q} &= \mathbf{Q}\mathbf{\Lambda} \\ \mathbf{A} &= \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}.\end{aligned}$$

The linearly independent eigenvectors qi with nonzero eigenvalues form a basis (not necessarily orthonormal) for all possible products Ax, for x ∈ Cn, which is the same as the image (or range) of the corresponding matrix transformation, and also the column space of the matrix A. The number of linearly independent eigenvectors qi with nonzero eigenvalues is equal to the rank of the matrix A, and also the dimension of the image (or range) of the corresponding matrix transformation, as well as its column space.
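As a sketch, the factorization can be computed numerically with NumPy's `eig`, which returns the eigenvalues and the eigenvector columns of Q (the example matrix is an assumption):

```python
import numpy as np

# Factor an assumed diagonalizable example matrix as A = Q Lambda Q^{-1}.
A = np.array([[1.0, 0.0],
              [1.0, 3.0]])
lam, Q = np.linalg.eig(A)       # columns of Q are eigenvectors of A
Lam = np.diag(lam)

# Reconstruct A from its eigendecomposition.
A_rebuilt = Q @ Lam @ np.linalg.inv(Q)
assert np.allclose(A, A_rebuilt)

# Rank equals the number of nonzero eigenvalues for a diagonalizable matrix.
print(np.linalg.matrix_rank(A), np.count_nonzero(~np.isclose(lam, 0.0)))
```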

The linearly independent eigenvectors qi with an eigenvalue of zero form a basis (which can be chosen to be orthonormal) for the null space (also known as the kernel) of the matrix transformation A.

### Example

The 2 × 2 real matrix

$$\mathbf{A} = \begin{bmatrix}1 & 0 \\ 1 & 3\end{bmatrix}$$

may be decomposed into a diagonal matrix through multiplication by a non-singular matrix

$$\mathbf{B} = \begin{bmatrix}a & b \\ c & d\end{bmatrix} \in \mathbb{R}^{2 \times 2}.$$

Then

$$\begin{bmatrix}a & b \\ c & d\end{bmatrix}^{-1}\begin{bmatrix}1 & 0 \\ 1 & 3\end{bmatrix}\begin{bmatrix}a & b \\ c & d\end{bmatrix} = \begin{bmatrix}x & 0 \\ 0 & y\end{bmatrix}$$

for some real diagonal matrix $\left[\begin{smallmatrix}x&0\\0&y\end{smallmatrix}\right]$.

Multiplying both sides of the equation on the left by B:

$$\begin{bmatrix}1 & 0 \\ 1 & 3\end{bmatrix}\begin{bmatrix}a & b \\ c & d\end{bmatrix} = \begin{bmatrix}a & b \\ c & d\end{bmatrix}\begin{bmatrix}x & 0 \\ 0 & y\end{bmatrix}.$$

The above equation can be decomposed into two simultaneous equations:

$$\begin{cases}\begin{bmatrix}1 & 0 \\ 1 & 3\end{bmatrix}\begin{bmatrix}a \\ c\end{bmatrix} = \begin{bmatrix}ax \\ cx\end{bmatrix} \\[1ex] \begin{bmatrix}1 & 0 \\ 1 & 3\end{bmatrix}\begin{bmatrix}b \\ d\end{bmatrix} = \begin{bmatrix}by \\ dy\end{bmatrix}\end{cases}$$

Factoring out the eigenvalues x and y:

$$\begin{cases}\begin{bmatrix}1 & 0 \\ 1 & 3\end{bmatrix}\begin{bmatrix}a \\ c\end{bmatrix} = x\begin{bmatrix}a \\ c\end{bmatrix} \\[1ex] \begin{bmatrix}1 & 0 \\ 1 & 3\end{bmatrix}\begin{bmatrix}b \\ d\end{bmatrix} = y\begin{bmatrix}b \\ d\end{bmatrix}\end{cases}$$

Letting

$$\mathbf{a} = \begin{bmatrix}a \\ c\end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix}b \\ d\end{bmatrix},$$

this gives us two vector equations:

$$\begin{cases}\mathbf{A}\mathbf{a} = x\mathbf{a} \\ \mathbf{A}\mathbf{b} = y\mathbf{b}\end{cases}$$

which can be represented by a single vector equation involving two solutions as eigenvalues:

$$\mathbf{A}\mathbf{u} = \lambda\mathbf{u}$$

where λ represents the two eigenvalues x and y, and u represents the vectors a and b.

Shifting λu to the left-hand side and factoring u out:

$$\left(\mathbf{A} - \lambda\mathbf{I}\right)\mathbf{u} = \mathbf{0}$$

Since B is non-singular, it is essential that u is nonzero. Therefore,

$$\det\left(\mathbf{A} - \lambda\mathbf{I}\right) = 0.$$

Thus

$$(1 - \lambda)(3 - \lambda) = 0,$$

giving us the solutions of the eigenvalues for the matrix A as λ = 1 or λ = 3, and the resulting diagonal matrix from the eigendecomposition of A is thus $\left[\begin{smallmatrix}1&0\\0&3\end{smallmatrix}\right]$.
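This step of the worked example can be checked numerically; the eigenvector directions (−2, 1) and (0, 1) used here anticipate the solution of the simultaneous equations:

```python
import numpy as np

# Numerical check of the eigenvalues of A = [[1, 0], [1, 3]] from the example.
A = np.array([[1.0, 0.0],
              [1.0, 3.0]])
lam = np.sort(np.linalg.eigvals(A).real)
print(lam)   # eigenvalues 1 and 3

# The directions (-2, 1) and (0, 1) satisfy A v = lambda v for lambda = 1, 3.
v1 = np.array([-2.0, 1.0])
v3 = np.array([0.0, 1.0])
assert np.allclose(A @ v1, 1.0 * v1)
assert np.allclose(A @ v3, 3.0 * v3)
```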

Putting the solutions back into the above simultaneous equations:

$$\begin{cases}\begin{bmatrix}1 & 0 \\ 1 & 3\end{bmatrix}\begin{bmatrix}a \\ c\end{bmatrix} = 1\begin{bmatrix}a \\ c\end{bmatrix} \\[1ex] \begin{bmatrix}1 & 0 \\ 1 & 3\end{bmatrix}\begin{bmatrix}b \\ d\end{bmatrix} = 3\begin{bmatrix}b \\ d\end{bmatrix}\end{cases}$$

Solving the equations, we have

$$a = -2c \quad \text{and} \quad b = 0, \qquad c, d \in \mathbb{R}.$$

Thus the matrix B required for the eigendecomposition of A is

$$\mathbf{B} = \begin{bmatrix}-2c & 0 \\ c & d\end{bmatrix}, \qquad c, d \in \mathbb{R},$$

that is:

$$\begin{bmatrix}-2c & 0 \\ c & d\end{bmatrix}^{-1}\begin{bmatrix}1 & 0 \\ 1 & 3\end{bmatrix}\begin{bmatrix}-2c & 0 \\ c & d\end{bmatrix} = \begin{bmatrix}1 & 0 \\ 0 & 3\end{bmatrix}, \qquad c, d \in \mathbb{R}.$$

### Matrix inverse via eigendecomposition

If a matrix A can be eigendecomposed and if none of its eigenvalues are zero, then A is invertible and its inverse is given by

$$\mathbf{A}^{-1} = \mathbf{Q}\mathbf{\Lambda}^{-1}\mathbf{Q}^{-1}$$

If A is a symmetric matrix, then, since Q is formed from the eigenvectors of A, Q is guaranteed to be an orthogonal matrix, therefore Q−1 = QT. Furthermore, because Λ is a diagonal matrix, its inverse is easy to calculate:

$$\left[\Lambda^{-1}\right]_{ii} = \frac{1}{\lambda_i}$$

#### Practical implications

When eigendecomposition is used on a matrix of measured, real data, the inverse may be less valid when all eigenvalues are used unmodified in the form above. This is because as eigenvalues become relatively small, their contribution to the inversion is large.
Those near zero or at the "noise" of the measurement system will have undue influence and could hamper solutions (detection) using the inverse.[4] Two mitigations have been proposed: truncating small or zero eigenvalues, and extending the lowest reliable eigenvalue to those below it.

The first mitigation method is similar to a sparse sample of the original matrix, removing components that are not considered valuable. However, if the solution or detection process is near the noise level, truncating may remove components that influence the desired solution.

The second mitigation extends the eigenvalue so that lower values have much less influence over inversion but still contribute, such that solutions near the noise will still be found.

The reliable eigenvalue can be found by assuming that eigenvalues of extremely similar and low value are a good representation of measurement noise (which is assumed low for most systems).

If the eigenvalues are rank-sorted by value, then the reliable eigenvalue can be found by minimization of the Laplacian of the sorted eigenvalues:[5]

$$\min\left|\nabla^2\lambda_{\mathrm{s}}\right|$$

where the eigenvalues are subscripted with an s to denote being sorted. The position of the minimization is the lowest reliable eigenvalue. In measurement systems, the square root of this reliable eigenvalue is the average noise over the components of the system.
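A loose NumPy illustration of this heuristic, under clearly stated assumptions: the spectrum is synthetic, the discrete Laplacian is approximated by a second difference of the rank-sorted eigenvalues, and this is only a sketch of the idea, not the cited authors' exact procedure.

```python
import numpy as np

def lowest_reliable_eigenvalue(eigvals):
    """Sketch of the noise-floor heuristic: minimize |second difference|
    of the rank-sorted eigenvalues."""
    lam_s = np.sort(eigvals)[::-1]        # rank-sorted, descending
    lap = np.abs(np.diff(lam_s, n=2))     # |discrete Laplacian| of lambda_s
    k = int(np.argmin(lap)) + 1           # center index of the flattest stencil
    return lam_s[k]

# Synthetic spectrum: strong "signal" eigenvalues over a flat noise floor ~1.0.
rng = np.random.default_rng(0)
spectrum = np.concatenate([[100.0, 40.0, 10.0],
                           1.0 + 0.01 * rng.standard_normal(20)])
lam_reliable = lowest_reliable_eigenvalue(spectrum)
print(lam_reliable)   # lands in the flat noise region near 1.0
```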

## Functional calculus

The eigendecomposition allows for much easier computation of power series of matrices. If f(x) is given by

$$f(x) = a_0 + a_1 x + a_2 x^2 + \cdots$$

then we know that

$$f\left(\mathbf{A}\right) = \mathbf{Q}\,f\left(\mathbf{\Lambda}\right)\mathbf{Q}^{-1}$$

Because Λ is a diagonal matrix, functions of Λ are very easy to calculate:

$$\left[f\left(\mathbf{\Lambda}\right)\right]_{ii} = f\left(\lambda_i\right)$$

The off-diagonal elements of f(Λ) are zero; that is, f(Λ) is also a diagonal matrix. Therefore, calculating f(A) reduces to just calculating the function on each of the eigenvalues.

A similar technique works more generally with the holomorphic functional calculus, using

$$\mathbf{A}^{-1} = \mathbf{Q}\mathbf{\Lambda}^{-1}\mathbf{Q}^{-1}$$

from above. Once again, we find that

$$\left[f\left(\mathbf{\Lambda}\right)\right]_{ii} = f\left(\lambda_i\right)$$

### Examples

$$\begin{aligned}\mathbf{A}^2 &= \left(\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}\right)\left(\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}\right) = \mathbf{Q}\mathbf{\Lambda}\left(\mathbf{Q}^{-1}\mathbf{Q}\right)\mathbf{\Lambda}\mathbf{Q}^{-1} = \mathbf{Q}\mathbf{\Lambda}^2\mathbf{Q}^{-1} \\ \mathbf{A}^n &= \mathbf{Q}\mathbf{\Lambda}^n\mathbf{Q}^{-1} \\ \exp\mathbf{A} &= \mathbf{Q}\exp(\mathbf{\Lambda})\mathbf{Q}^{-1}\end{aligned}$$

These are examples for the functions f(x) = x², f(x) = xⁿ, and f(x) = exp x; here exp A is the matrix exponential.
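These identities can be sketched in NumPy (the example matrix is an assumption; the matrix exponential is checked against a truncated Taylor series rather than a library routine, to keep the sketch self-contained):

```python
import numpy as np

# Functional calculus sketch: f(A) = Q f(Lambda) Q^{-1}, applying f to the
# eigenvalues only (assumed example matrix).
A = np.array([[1.0, 0.0],
              [1.0, 3.0]])
lam, Q = np.linalg.eig(A)
Qinv = np.linalg.inv(Q)

A_squared = Q @ np.diag(lam**2) @ Qinv
A_exp = Q @ np.diag(np.exp(lam)) @ Qinv

# Check A^2 directly, and exp(A) against a truncated Taylor series
# (30 terms is ample for eigenvalues of this size).
assert np.allclose(A_squared, A @ A)
series = np.zeros_like(A)
term = np.eye(2)
for k in range(1, 30):
    series += term
    term = term @ A / k
assert np.allclose(A_exp, series)
```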

## Decomposition for special matrices

When A is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.

### Normal matrices

A complex-valued square matrix A is normal (meaning A*A = AA*, where A* is the conjugate transpose) if and only if it can be decomposed as

$$\mathbf{A} = \mathbf{U}\mathbf{\Lambda}\mathbf{U}^{*}$$

where U is a unitary matrix (meaning U* = U−1) and Λ = diag(λ1, ..., λn) is a diagonal matrix.[6] The columns u1, ..., un of U form an orthonormal basis and are eigenvectors of A with corresponding eigenvalues λ1, ..., λn.

If A is restricted to be a Hermitian matrix (A = A*), then Λ has only real-valued entries. If A is restricted to a unitary matrix, then Λ takes all its values on the complex unit circle, that is, |λi| = 1.
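For the Hermitian case, NumPy's `eigh` produces exactly this structure: real eigenvalues and a unitary eigenvector matrix (the example matrix is an assumption):

```python
import numpy as np

# Hermitian example (assumed): A = A*, so A = U Lambda U* with U unitary
# and Lambda real.
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
assert np.allclose(A, A.conj().T)                 # A is Hermitian

lam, U = np.linalg.eigh(A)                        # lam is returned real
assert np.allclose(U.conj().T @ U, np.eye(2))     # U is unitary
assert np.allclose(A, U @ np.diag(lam) @ U.conj().T)
print(lam)
```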

### Real symmetric matrices

As a special case, for every n × n real symmetric matrix, the eigenvalues are real and the eigenvectors can be chosen real and orthonormal. Thus a real symmetric matrix A can be decomposed as

$$\mathbf{A} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{\mathrm{T}}$$

where Q is an orthogonal matrix whose columns are (the above chosen, real and orthonormal) eigenvectors of A, and Λ is a diagonal matrix whose entries are the eigenvalues of A.[7]

## Useful facts

### Useful facts regarding eigenvalues

- The product of the eigenvalues is equal to the determinant of A:
  $$\det\left(\mathbf{A}\right) = \prod_{i=1}^{N_\lambda} \lambda_i^{n_i}$$
  Note that each eigenvalue is raised to the power ni, the algebraic multiplicity.
- The sum of the eigenvalues is equal to the trace of A:
  $$\operatorname{tr}\left(\mathbf{A}\right) = \sum_{i=1}^{N_\lambda} n_i \lambda_i$$
  Note that each eigenvalue is multiplied by ni, the algebraic multiplicity.
- If the eigenvalues of A are λi, and A is invertible, then the eigenvalues of A−1 are simply λi−1.
- If the eigenvalues of A are λi, then the eigenvalues of f(A) are simply f(λi), for any holomorphic function f.

### Useful facts regarding eigenvectors

- If A is Hermitian and full-rank, the basis of eigenvectors may be chosen to be mutually orthogonal. The eigenvalues are real.
- The eigenvectors of A−1 are the same as the eigenvectors of A.
- Eigenvectors are only defined up to a multiplicative constant. That is, if Av = λv then cv is also an eigenvector for any scalar c ≠ 0. In particular, −v and eiθv (for any θ) are also eigenvectors.
- In the case of degenerate eigenvalues (an eigenvalue appearing more than once), the eigenvectors have an additional freedom of rotation; that is to say, any linear (orthonormal) combination of eigenvectors sharing an eigenvalue (in the degenerate subspace) is itself an eigenvector (in the subspace).
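A quick NumPy check of the determinant, trace, and inverse-eigenvalue facts above (the symmetric example matrix is an assumption, chosen so all eigenvalues are real and nonzero):

```python
import numpy as np

# Assumed symmetric (hence diagonalizable, real-spectrum) example matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 2.0],
              [0.0, 2.0, 5.0]])
lam = np.linalg.eigvalsh(A)

# Product of eigenvalues = det(A); sum of eigenvalues = tr(A).
assert np.isclose(np.prod(lam), np.linalg.det(A))
assert np.isclose(np.sum(lam), np.trace(A))

# Eigenvalues of A^{-1} are the reciprocals of the eigenvalues of A.
lam_inv = np.sort(np.linalg.eigvalsh(np.linalg.inv(A)))
assert np.allclose(lam_inv, np.sort(1.0 / lam))
print(lam)
```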
### Useful facts regarding eigendecomposition

- A can be eigendecomposed if and only if the number of linearly independent eigenvectors, Nv, equals the dimension of an eigenvector: Nv = N.
- If the field of scalars is algebraically closed and if p(λ) has no repeated roots, that is, if Nλ = N, then A can be eigendecomposed.
- The statement "A can be eigendecomposed" does not imply that A has an inverse.
- The statement "A has an inverse" does not imply that A can be eigendecomposed. A counterexample is $\left[\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right]$, which is an invertible defective matrix.

### Useful facts regarding matrix inverse

- A can be inverted if and only if all eigenvalues are nonzero:
  $$\lambda_i \neq 0 \quad \forall\, i$$
- If λi ≠ 0 for all i and Nv = N, the inverse is given by
  $$\mathbf{A}^{-1} = \mathbf{Q}\mathbf{\Lambda}^{-1}\mathbf{Q}^{-1}$$

## Numerical computations

### Numerical computation of eigenvalues

Suppose that we want to compute the eigenvalues of a given matrix. If the matrix is small, we can compute them symbolically using the characteristic polynomial. However, this is often impossible for larger matrices, in which case we must use a numerical method.

In practice, eigenvalues of large matrices are not computed using the characteristic polynomial. Computing the polynomial becomes expensive in itself, and exact (symbolic) roots of a high-degree polynomial can be difficult to compute and express: the Abel–Ruffini theorem implies that the roots of high-degree (5 or above) polynomials cannot in general be expressed simply using nth roots. Therefore, general algorithms to find eigenvectors and eigenvalues are iterative.

Iterative numerical algorithms for approximating roots of polynomials exist, such as Newton's method, but in general it is impractical to compute the characteristic polynomial and then apply these methods. One reason is that small round-off errors in the coefficients of the characteristic polynomial can lead to large errors in the eigenvalues and eigenvectors: the roots are an extremely ill-conditioned function of the coefficients.[8]

A simple and accurate iterative method is the power method: a random vector v is chosen and a sequence of unit vectors is computed as

$$\frac{\mathbf{A}\mathbf{v}}{\left\|\mathbf{A}\mathbf{v}\right\|}, \frac{\mathbf{A}^2\mathbf{v}}{\left\|\mathbf{A}^2\mathbf{v}\right\|}, \frac{\mathbf{A}^3\mathbf{v}}{\left\|\mathbf{A}^3\mathbf{v}\right\|}, \ldots$$

This sequence will almost always converge to an eigenvector corresponding to the eigenvalue of greatest magnitude, provided that v has a nonzero component of this eigenvector in the eigenvector basis (and also provided that there is only one eigenvalue of greatest magnitude). This simple algorithm is useful in some practical applications; for example, Google uses it to calculate the page rank of documents in their search engine.[9] Also, the power method is the starting point for many more sophisticated algorithms.
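The power iteration just described can be sketched in a few lines of NumPy (the example matrix, iteration count, and random seed are assumptions):

```python
import numpy as np

def power_method(A, iters=200, seed=0):
    """Power iteration: repeatedly apply A and renormalize; return the
    Rayleigh quotient (dominant eigenvalue estimate) and the iterate."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])   # random starting vector
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)            # keep the iterate a unit vector
    return (v @ A @ v) / (v @ v), v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam_max, v = power_method(A)
print(lam_max)   # dominant eigenvalue (5 + sqrt(5)) / 2 for this matrix
```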
For example, by keeping not just the last vector in the sequence, but instead looking at the span of all the vectors in the sequence, one can get a better (faster converging) approximation for the eigenvector, and this idea is the basis of Arnoldi iteration.[8] Alternatively, the important QR algorithm is also based on a subtle transformation of a power method.[8]

### Numerical computation of eigenvectors

Once the eigenvalues are computed, the eigenvectors could be calculated by solving the equation

$$\left(\mathbf{A} - \lambda_i\mathbf{I}\right)\mathbf{v}_{i,j} = \mathbf{0}$$

using Gaussian elimination or any other method for solving matrix equations.

However, in practical large-scale eigenvalue methods, the eigenvectors are usually computed in other ways, as a byproduct of the eigenvalue computation. In power iteration, for example, the eigenvector is actually computed before the eigenvalue (which is typically computed by the Rayleigh quotient of the eigenvector).[8] In the QR algorithm for a Hermitian matrix (or any normal matrix), the orthonormal eigenvectors are obtained as a product of the Q matrices from the steps in the algorithm.[8] (For more general matrices, the QR algorithm yields the Schur decomposition first, from which the eigenvectors can be obtained by a back-substitution procedure.[10]) For Hermitian matrices, the divide-and-conquer eigenvalue algorithm is more efficient than the QR algorithm if both eigenvectors and eigenvalues are desired.[8]

## Additional topics

### Generalized eigenspaces

Recall that the geometric multiplicity of an eigenvalue can be described as the dimension of the associated eigenspace, the nullspace of λI − A. The algebraic multiplicity can also be thought of as a dimension: it is the dimension of the associated generalized eigenspace (1st sense), which is the nullspace of the matrix (λI − A)k for any sufficiently large k. That is, it is the space of generalized eigenvectors (first sense), where a generalized eigenvector is any vector which eventually becomes 0 if λI − A is applied to it enough times successively. Any eigenvector is a generalized eigenvector, and so each eigenspace is contained in the associated generalized eigenspace. This provides an easy proof that the geometric multiplicity is always less than or equal to the algebraic multiplicity.
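For the defective shear matrix mentioned earlier, a rank computation shows the ordinary eigenspace of λ = 1 is a line while the generalized eigenspace is the whole plane (a NumPy sketch):

```python
import numpy as np

# Shear matrix: lambda = 1 has algebraic multiplicity 2, geometric
# multiplicity 1; (A - I)^2 = 0, so every vector is a generalized eigenvector.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
N = A - np.eye(2)                                   # A - lambda*I, lambda = 1

geometric = 2 - np.linalg.matrix_rank(N)            # dim null(A - I)
generalized = 2 - np.linalg.matrix_rank(N @ N)      # dim null((A - I)^2)
print(geometric, generalized)   # 1 2
```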

This usage should not be confused with the generalized eigenvalue problem described below.

### Conjugate eigenvector

A conjugate eigenvector or coneigenvector is a vector sent after transformation to a scalar multiple of its conjugate, where the scalar is called the conjugate eigenvalue or coneigenvalue of the linear transformation. The coneigenvectors and coneigenvalues represent essentially the same information and meaning as the regular eigenvectors and eigenvalues, but arise when an alternative coordinate system is used. The corresponding equation is

$$\mathbf{A}\mathbf{v} = \lambda\mathbf{v}^{*}.$$

For example, in coherent electromagnetic scattering theory, the linear transformation A represents the action performed by the scattering object, and the eigenvectors represent polarization states of the electromagnetic wave. In optics, the coordinate system is defined from the wave's viewpoint, known as the Forward Scattering Alignment (FSA), and gives rise to a regular eigenvalue equation, whereas in radar, the coordinate system is defined from the radar's viewpoint, known as the Back Scattering Alignment (BSA), and gives rise to a coneigenvalue equation.

### Generalized eigenvalue problem

A generalized eigenvalue problem (second sense) is the problem of finding a (nonzero) vector v that obeys

$$\mathbf{A}\mathbf{v} = \lambda\mathbf{B}\mathbf{v}$$

where A and B are matrices. If v obeys this equation, with some λ, then we call v the generalized eigenvector of A and B (in the second sense), and λ is called the generalized eigenvalue of A and B (in the second sense) which corresponds to the generalized eigenvector v. The possible values of λ must obey the following equation:

$$\det(\mathbf{A} - \lambda\mathbf{B}) = 0.$$

If n linearly independent vectors {v1, ..., vn} can be found, such that for every i ∈ {1, ..., n}, Avi = λiBvi, then we define the matrices P and D such that

$$P = \begin{bmatrix} | & & | \\ \mathbf{v}_1 & \cdots & \mathbf{v}_n \\ | & & | \end{bmatrix} \equiv \begin{bmatrix}(\mathbf{v}_1)_1 & \cdots & (\mathbf{v}_n)_1 \\ \vdots & & \vdots \\ (\mathbf{v}_1)_n & \cdots & (\mathbf{v}_n)_n\end{bmatrix}$$

$$(D)_{ij} = \begin{cases}\lambda_i, & \text{if } i = j \\ 0, & \text{otherwise}\end{cases}$$

Then the following equality holds:

$$\mathbf{A} = \mathbf{B}\mathbf{P}\mathbf{D}\mathbf{P}^{-1}$$

The proof:

$$\mathbf{A}\mathbf{P} = \mathbf{A}\begin{bmatrix} | & & | \\ \mathbf{v}_1 & \cdots & \mathbf{v}_n \\ | & & | \end{bmatrix} = \begin{bmatrix} | & & | \\ A\mathbf{v}_1 & \cdots & A\mathbf{v}_n \\ | & & | \end{bmatrix} = \begin{bmatrix} | & & | \\ \lambda_1 B\mathbf{v}_1 & \cdots & \lambda_n B\mathbf{v}_n \\ | & & | \end{bmatrix} = \begin{bmatrix} | & & | \\ B\mathbf{v}_1 & \cdots & B\mathbf{v}_n \\ | & & | \end{bmatrix}\mathbf{D} = \mathbf{B}\mathbf{P}\mathbf{D}$$

And since P is invertible, we multiply the equation from the right by its inverse, finishing the proof.
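A NumPy sketch of the generalized problem (the example matrices are assumptions; since this B is invertible, the pairs are obtained here through the standard problem on B⁻¹A):

```python
import numpy as np

# Assumed example matrices for the generalized problem A v = lambda B v.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# B is invertible, so solve the equivalent standard problem on inv(B) @ A.
lam, P = np.linalg.eig(np.linalg.inv(B) @ A)
D = np.diag(lam)

# Each column satisfies A v = lambda B v ...
for i in range(2):
    assert np.allclose(A @ P[:, i], lam[i] * B @ P[:, i])

# ... and the factorization A = B P D P^{-1} from the text holds.
assert np.allclose(A, B @ P @ D @ np.linalg.inv(P))
```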

The set of matrices of the form A − λB, where λ is a complex number, is called a pencil; the term matrix pencil can also refer to the pair (A, B) of matrices.[11]

If B is invertible, then the original problem can be written in the form

$$\mathbf{B}^{-1}\mathbf{A}\mathbf{v} = \lambda\mathbf{v}$$

which is a standard eigenvalue problem. However, in most situations it is preferable not to perform the inversion, but rather to solve the generalized eigenvalue problem as stated originally. This is especially important if A and B are Hermitian matrices, since in this case B−1A is not generally Hermitian and important properties of the solution are no longer apparent.
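One standard symmetry-preserving alternative to forming B⁻¹A is a Cholesky reduction B = LLᵀ, which turns the pencil into an ordinary symmetric problem and yields B-orthogonal eigenvectors; the following sketch uses assumed example matrices:

```python
import numpy as np

# Assumed symmetric A and positive-definite B; avoid inv(B) @ A (which is
# not symmetric) via the Cholesky reduction B = L L^T.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0, 0.5],
              [0.5, 1.0]])

L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
C = Linv @ A @ Linv.T            # symmetric; same eigenvalues as the pencil
lam, Y = np.linalg.eigh(C)
V = Linv.T @ Y                   # map back: generalized eigenvectors

# Each column solves A v = lambda B v, and the vectors are B-orthonormal.
for i in range(2):
    assert np.allclose(A @ V[:, i], lam[i] * B @ V[:, i])
assert np.allclose(V.T @ B @ V, np.eye(2))   # P* B P = I from the text
```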

If A and B are both symmetric or Hermitian, and B is also a positive-definite matrix, the eigenvalues λi are real and eigenvectors v1 and v2 with distinct eigenvalues are B-orthogonal (v1*Bv2 = 0).[12] In this case, eigenvectors can be chosen so that the matrix P defined above satisfies

$$\mathbf{P}^{*}\mathbf{B}\mathbf{P} = \mathbf{I} \quad \text{or} \quad \mathbf{P}\mathbf{P}^{*}\mathbf{B} = \mathbf{I},$$

and there exists a basis of generalized eigenvectors (it is not a defective problem).[11] This case is sometimes called a Hermitian definite pencil or definite pencil.[11]

## See also

- Eigenvalue perturbation
- Frobenius covariant
- Householder transformation
- Jordan normal form
- List of matrices
- Matrix decomposition
- Singular value decomposition
- Sylvester's formula

## Notes

1. Golub & Van Loan (1996, p. 310)
2. Kreyszig (1972, p. 273)
3. Nering (1970, p. 270)
4. Hayden, A. F.; Twede, D. R. (2002). Shen, Sylvia S. (ed.). "Observations on relationship between eigenvalues, instrument noise and detection performance". Imaging Spectrometry VIII. Proceedings of SPIE. 4816: 355. Bibcode:2002SPIE.4816..355H. doi:10.1117/12.453777.
5. Twede, D. R.; Hayden, A. F. (2004). Shen, Sylvia S.; Lewis, Paul E. (eds.). "Refinement and generalization of the extension method of covariance matrix inversion by regularization". Imaging Spectrometry IX. Proceedings of SPIE. 5159: 299. Bibcode:2004SPIE.5159..299T. doi:10.1117/12.506993.
6. Horn & Johnson (1985), p. 133, Theorem 2.5.3
7. Horn & Johnson (1985), p. 136, Corollary 2.5.11
8. Trefethen, Lloyd N.; Bau, David (1997). Numerical Linear Algebra. SIAM. ISBN 978-0-89871-361-9.
9. Ipsen, Ilse; Wills, Rebecca M. Analysis and Computation of Google's PageRank. 7th IMACS International Symposium on Iterative Methods in Scientific Computing, Fields Institute, Toronto, Canada, 5–8 May 2005.
10. Quarteroni, Alfio; Sacco, Riccardo; Saleri, Fausto (2000). "Section 5.8.2". Numerical Mathematics. Springer. p. 15. ISBN 978-0-387-98959-4.
11. Bai, Z.; Demmel, J.; Dongarra, J.; Ruhe, A.; Van Der Vorst, H. (eds.) (2000). "Generalized Hermitian Eigenvalue Problems". Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide. Philadelphia: SIAM. ISBN 978-0-89871-471-5.
12. Parlett, Beresford N. (1998). The Symmetric Eigenvalue Problem (Reprint ed.). Philadelphia: Society for Industrial and Applied Mathematics. p. 345. doi:10.1137/1.9781611971163. ISBN 978-0-89871-402-9.

## References

- Franklin, Joel N. (1968). Matrix Theory. Dover Publications. ISBN 978-0-486-41179-8.
- Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Baltimore: Johns Hopkins University Press. ISBN 978-0-8018-5414-9.
- Horn, Roger A.; Johnson, Charles R. (1985). Matrix Analysis. Cambridge University Press. ISBN 978-0-521-38632-6.
- Horn, Roger A.; Johnson, Charles R. (1991). Topics in Matrix Analysis. Cambridge University Press. ISBN 978-0-521-46713-1.
- Kreyszig, Erwin (1972). Advanced Engineering Mathematics (3rd ed.). New York: Wiley. ISBN 978-0-471-50728-4.
- Nering, Evar D. (1970). Linear Algebra and Matrix Theory (2nd ed.). New York: Wiley. LCCN 76091646.
- Strang, G. (1998). Introduction to Linear Algebra (3rd ed.). Wellesley-Cambridge Press. ISBN 978-0-9614088-5-5.
