If for some vector $v$, the output of a linear transformation $A$ is just a scaled multiple of the same vector without any change in direction, i.e. $Av = \lambda v$, we call such a vector an “invariant” direction or an eigenvector, and the scalar $\lambda$ the eigenvalue.
Since $v$ is a non-zero vector, we are looking for a non-trivial solution to the homogeneous system of equations $(A - \lambda I)v = 0$. Thus $A - \lambda I$ is a singular matrix, and consequently $\det(A - \lambda I) = 0$.
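This condition is easy to check numerically. A minimal sketch using NumPy, with an arbitrary illustrative matrix (the matrix itself is an assumption, not from the text):

```python
import numpy as np

# Illustrative 2x2 matrix (assumed for this example).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Eigenvalues are the values of lam for which det(A - lam*I) = 0.
eigenvalues, eigenvectors = np.linalg.eig(A)

# For each eigenvalue, A - lam*I is singular: its determinant is (numerically) zero.
for lam in eigenvalues:
    assert abs(np.linalg.det(A - lam * np.eye(2))) < 1e-9

# And each eigenvector v satisfies A @ v = lam * v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```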
Invariant Subspace
If some vector $v$ is an eigenvector for some matrix $A$ such that $Av = \lambda v$, then all scalar multiples $cv$ of $v$ are eigenvectors as well.
Because the space of all multiples of $v$ passes through the origin (take $c = 0$) and is closed under scaling, these eigenvectors form a subspace. Such a subspace is known as an invariant subspace.
The invariant subspace of a linear transformation $A$ is the set of vectors $v$ such that $Av = \lambda v$, where $\lambda$ is a scalar.
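A quick numerical check of this closure property, using an assumed diagonal matrix and one of its eigenvectors:

```python
import numpy as np

# Assumed example: a diagonal matrix whose first axis is an invariant direction.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
v = np.array([1.0, 0.0])   # eigenvector with eigenvalue 2
lam = 2.0

# Every scalar multiple c*v is mapped to lam*(c*v): the whole line through v is invariant.
for c in [-3.0, 0.5, 10.0]:
    assert np.allclose(A @ (c * v), lam * (c * v))
```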
Eigenvalues
The scaling factors by which eigenvectors get scaled upon being transformed are called eigenvalues.
The polynomial obtained from $\det(A - \lambda I) = 0$ is called the characteristic polynomial. The roots of this polynomial are the eigenvalues. Due to this, eigenvalues can be:
- Real and distinct
- Real but repeated
- Complex
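One assumed example matrix per case, showing all three kinds of roots:

```python
import numpy as np

# Illustrative matrices (assumed), one per case listed above.
real_distinct = np.array([[2.0, 0.0], [0.0, 5.0]])   # roots 2 and 5
real_repeated = np.array([[3.0, 1.0], [0.0, 3.0]])   # root 3, repeated twice
complex_pair  = np.array([[0.0, -1.0], [1.0, 0.0]])  # 90-degree rotation: roots +i and -i

print(np.linalg.eigvals(real_distinct))
print(np.linalg.eigvals(real_repeated))
print(np.linalg.eigvals(complex_pair))
```

The rotation matrix is a good intuition pump for the complex case: no real direction is left unchanged by a rotation, so no real eigenvector exists.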
Multiplicities of Eigenvalues
- Algebraic Multiplicity (AM) - The number of times an eigenvalue is repeated as a root of the characteristic polynomial.
- Geometric Multiplicity (GM) - The number of linearly independent eigenvectors associated with a particular eigenvalue.
- Note - GM $\le$ AM for each eigenvalue. If GM $<$ AM for any specific eigenvalue, we say that the corresponding eigenvalue is deficient.
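A sketch of a deficient eigenvalue, using the classic assumed example of a Jordan-block-like matrix:

```python
import numpy as np

# Assumed example: eigenvalue 2 with AM = 2 (characteristic polynomial (lam - 2)^2),
# but only one independent eigenvector.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0

# Geometric multiplicity = dimension of the null space of A - lam*I
#                        = n - rank(A - lam*I).
gm = 2 - np.linalg.matrix_rank(A - lam * np.eye(2))
print(gm)  # 1, so GM < AM and the eigenvalue is deficient
```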
Distinct eigenvalues
If the eigenvalues of a transformation are distinct, the eigenvectors are linearly independent.
Consider a linear transformation $A$ on $\mathbb{R}^2$ which has $u_1$ and $u_2$ as its two non-zero eigenvectors, and $\lambda_1$ and $\lambda_2$ as the two distinct eigenvalues corresponding to these vectors.
$u_1$ and $u_2$ are linearly independent if $c_1u_1 + c_2u_2 = 0$ holds only for $c_1 = c_2 = 0$.
If we apply $A$ to $c_1u_1 + c_2u_2 = 0$ and subtract $\lambda_1$ times the original equation from the result, we get

$$c_2(\lambda_2 - \lambda_1)u_2 = 0$$

Because we know that $\lambda_1$ and $\lambda_2$ are distinct and that $u_2$ is a non-zero eigenvector, $c_2 = 0$. Substituting this back into $c_1u_1 + c_2u_2 = 0$ gives $c_1u_1 = 0$, and since $u_1$ is non-zero, $c_1 = 0$ as well.
Thus, if the eigenvalues are distinct, the eigenvectors are linearly independent.
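This conclusion can be checked numerically: for an assumed matrix with distinct eigenvalues, the matrix of eigenvectors has full rank, i.e. its columns are linearly independent.

```python
import numpy as np

# Assumed illustrative matrix with distinct eigenvalues (2 and 5).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)  # columns of P are the eigenvectors

# The eigenvalues are distinct, so the eigenvector matrix has full rank.
assert len(set(np.round(eigenvalues.real, 9))) == 2
assert np.linalg.matrix_rank(P) == 2
```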
Diagonalization
Let $A$ be a matrix in $\mathbb{R}^{n \times n}$ with a set of $n$ linearly independent eigenvectors $u_1, u_2, \dots, u_n$. We can form a matrix $P$ with these eigenvectors as its columns.
If we compute $AP$, then

$$\begin{aligned} AP &= A\begin{bmatrix} u_1 & u_2 & \dots & u_n \end{bmatrix} \\[8pt] &= \begin{bmatrix} Au_1 & Au_2 & \dots & Au_n \end{bmatrix} \\[8pt] &= \begin{bmatrix} \alpha_1 u_1 & \alpha_2 u_2 & \dots & \alpha_n u_n \end{bmatrix} \\[8pt] &= \begin{bmatrix} u_1 & u_2 & \dots & u_n \end{bmatrix} \begin{bmatrix} \alpha_1 & 0 & \dots & 0 \\ 0 & \alpha_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \alpha_n \end{bmatrix} \\[8pt] &= PD \end{aligned}$$

So we end up with $AP = PD$.
Because $P$ is made up of $n$ linearly independent eigenvectors belonging to $\mathbb{R}^n$, $P$ is a square and invertible matrix. Thus we can also say $A = PDP^{-1}$.
Any matrix in $\mathbb{R}^{n \times n}$ with $n$ linearly independent eigenvectors can be decomposed as a product of matrices formed using its eigenvalues and eigenvectors, $A = PDP^{-1}$. This is called the diagonalization or eigendecomposition of the matrix.
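The whole decomposition in a few lines of NumPy, assuming the same illustrative matrix as above:

```python
import numpy as np

# Assumed illustrative matrix with 2 independent eigenvectors.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)  # columns of P are eigenvectors
D = np.diag(eigenvalues)           # eigenvalues on the diagonal

# A P = P D, and since P is invertible, A = P D P^{-1}.
assert np.allclose(A @ P, P @ D)
A_reconstructed = P @ D @ np.linalg.inv(P)
assert np.allclose(A, A_reconstructed)
```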
If any eigenvalue of a matrix is deficient, it means we don’t have enough linearly independent eigenvectors to span the whole $n$-dimensional space. Thus in such a case the matrix is non-diagonalizable.
Powers of a diagonalizable matrix
If $A$ is diagonalizable, it can be decomposed as $A = PDP^{-1}$. This decomposition can be used to write the powers of matrix $A$:

$$A^2 = PDP^{-1}PDP^{-1} = PD^2P^{-1}$$

This can be generalized to $A^k = PD^kP^{-1}$.
As $D$ is a diagonal matrix, calculating the powers of $D$ becomes trivial: $D^k$ just raises each diagonal entry to the $k$-th power.
As $k \to \infty$,
- If any eigenvalue $|\alpha_i| < 1$ then $\alpha_i^k \to 0$.
- If any $|\alpha_i| > 1$ then $\alpha_i^k$ tends to $\infty$ in magnitude.
- If any $\alpha_i = 1$ then $\alpha_i^k = 1$.
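A sketch of computing matrix powers through the decomposition, again with the assumed example matrix, checked against direct repeated multiplication:

```python
import numpy as np

# Assumed illustrative diagonalizable matrix.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigenvalues, P = np.linalg.eig(A)
P_inv = np.linalg.inv(P)

# A^k = P D^k P^{-1}; D^k only needs elementwise powers of the eigenvalues.
k = 5
A_power = P @ np.diag(eigenvalues ** k) @ P_inv
assert np.allclose(A_power, np.linalg.matrix_power(A, k))
```

The payoff is cost: once $P$ and $P^{-1}$ are known, $A^k$ needs one elementwise power and two matrix multiplications, instead of $k - 1$ multiplications.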