- Motivation of iterative methods for eigenvalue computation
- We briefly highlight why a different type of method is required for sparse matrices.
- Key idea of iterative methods for eigenvalue computation
- The key idea goes back to the previous section, where we learned how to transform a matrix to the upper Hessenberg form.
- We can calculate the columns of the orthogonal matrix $Q$ in $A = QHQ^*$ one by one by following a process similar to Gram-Schmidt.
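As a sketch of this idea (notation ours): reading the equality $AQ = QH$, with $H$ upper Hessenberg, one column at a time gives

$$A q_k = h_{1k}\, q_1 + h_{2k}\, q_2 + \cdots + h_{kk}\, q_k + h_{k+1,k}\, q_{k+1},$$

so $q_{k+1}$ is obtained from $A q_k$ by subtracting the components along $q_1, \dots, q_k$ and normalizing, exactly as in Gram-Schmidt.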
- Brief introduction to Conjugate Gradients
- We use the approach from the previous section to find an approximate solution for sparse linear systems $Ax = b$.
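A minimal sketch of conjugate gradients for a symmetric positive definite system $Ax = b$; the function name `cg` and the stopping rule are our choices, not from the notes.

```python
import numpy as np

def cg(A, b, tol=1e-8, maxiter=None):
    """Conjugate gradients for symmetric positive definite A (sketch)."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x              # residual
    p = r.copy()               # first search direction
    rs = r @ r
    for _ in range(maxiter or b.size):
        Ap = A @ p             # the only use of A: one matvec per step
        alpha = rs / (p @ Ap)  # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # next direction, A-conjugate to p
        rs = rs_new
    return x
```

Note that $A$ enters only through matrix-vector products, which is what makes the method attractive for sparse matrices.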
- Arnoldi process
- We use the algorithm from the section above to approximate the eigenvalues of $A$.
- Only simple vector operations and sparse matrix-vector products are required, so this approach is computationally very efficient when the number of steps $k$ is small.
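Concretely (notation ours): after $k$ steps we have an orthonormal basis $Q_k = [q_1, \dots, q_k]$, and the eigenvalues of the small projected matrix

$$H_k = Q_k^* A Q_k \in \mathbb{C}^{k \times k}$$

(the Ritz values) serve as approximations to eigenvalues of $A$; only the $k \times k$ matrix $H_k$ requires a dense eigenvalue solve.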
- Algorithm for the Arnoldi process
- We provide the details of the algorithm using pseudo-code.
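A runnable sketch of the process in Python (modified Gram-Schmidt variant; names ours):

```python
import numpy as np

def arnoldi(A, b, k):
    """k steps of Arnoldi; returns Q (n x (k+1)) and H ((k+1) x k)."""
    n = b.size
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]                 # one sparse matvec per step
        for i in range(j + 1):          # orthogonalize against q_1..q_{j+1}
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0.0:          # lucky breakdown: invariant subspace
            return Q[:, :j + 1], H[:j + 1, :j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H
```

The Ritz values are then, e.g., `np.linalg.eigvals(H[:k, :k])`.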
- See the movies of convergence (Arnoldi convergence.key).
- Understanding the convergence of the Arnoldi process is a complex topic.
- Krylov subspace
- This will form the foundation of all the advanced iterative methods in this class.
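For reference: given a matrix $A$ and a starting vector $b$, the $k$-th Krylov subspace is

$$\mathcal{K}_k(A, b) = \operatorname{span}\{\, b,\ Ab,\ A^2 b,\ \dots,\ A^{k-1} b \,\}.$$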
- Connection between Krylov subspace and Arnoldi
- The orthogonal basis in Arnoldi (see Arnoldi process) spans the sequence of nested Krylov subspaces.
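In symbols (notation as above): if Arnoldi is started from $q_1 = b / \|b\|$, then

$$\operatorname{span}\{q_1, \dots, q_k\} = \mathcal{K}_k(A, b), \qquad \mathcal{K}_1(A, b) \subset \mathcal{K}_2(A, b) \subset \mathcal{K}_3(A, b) \subset \cdots$$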
- Connection between Arnoldi and polynomials of A
- Using polynomials of $A$, we provide some arguments as to why the Arnoldi process typically provides a good approximation of some of the eigenvalues of $A$.
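The starting point of these arguments: every vector in the Krylov subspace is a polynomial in $A$ applied to $b$,

$$x \in \mathcal{K}_k(A, b) \iff x = p(A)\, b \quad \text{for some polynomial } p \text{ with } \deg p \le k - 1,$$

and, roughly speaking, there exist low-degree polynomials that are large at the extremal eigenvalues and small on the rest of the spectrum, which is why those eigenvalues tend to be approximated first.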
- Lanczos process
- We outline the Arnoldi process specialized for symmetric matrices. This is called the Lanczos process.
- Several steps simplify, and the computational cost is reduced.
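In particular (notation ours): for symmetric $A$, the Hessenberg matrix becomes symmetric tridiagonal, usually written $T_k$, and the orthogonalization collapses to a three-term recurrence

$$\beta_k\, q_{k+1} = A q_k - \alpha_k\, q_k - \beta_{k-1}\, q_{k-1}, \qquad \alpha_k = q_k^T A q_k,$$

so each step only needs the two most recent basis vectors.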
- Algorithm for the Lanczos process
- Pseudo-code for the Lanczos process for symmetric matrices.
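A runnable sketch in Python (no reorthogonalization; names ours):

```python
import numpy as np

def lanczos(A, b, k):
    """k steps of Lanczos for symmetric A; returns tridiagonal T_k."""
    alpha = np.zeros(k)              # diagonal of T_k
    beta = np.zeros(k)               # off-diagonal of T_k
    q_prev = np.zeros_like(b, dtype=float)
    q = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ q                    # one sparse matvec per step
        alpha[j] = q @ w
        w = w - alpha[j] * q - (beta[j - 1] * q_prev if j > 0 else 0.0)
        beta[j] = np.linalg.norm(w)
        if beta[j] == 0.0:           # invariant subspace found
            k = j + 1
            break
        q_prev, q = q, w / beta[j]
    return (np.diag(alpha[:k])
            + np.diag(beta[:k - 1], 1)
            + np.diag(beta[:k - 1], -1))
```

Eigenvalue estimates are then obtained from the small tridiagonal matrix, e.g., with `np.linalg.eigvalsh(T)`.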
- Computational cost of Arnoldi and Lanczos
- As expected, Lanczos is computationally cheaper than Arnoldi.
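A rough per-step count, assuming an $n \times n$ matrix with $\mathrm{nnz}(A)$ nonzeros: step $k$ of Arnoldi costs one matvec plus orthogonalization against all $k$ previous basis vectors, about $O(\mathrm{nnz}(A) + nk)$ work and $O(nk)$ storage, while a Lanczos step costs $O(\mathrm{nnz}(A) + n)$ and stores only three vectors (unless the basis is kept for computing eigenvectors).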
- Convergence of Lanczos eigenvalues for symmetric matrices
- For symmetric matrices, we can prove relatively sharp bounds on the error of the Lanczos eigenvalue approximations.
- They involve Chebyshev polynomials and the Krylov subspace.
- Results for unsymmetric matrices are much more difficult to derive.
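One representative bound of this type, for the largest eigenvalue (a Kaniel-Paige-style estimate; the exact form varies between sources):

$$0 \le \lambda_1 - \theta_1^{(k)} \le (\lambda_1 - \lambda_n)\left(\frac{\tan \angle(b, v_1)}{T_{k-1}(1 + 2\gamma)}\right)^2, \qquad \gamma = \frac{\lambda_1 - \lambda_2}{\lambda_2 - \lambda_n},$$

where $\theta_1^{(k)}$ is the largest Ritz value after $k$ steps, $v_1$ is the eigenvector for $\lambda_1$, and $T_{k-1}$ is the Chebyshev polynomial of degree $k-1$; since $T_{k-1}$ grows exponentially away from $[-1, 1]$, the error decays rapidly with $k$ when the gap $\lambda_1 - \lambda_2$ is not too small.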
- Convergence of Lanczos inner eigenvalues
- We can prove theoretical results for the convergence of inner eigenvalues.
- Inner eigenvalues tend to converge more slowly than the extremal eigenvalues.
- Ghost eigenvalues in the Lanczos process
- Because of the short recurrence used in Lanczos, roundoff errors make the basis vectors lose orthogonality; some eigenvalues may then fail to converge properly, and spurious copies of converged eigenvalues can appear.
- These are called “ghost eigenvalues.”
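A small experiment one can try, reusing the `lanczos` sketch above; the test matrix and sizes are arbitrary choices, and the exact behavior depends on the floating-point arithmetic:

```python
import numpy as np

# Diagonal test matrix with known spectrum 1, ..., 100.
lam = np.linspace(1.0, 100.0, 200)
A = np.diag(lam)
rng = np.random.default_rng(0)

T = lanczos(A, rng.standard_normal(200), 150)
ritz = np.sort(np.linalg.eigvalsh(T))

# Without reorthogonalization, orthogonality is lost once the extremal
# eigenvalues converge; eig(T_k) may then contain several nearly
# identical copies ("ghosts") of lam.max() = 100.
print(ritz[-6:])
```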