- Why eigenvalues?
- We give some examples of operations that become very simple to compute when we use eigenvalues.
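One classic example (illustrative, not necessarily the notes' choice): once an eigendecomposition $A = V \Lambda V^{-1}$ is known, a matrix power like $A^{100}$ reduces to raising scalars to a power.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, V = np.linalg.eig(A)            # A = V diag(lam) V^{-1}

# A^100 via the eigendecomposition: raise the eigenvalues to the power.
A100 = V @ np.diag(lam**100) @ np.linalg.inv(V)
print(np.allclose(A100, np.linalg.matrix_power(A, 100)))  # True
```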
- Eigenvalues cannot be computed exactly
- We show that computing eigenvalues is difficult and cannot, in general, be done exactly: the eigenvalues are the roots of the characteristic polynomial, and no closed-form formula exists for the roots of a general polynomial of degree five or higher (Abel–Ruffini theorem).
- However, there are fast iterative algorithms that produce excellent approximations, with errors down to the order of the unit roundoff $u$.
- Method of power iteration
- Forms the basis of advanced methods: orthogonal and QR iterations.
- Simple, but works only marginally well on its own: it finds a single eigenvalue, and convergence can be slow.
- Will be accelerated and improved to become the final QR iteration.
- The sequence of algorithms to understand is: method of power iteration → orthogonal iteration → QR iteration.
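As a concrete reference for the first step in this sequence, here is a minimal power iteration sketch in NumPy (the function name, stopping test, and tolerance are illustrative choices, not from the notes):

```python
import numpy as np

def power_iteration(A, num_iters=200, tol=1e-12, seed=None):
    """Approximate the dominant eigenpair of A by repeated multiplication."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(num_iters):
        y = A @ x
        lam_new = x @ y            # Rayleigh quotient estimate of the eigenvalue
        x = y / np.linalg.norm(y)  # renormalize to avoid overflow/underflow
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            lam = lam_new
            break
        lam = lam_new
    return lam, x

# Example: dominant eigenvalue of a small symmetric matrix
A = np.array([[4.0, 1.0], [1.0, 3.0]])
lam, x = power_iteration(A)
print(lam)  # close to the largest eigenvalue, (7 + sqrt(5))/2 ≈ 4.618
```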
- Computing multiple eigenvalues
- The limitation of the power iteration is that it computes only the eigenvalue of largest magnitude.
- We propose an initial idea for an algorithm to compute all the eigenvalues.
- This is not a practical algorithm and will be replaced by the orthogonal iteration.
- The idea from this section is critical to understand the rest of this chapter. Make sure you understand all the steps and the key ideas.
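The notes develop the construction in the section itself; as one hedged illustration of the central difficulty, naively iterating on a block of vectors makes every column collapse onto the dominant eigenvector, which is why some form of re-orthonormalization is needed:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([3.0, 2.0, 1.0])        # eigenvalues 3, 2, 1; eigenvectors e1, e2, e3
Z = rng.standard_normal((3, 2))     # block of 2 starting vectors

for _ in range(50):
    Z = A @ Z
    Z /= np.linalg.norm(Z, axis=0)  # normalize columns, but no orthogonalization

print(Z)                   # both columns ≈ ±e1: the block has collapsed
print(np.linalg.cond(Z))   # huge condition number: columns nearly dependent
```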
- Angle between subspaces
- Orthogonal iteration
- This algorithm can be used to compute all the eigenvalues of the matrix.
- It extends the power iteration.
- This algorithm will be further refined into the QR iteration algorithm to reduce the computational cost.
- This algorithm is key to understanding how eigenvalues can be computed. Make sure you fully master it.
- Orthogonal iteration algorithm
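A minimal sketch of the orthogonal iteration under the usual formulation $Z_{k+1} = A Q_k$, $Q_{k+1} R_{k+1} = Z_{k+1}$ (names, seed, and iteration count are illustrative):

```python
import numpy as np

def orthogonal_iteration(A, num_iters=500, seed=0):
    """Orthogonal (subspace) iteration: Q_{k+1} R_{k+1} = A Q_k."""
    n = A.shape[0]
    Q = np.linalg.qr(np.random.default_rng(seed).standard_normal((n, n)))[0]
    for _ in range(num_iters):
        Z = A @ Q
        Q, _ = np.linalg.qr(Z)   # re-orthonormalize the block at every step
    T = Q.T @ A @ Q              # converges to (quasi-)upper triangular form
    return Q, T

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
Q, T = orthogonal_iteration(A)
print(np.diag(T))                # approximate eigenvalues, by decreasing |λ|
```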
- Computing eigenvectors using the Schur decomposition
- Convergence of the orthogonal iteration
- Understanding convergence is key to accelerating the algorithm.
- Generally speaking, the rate of convergence is related to ratios of the form $|\lambda_{i+1}| / |\lambda_i|$.
- Accelerating convergence using a shift
- Shifting with $A - \lambda I$ is key to accelerating convergence.
- We briefly explain the idea here and will come back to it later.
- From now on, the focus will be on computing $T_k$ rather than $Q_k$.
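Why shifting helps, in one line (a standard observation, made explicit here): if $Av = \lambda_i v$, then $(A - \sigma I)v = (\lambda_i - \sigma)v$, so a shift by $\sigma$ moves every eigenvalue by $\sigma$, and the relevant convergence ratio becomes

$$\frac{|\lambda_{i+1} - \sigma|}{|\lambda_i - \sigma|},$$

which can be driven close to zero by choosing $\sigma$ near $\lambda_{i+1}$.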
- QR iteration
- This iteration is in fact similar to the orthogonal iteration. The sequence of matrices is the same.
- However, the key difference is that the QR iteration works with $T_k$ directly. As a result, applying a shift $A - \lambda I$ becomes very easy.
- QR iteration is the foundation for eigenvalue computation algorithms.
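A minimal sketch of the unshifted QR iteration: factor $T_k = Q_k R_k$, then recombine in reverse order, $T_{k+1} = R_k Q_k = Q_k^T T_k Q_k$, so each step is a similarity transform (names and iteration count are illustrative):

```python
import numpy as np

def qr_iteration(A, num_iters=500):
    """Unshifted QR iteration: T <- R Q where T = Q R."""
    T = A.astype(float).copy()
    for _ in range(num_iters):
        Q, R = np.linalg.qr(T)
        T = R @ Q            # similarity transform: R Q = Q^T T Q
    return T                 # converges to (quasi-)upper triangular form

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
T = qr_iteration(A)
print(np.round(T, 6))        # eigenvalues appear on the diagonal
```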
- Upper Hessenberg form for the QR iteration
- We can accelerate the QR iteration using two tricks: reduction to upper Hessenberg form and shifting.
- $A$ can be reduced to upper Hessenberg form via $Q^T A Q = H$, where $Q$ is orthogonal.
- Performing a QR iteration with H is much faster than with A.
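For instance, using SciPy's scipy.linalg.hessenberg (a real routine, shown here to illustrate the factorization rather than the notes' own construction):

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

H, Q = hessenberg(A, calc_q=True)        # H = Q^T A Q, H upper Hessenberg
print(np.allclose(Q.T @ A @ Q, H))       # True
print(np.allclose(np.tril(H, -2), 0.0))  # zeros below the first subdiagonal
```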
- QR iteration for upper Hessenberg matrices
- Once the matrix is in upper Hessenberg form, the cost of each QR iteration is reduced from $O(n^3)$ to $O(n^2)$.
- This is a huge reduction in computational cost.
- This makes the QR iteration much more tractable for large matrices.
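A sketch of how one Hessenberg QR step achieves $O(n^2)$: only the $n-1$ subdiagonal entries must be zeroed, one Givens rotation each (this is the standard construction; variable names are mine):

```python
import numpy as np

def hessenberg_qr_step(H):
    """One unshifted QR step T <- R Q for upper Hessenberg H, in O(n^2)."""
    n = H.shape[0]
    H = H.astype(float).copy()
    cs, sn = np.empty(n - 1), np.empty(n - 1)
    # Factorization H = Q R: one Givens rotation per subdiagonal entry.
    for i in range(n - 1):
        a, b = H[i, i], H[i + 1, i]
        r = np.hypot(a, b)
        c, s = (1.0, 0.0) if r == 0.0 else (a / r, b / r)
        cs[i], sn[i] = c, s
        H[i:i + 2, i:] = np.array([[c, s], [-s, c]]) @ H[i:i + 2, i:]
        H[i + 1, i] = 0.0                  # clean up roundoff
    # Form R Q by applying the transposed rotations on the right.
    for i in range(n - 1):
        c, s = cs[i], sn[i]
        H[:i + 2, i:i + 2] = H[:i + 2, i:i + 2] @ np.array([[c, -s], [s, c]])
    return H                               # still upper Hessenberg

# Eigenvalues are preserved: each step is a similarity transform.
H = np.triu(np.random.default_rng(1).standard_normal((5, 5)), -1)
print(np.allclose(np.sort_complex(np.linalg.eigvals(hessenberg_qr_step(H))),
                  np.sort_complex(np.linalg.eigvals(H))))  # True
```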
- Symmetric and unsymmetric QR iteration
- Deflation in the QR iteration
- The QR iteration does not immediately turn A into an upper triangular matrix.
- Instead, we transform A to an upper block triangular matrix. Then we can apply the QR iteration again to the smaller blocks.
- This is how all the eigenvalues can be obtained.
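A sketch of a common deflation test (the tolerance rule below is a standard convention, not necessarily the notes' exact criterion): scan the subdiagonal of $H$ for negligible entries and split the problem there.

```python
import numpy as np

def deflate(H, tol=1e-12):
    """Zero out negligible subdiagonal entries of a Hessenberg matrix H and
    return the index ranges of the resulting diagonal blocks."""
    H = H.copy()
    n = H.shape[0]
    splits = [0]
    for i in range(n - 1):
        # Common criterion: negligible relative to its diagonal neighbors.
        if abs(H[i + 1, i]) <= tol * (abs(H[i, i]) + abs(H[i + 1, i + 1])):
            H[i + 1, i] = 0.0
            splits.append(i + 1)
    splits.append(n)
    blocks = [(splits[j], splits[j + 1]) for j in range(len(splits) - 1)]
    return H, blocks  # QR iteration continues on each block separately
```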
- QR iteration with shift
- We explain how the idea of shifting can be used to accelerate the convergence of QR.
- This allows reducing $A$ very quickly to an upper block triangular form. At this point, deflation can be used to reduce the size of the matrix. The QR iteration process is then repeated with a matrix of size $(n-1) \times (n-1)$.
- Algorithm for QR iteration with shift
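A minimal sketch using the simple Rayleigh-quotient shift $\sigma = (T_k)_{nn}$; practical implementations typically prefer the Wilkinson shift, so treat this choice as illustrative:

```python
import numpy as np

def qr_iteration_shifted(A, num_iters=100, tol=1e-12):
    """Shifted QR iteration: T - sigma*I = Q R, then T <- R Q + sigma*I."""
    T = A.astype(float).copy()
    I = np.eye(T.shape[0])
    for _ in range(num_iters):
        sigma = T[-1, -1]              # Rayleigh-quotient shift (illustrative)
        Q, R = np.linalg.qr(T - sigma * I)
        T = R @ Q + sigma * I          # still similar to A
        if abs(T[-1, -2]) <= tol * (abs(T[-2, -2]) + abs(T[-1, -1])):
            break                      # last row has deflated: T[-1,-1] ≈ eigenvalue
    return T

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
T = qr_iteration_shifted(A)
print(T[-1, -1])  # converged eigenvalue; deflate and repeat on the leading block
```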
- Exact shift
- We investigate what happens if we use an exact shift, that is, we shift using an exact eigenvalue of A.
- In that case, as can be expected, the matrix becomes exactly block upper triangular after a single shifted QR step, and we recover the exact eigenvalue.
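A quick numerical check of this claim (illustrative; numpy.linalg.eigvalsh is used only to obtain an "exact" eigenvalue to shift with):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam = np.linalg.eigvalsh(A)[0]       # an exact eigenvalue (up to roundoff)

Q, R = np.linalg.qr(A - lam * np.eye(3))
T = R @ Q + lam * np.eye(3)          # one QR step with the exact shift
print(np.round(T[-1, :], 8))         # last row ≈ [0, 0, lam]: exact deflation
```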
- QR iteration 2x2 example
- This example illustrates the QR iteration with shift.
- We derive a convergence result for both unsymmetric and symmetric matrices.
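A numerical companion to such an example (illustrative): on a symmetric $2 \times 2$ matrix with the Rayleigh-quotient shift, the subdiagonal entry shrinks roughly cubically per step.

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [1.0, 0.0]])           # eigenvalues 1 ± sqrt(2)
I = np.eye(2)
for k in range(4):
    print(k, abs(T[1, 0]))           # watch the subdiagonal entry collapse
    sigma = T[1, 1]                  # Rayleigh-quotient shift
    Q, R = np.linalg.qr(T - sigma * I)
    T = R @ Q + sigma * I
```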
- Summary of convergence and cost of the QR iteration
- We summarize the key results of this section on computational cost and convergence, for both symmetric and unsymmetric matrices.