    Computing eigenvalues

    Dec 05, 2024 · 4 min read

    • Why eigenvalues?
      • We give some examples of operations that become very simple to compute when we use eigenvalues.
    • Eigenvalues cannot be computed exactly
      • We show that computing eigenvalues is difficult and cannot be done exactly.
      • However, fast and accurate (if not exact) algorithms do exist, giving excellent approximations with errors down to the unit roundoff error $u$.
    • Method of power iteration
      • Forms the basis of advanced methods: orthogonal and QR iterations.
      • Simple, but it works only marginally well on its own: it finds a single (dominant) eigenvalue and may converge slowly.
      • Will be accelerated and improved to become the final QR iteration.
      • The sequence of algorithms to understand is: method of power iteration → orthogonal iteration → QR iteration. Minimal NumPy sketches of these algorithms are given after this outline.
    • Computing multiple eigenvalues
      • The problem with the previous approach is that it computes only the largest eigenvalue (in magnitude).
      • We propose an initial idea for an algorithm to compute all the eigenvalues.
      • This is not a practical algorithm and will be replaced by the orthogonal iteration.
      • The idea from this section is critical to understand the rest of this chapter. Make sure you understand all the steps and the key ideas.
    • Angle between subspaces
      • This concept will be needed when discussing the orthogonal iteration process.
    • Orthogonal iteration
      • This algorithm can be used to compute all the eigenvalues of the matrix.
      • It extends the power iteration.
      • This algorithm will be further refined into the QR iteration algorithm to reduce the computational cost.
      • This algorithm is key to understanding how eigenvalues can be computed. Make sure you fully master it.
    • Orthogonal iteration algorithm
      • Pseudo-algorithm for the orthogonal iteration process; a small NumPy sketch appears after this outline.
    • Computing eigenvectors using the Schur decomposition
      • The previous algorithms lead to the Schur decomposition.
      • The eigenvalues are on the diagonal of $T$.
      • How can we compute the eigenvectors from the Schur decomposition? A short sketch is given after this outline.
    • Convergence of the orthogonal iteration
      • Understanding convergence is key to accelerating the algorithm.
      • Generally speaking, the rate of convergence is related to ratios of the form $|\lambda_{i+1}| / |\lambda_i|$.
    • Accelerating convergence using a shift
      • Shifting with $A - \lambda I$ is key to accelerating convergence.
      • We briefly explain the idea here and will come back to it later.
      • From now on, the focus will be on computing $T_k$ rather than $Q_k$.
    • QR iteration
      • This iteration is in fact similar to the orthogonal iteration. The sequence of matrices is the same.
      • However, the key difference is that the QR iteration works with $T_k$ directly. As a result, it allows us to perform a shift $A - \lambda I$ very easily.
      • The QR iteration is the foundation of eigenvalue computation algorithms; a bare-bones NumPy sketch appears after this outline.
    • Upper Hessenberg form for the QR iteration
      • We can accelerate the QR iteration by using two tricks: upper Hessenberg form, and applying a shift.
      • $A$ can be turned into a matrix in upper Hessenberg form using $Q^T A Q = H$, where $Q$ is orthogonal.
      • Performing a QR iteration with $H$ is much faster than with $A$; a short example using SciPy's Hessenberg reduction is shown after this outline.
    • QR iteration for upper Hessenberg matrices
      • Once the matrix is in upper Hessenberg form, the cost of one QR iteration is reduced from $O(n^3)$ to $O(n^2)$.
      • This is a huge reduction in computational cost.
      • This makes the QR iteration much more tractable for large matrices.
    • Symmetric and unsymmetric QR iteration
      • We compare the computational cost of the QR iteration for symmetric and unsymmetric matrices.
      • The cost of the reduction to upper Hessenberg form is the same: $O(n^3)$.
      • However, once the matrix is in upper Hessenberg form, the cost per iteration is $O(n^2)$ for unsymmetric matrices, and only $O(n)$ for symmetric matrices.
      • See QR iteration animations and movies.
    • Deflation in the QR iteration
      • The QR iteration does not immediately turn A into an upper triangular matrix.
      • Instead, we transform A to an upper block triangular matrix. Then we can apply the QR iteration again to the smaller blocks.
      • This is how all the eigenvalues can be obtained.
    • QR iteration with shift
      • We explain how the idea of shifting can be used to accelerate the convergence of QR.
      • This allows reducing $A$ very quickly to an upper block triangular form. At this point, deflation can be used to reduce the size of the matrix. The QR iteration process is then repeated with a matrix of size $(n-1) \times (n-1)$.
    • Algorithm for QR iteration with shift
      • This is the practical implementation of the QR iteration with shift; a simplified NumPy version combining the shift with deflation is sketched after this outline.
    • Exact shift
      • We investigate what happens if we use an exact shift, that is, we shift using an exact eigenvalue of A.
      • In that case, as can be expected, matrix A becomes exactly block upper triangular, and we recover the exact eigenvalue.
    • QR iteration 2x2 example
      • This example illustrates the QR iteration with shift.
      • We derive a result for the convergence of unsymmetric and symmetric matrices.
    • Summary of convergence and cost of the QR iteration
      • We summarize the key results from this section for computational cost and convergence for symmetric and unsymmetric matrices.
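The sketches below illustrate the algorithms summarized in this outline. They are minimal NumPy/SciPy implementations for illustration only; the function names, tolerances, and iteration counts are my own choices and not part of the course notes. First, the power iteration: repeatedly apply $A$ to a vector and normalize; the Rayleigh quotient of the iterate estimates the dominant eigenvalue.

```python
import numpy as np

def power_iteration(A, num_iters=200, tol=1e-12, seed=None):
    """Estimate the dominant eigenpair of A by repeated multiplication
    and normalization (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    lam = 0.0
    for _ in range(num_iters):
        z = A @ q                       # apply the matrix
        q_new = z / np.linalg.norm(z)   # normalize to avoid over/underflow
        lam_new = q_new @ A @ q_new     # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            return lam_new, q_new
        lam, q = lam_new, q_new
    return lam, q
```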
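The orthogonal iteration extends the power iteration to a block of $k$ vectors, re-orthonormalizing with a QR factorization at every step. The sketch below assumes real eigenvalues with distinct magnitudes, in which case the diagonal of $Q^T A Q$ approaches the $k$ dominant eigenvalues.

```python
import numpy as np

def orthogonal_iteration(A, k, num_iters=500, seed=None):
    """Block power iteration with re-orthonormalization (illustrative sketch).
    Returns estimates of the k dominant eigenvalues and the final Q."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))  # random orthonormal start
    for _ in range(num_iters):
        Z = A @ Q
        Q, _ = np.linalg.qr(Z)          # re-orthonormalize the block of vectors
    T = Q.T @ A @ Q                     # k x k projection; approaches upper triangular
    return np.diag(T), Q
```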
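Once a Schur decomposition $A = Z T Z^H$ is available, eigenvectors can be recovered by solving one small triangular system per eigenvalue. This sketch (my own, using SciPy's `schur` and `solve_triangular`) assumes distinct eigenvalues so that each triangular system is nonsingular.

```python
import numpy as np
from scipy.linalg import schur, solve_triangular

def eigvecs_from_schur(A):
    """Eigenvectors from the complex Schur form A = Z T Z^H (distinct eigenvalues assumed)."""
    T, Z = schur(np.asarray(A, dtype=complex), output='complex')
    n = T.shape[0]
    V = np.zeros((n, n), dtype=complex)
    for k in range(n):
        v = np.zeros(n, dtype=complex)
        v[k] = 1.0
        if k > 0:
            # (T[:k,:k] - lambda_k I) y = -T[:k,k] gives the leading block of the eigenvector of T
            M = T[:k, :k] - T[k, k] * np.eye(k)
            v[:k] = solve_triangular(M, -T[:k, k])
        x = Z @ v                        # map back: eigenvector of A
        V[:, k] = x / np.linalg.norm(x)
    return np.diag(T), V

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
lam, V = eigvecs_from_schur(A)
print(np.allclose(A @ V, V * lam))       # columns of V are eigenvectors of A
```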
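The basic (unshifted) QR iteration: factor $T_k = Q_k R_k$ and form $T_{k+1} = R_k Q_k$, which is orthogonally similar to $T_k$. This bare-bones version is only meant to show the similarity transformation; practical implementations first reduce to Hessenberg form and use shifts, as described above.

```python
import numpy as np

def qr_iteration(A, num_iters=500):
    """Unshifted QR iteration (illustrative sketch). Under standard assumptions
    (e.g., real eigenvalues of distinct magnitude) T approaches an upper
    triangular matrix with the eigenvalues of A on its diagonal."""
    T = np.array(A, dtype=float)
    for _ in range(num_iters):
        Q, R = np.linalg.qr(T)
        T = R @ Q                        # R Q = Q^T T Q, a similarity transform
    return T
```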
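Reduction to upper Hessenberg form, here using SciPy's `hessenberg` purely for illustration (the course notes derive this reduction with Householder transformations).

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))

# H = Q^T A Q with Q orthogonal; H has zeros below the first subdiagonal.
H, Q = hessenberg(A, calc_q=True)

print(np.allclose(Q @ H @ Q.T, A))       # the similarity holds
print(np.allclose(np.tril(H, -2), 0.0))  # entries below the first subdiagonal vanish
```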
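Finally, a toy shifted QR iteration with deflation. It uses the simple Rayleigh-quotient shift $\mu = H_{mm}$ and deflates whenever the last subdiagonal entry of the active block becomes negligible. It assumes real eigenvalues (e.g., a symmetric $A$) and is in no way a substitute for a production implementation, which uses more robust Wilkinson or Francis shifts.

```python
import numpy as np
from scipy.linalg import hessenberg

def shifted_qr_eigenvalues(A, tol=1e-10, max_iters=10_000):
    """Toy shifted QR iteration with deflation; real eigenvalues assumed."""
    H = hessenberg(np.array(A, dtype=float))   # reduce to Hessenberg form first
    n = H.shape[0]
    eigs = []
    m = n - 1                                  # last index of the active block
    for _ in range(max_iters):
        if m == 0:
            eigs.append(H[0, 0])
            break
        if abs(H[m, m - 1]) < tol * (abs(H[m, m]) + abs(H[m - 1, m - 1])):
            eigs.append(H[m, m])               # converged eigenvalue: deflate
            m -= 1
            continue
        mu = H[m, m]                           # Rayleigh-quotient shift
        Q, R = np.linalg.qr(H[: m + 1, : m + 1] - mu * np.eye(m + 1))
        H[: m + 1, : m + 1] = R @ Q + mu * np.eye(m + 1)
    return np.array(eigs)

rng = np.random.default_rng(1)
S = rng.standard_normal((5, 5))
A = S + S.T                                    # symmetric test matrix
print(np.sort(shifted_qr_eigenvalues(A)))
print(np.sort(np.linalg.eigvalsh(A)))          # reference values
```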
