
    Iterative methods for eigenvalue computation

    Dec 05, 2024 · 2 min read

    • Motivation of iterative methods for eigenvalue computation
      • We briefly highlight why a different type of method is required for sparse matrices.
    • Key idea of iterative methods for eigenvalue computation
      • The key idea goes back to the previous section, where we learned how to transform a matrix to the upper Hessenberg form.
      • We can calculate the columns of H one by one by following a process similar to Gram-Schmidt.
    • Brief introduction to Conjugate Gradients
      • We use the approach from the previous section to find an approximate solution for sparse linear systems Ax=b.
    • Arnoldi process
      • We use the algorithm from the section above to approximate the eigenvalues of A.
      • Only simple vector operations and sparse matrix-vector products are required, so this approach is computationally very efficient when k is small.
    • Algorithm for the Arnoldi process
      • We provide the details of the algorithm using pseudo-code; a minimal NumPy sketch is also given after this list.
      • See movies of convergence and Arnoldi convergence.key
      • Understanding the convergence of this process is a complex topic in its own right.
    • Krylov subspace
      • This will form the foundation of all the advanced iterative methods in this class; a compact definition is recalled after this list.
    • Connection between Krylov subspace and Arnoldi
      • The orthogonal basis Qk produced by the Arnoldi process (see Arnoldi process) spans the sequence of nested Krylov subspaces K(A, q1, k).
    • Connection between Arnoldi and polynomials of A
      • Using polynomials of A, we provide some arguments as to why the Arnoldi process typically provides a good approximation of some of the eigenvalues of A.
    • Lanczos process
      • We outline the Arnoldi process specialized for symmetric matrices. This is called the Lanczos process.
      • Several steps simplify and the computational cost is reduced.
    • Algorithm for the Lanczos process
      • Pseudo-code for the Lanczos process for symmetric matrices; a minimal NumPy sketch is also given after this list.
    • Computational cost of Arnoldi and Lanczos
      • As expected, Lanczos is computationally cheaper than Arnoldi.
    • Convergence of Lanczos eigenvalues for symmetric matrices
      • For symmetric matrices, we can prove relatively sharp bounds on the error of the Lanczos eigenvalue approximations.
      • They involve Chebyshev polynomials and the Krylov subspace.
      • Results for unsymmetric matrices are much more difficult to derive.
    • Convergence of Lanczos inner eigenvalues
      • We can prove theoretical results for the convergence of inner eigenvalues.
      • Inner eigenvalues tend to converge more slowly than the extremal eigenvalues.
    • Ghost eigenvalues in the Lanczos process
      • Because of the short recurrence used in Lanczos, roundoff errors can cause some eigenvalues to fail to converge properly.
      • These are called “ghost eigenvalues.”
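
    A minimal Arnoldi sketch in Python/NumPy, to make the pseudo-code referenced above concrete. The function name, the dense storage of Q and H, and the use of modified Gram-Schmidt are illustrative choices rather than the course's reference implementation; in practice A is sparse and only matrix-vector products A @ v are used.

    import numpy as np

    def arnoldi(A, q1, k):
        """Build an orthonormal basis Q of the Krylov subspace K(A, q1, k) and the
        (k+1) x k upper Hessenberg matrix H satisfying A @ Q[:, :k] = Q @ H."""
        n = A.shape[0]
        Q = np.zeros((n, k + 1))
        H = np.zeros((k + 1, k))
        Q[:, 0] = q1 / np.linalg.norm(q1)
        for j in range(k):
            v = A @ Q[:, j]                     # sparse matrix-vector product in practice
            for i in range(j + 1):              # orthogonalize against the previous basis vectors
                H[i, j] = Q[:, i] @ v
                v -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(v)
            if H[j + 1, j] == 0.0:              # breakdown: an invariant subspace has been found
                return Q[:, :j + 1], H[:j + 1, :j + 1]
            Q[:, j + 1] = v / H[j + 1, j]
        return Q, H

    The eigenvalues of the leading k x k block of H (the Ritz values) are the Arnoldi approximations to eigenvalues of A; for small k they typically approximate some of the extremal eigenvalues well, for the polynomial reasons sketched in "Connection between Arnoldi and polynomials of A."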
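
    For reference, the Krylov subspace K(A, q1, k) that appears throughout the list above has the standard definition, written here in LaTeX:

    \mathcal{K}(A, q_1, k) = \operatorname{span}\{\, q_1,\ A q_1,\ A^2 q_1,\ \dots,\ A^{k-1} q_1 \,\}

    Equivalently, every vector in K(A, q1, k) is of the form p(A) q1 for some polynomial p of degree at most k - 1, which is the viewpoint used in "Connection between Arnoldi and polynomials of A."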
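
    A corresponding minimal Lanczos sketch for symmetric A, again in NumPy and again only illustrative. The full basis Q is stored here only for readability; the point of the three-term recurrence is that advancing the iteration needs only the last two basis vectors.

    import numpy as np

    def lanczos(A, q1, k):
        """Three-term recurrence for symmetric A: build an orthonormal basis Q of
        K(A, q1, k) and the k x k tridiagonal matrix T = Q.T @ A @ Q
        (diagonal alpha, off-diagonals beta)."""
        n = A.shape[0]
        Q = np.zeros((n, k + 1))
        alpha = np.zeros(k)
        beta = np.zeros(k)
        Q[:, 0] = q1 / np.linalg.norm(q1)
        for j in range(k):
            v = A @ Q[:, j]                     # sparse matrix-vector product in practice
            alpha[j] = Q[:, j] @ v
            v -= alpha[j] * Q[:, j]
            if j > 0:
                v -= beta[j - 1] * Q[:, j - 1]  # only the two most recent basis vectors are needed
            beta[j] = np.linalg.norm(v)
            if beta[j] == 0.0:                  # breakdown: an invariant subspace has been found
                break
            Q[:, j + 1] = v / beta[j]
        T = np.diag(alpha) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
        return Q[:, :k], T

    The eigenvalues of T are the Lanczos approximations to eigenvalues of A. In exact arithmetic the columns of Q remain orthogonal automatically; in floating point they slowly lose orthogonality, which is the mechanism behind the "ghost eigenvalues" mentioned at the end of the list.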
