The simplest method is the Jacobi iteration.
- It fits the general framework of splitting methods: write $A = D + (A - D)$, where $D$ is the diagonal of $A$, and use $D$ as the easily invertible part.
- Very easy to implement
- Each iteration is cheap: one matrix–vector product and a diagonal solve.
Code
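The original code listing did not survive; as a minimal NumPy sketch (the function name `jacobi`, the tolerance, and the test system are illustrative, not from the source), the method could be implemented as:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Solve A x = b with the Jacobi iteration.

    Splitting A = D + R with D = diag(A): at each step,
    x_{k+1} = D^{-1} (b - R x_k).
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    d = np.diag(A)               # diagonal part D, stored as a vector
    R = A - np.diag(d)           # off-diagonal remainder
    for _ in range(max_iter):
        x_new = (b - R @ x) / d  # one Jacobi sweep
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# Strictly row diagonally dominant example system
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi(A, b)
```

Note that the loop body never overwrites `x` while `x_new` is being formed, which is exactly what distinguishes Jacobi from Gauss–Seidel.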
Convergence of Jacobi
If $A$ is strictly row diagonally dominant, i.e. $|a_{ii}| > \sum_{j \ne i} |a_{ij}|$ for all $i$,
then Jacobi converges. We have a similar result if $A$ is strictly column diagonally dominant, i.e. $|a_{jj}| > \sum_{i \ne j} |a_{ij}|$ for all $j$.
- Jacobi is very easy to implement on parallel computers because $D$ is diagonal (or block diagonal).
- The solution step is parallel, i.e., the entries of $x^{(k+1)}$ can be computed concurrently using independent computing cores:
$$x_i^{(k+1)} = \frac{1}{a_{ii}} \Big( b_i - \sum_{j \ne i} a_{ij}\, x_j^{(k)} \Big), \qquad i = 1, \dots, n.$$
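A sketch of this independence (NumPy vectorization stands in for independent cores; in real parallel code each core would own a block of indices):

```python
import numpy as np

# Illustrative system, not from the source
A = np.array([[10.0, 2.0, 1.0],
              [ 3.0, 9.0, 2.0],
              [ 1.0, 1.0, 8.0]])
b = np.array([7.0, 8.0, 6.0])
x = np.zeros(3)              # current iterate x^(k)

d = np.diag(A)
R = A - np.diag(d)

# Each entry x_new[i] depends only on the OLD iterate x, never on
# other entries of x_new, so all n entries can be computed concurrently.
x_new = (b - R @ x) / d

# Equivalent loop form: the iterations of this loop are independent
# and could each run on a separate core.
x_check = np.array([(b[i] - R[i] @ x) / d[i] for i in range(3)])
```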
Proof
The iteration matrix in Jacobi is
$$M_J = I - D^{-1}A = -D^{-1}(L + U),$$
where $A = D + L + U$ with $D$ the diagonal and $L$, $U$ the strictly lower and upper triangular parts; the entries of $M_J$ are $-a_{ij}/a_{ii}$ for $j \ne i$ and zero on the diagonal.
Assume that $A$ is row diagonally dominant. Then from the definition of the operator norm, we find that
$$\|M_J\|_\infty = \max_i \sum_{j \ne i} \frac{|a_{ij}|}{|a_{ii}|} < 1.$$
This proves the convergence of Jacobi, since
$$\rho(M_J) \le \|M_J\|_\infty < 1.$$
Assume now that $A$ is column diagonally dominant. Then:
$$\|D M_J D^{-1}\|_1 = \max_j \sum_{i \ne j} \frac{|a_{ij}|}{|a_{jj}|} < 1.$$
But $M_J$ and $D M_J D^{-1}$ are similar, so they have the same eigenvalues.
So
$$\rho(M_J) = \rho(D M_J D^{-1}) \le \|D M_J D^{-1}\|_1 < 1,$$
and Jacobi converges.
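A quick numerical check of the two norm bounds (the matrices are examples chosen here, not from the source):

```python
import numpy as np

def jacobi_matrix(A):
    """Jacobi iteration matrix M_J = I - D^{-1} A (zero diagonal)."""
    return np.eye(len(A)) - np.diag(1.0 / np.diag(A)) @ A

# Strictly ROW diagonally dominant example
A_row = np.array([[5.0, 1.0, 2.0],
                  [1.0, 4.0, 1.0],
                  [2.0, 1.0, 6.0]])
norm_inf = np.linalg.norm(jacobi_matrix(A_row), np.inf)  # max abs row sum

# Strictly COLUMN (but not row) diagonally dominant example
A_col = np.array([[4.0, 3.0, 2.0],
                  [1.0, 5.0, 1.0],
                  [1.0, 1.0, 4.0]])
D = np.diag(np.diag(A_col))
M_J = jacobi_matrix(A_col)
# The similar matrix D M_J D^{-1} has entries -a_ij / a_jj for i != j
norm_1 = np.linalg.norm(D @ M_J @ np.linalg.inv(D), 1)   # max abs column sum
```

Both norms come out strictly below 1, matching the two halves of the proof.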
There is also a proof that uses the Gershgorin Circle Theorem.
Gershgorin Circle Theorem. For any complex $n \times n$ matrix $A$, each eigenvalue of $A$ lies within at least one of the Gershgorin discs in the complex plane. The $i$-th Gershgorin disc is centered at $a_{ii}$ with radius $R_i = \sum_{j \ne i} |a_{ij}|$. Mathematically, this is expressed as:
$$|\lambda - a_{ii}| \le R_i \quad \text{for at least one } i \in \{1, \dots, n\},$$
where $\lambda$ represents an eigenvalue of $A$.
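The theorem can be checked numerically; a small sketch (the matrix below is an arbitrary example):

```python
import numpy as np

# Arbitrary complex example matrix
A = np.array([[4.0 + 1j, 1.0,       0.5],
              [0.3,      -2.0,      1.0],
              [0.2,       0.4,      3.0 + 2j]])

centers = np.diag(A)                                  # a_ii
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)   # R_i = sum_{j != i} |a_ij|

# Every eigenvalue must lie in at least one disc |lambda - a_ii| <= R_i
eigs = np.linalg.eigvals(A)
in_some_disc = [any(abs(lam - centers[i]) <= radii[i] + 1e-9
                    for i in range(len(A)))
                for lam in eigs]
```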
Convergence proof using Gershgorin's Theorem. $M_J = I - D^{-1}A$ is the iteration matrix in Jacobi. For the Jacobi method to converge, the spectral radius $\rho(M_J)$ must be less than 1.
Applying Gershgorin's Theorem. The diagonal elements of $M_J$ are zero, and the off-diagonal elements are $-a_{ij}/a_{ii}$. The sum of the absolute values of the off-diagonal elements in the $i$-th row is:
$$R_i = \sum_{j \ne i} \frac{|a_{ij}|}{|a_{ii}|}.$$
If $A$ is strictly row diagonally dominant, then:
$$\sum_{j \ne i} |a_{ij}| < |a_{ii}| \quad \text{for all } i.$$
This implies:
$$R_i = \sum_{j \ne i} \frac{|a_{ij}|}{|a_{ii}|} < 1.$$
Therefore, each Gershgorin disc for $M_J$ is centered at zero with a radius less than 1. Consequently, all eigenvalues of $M_J$ lie within the unit circle.
Since all eigenvalues of $M_J$ have magnitudes less than 1, the spectral radius $\rho(M_J) < 1$. This ensures that the Jacobi iteration converges for any initial guess $x^{(0)}$.
The same proof applies for column diagonally dominant matrices: apply Gershgorin's theorem in its column-sum form (i.e., to the transpose) to the similar matrix $D M_J D^{-1}$, whose $j$-th column sum is $\sum_{i \ne j} |a_{ij}|/|a_{jj}| < 1$.
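Summing up, the conclusion $\rho(M_J) < 1$ can be verified directly for a strictly diagonally dominant example matrix (chosen here for illustration):

```python
import numpy as np

# Strictly diagonally dominant example
A = np.array([[8.0, 2.0, 1.0],
              [1.0, 6.0, 2.0],
              [2.0, 1.0, 7.0]])

# Jacobi iteration matrix: zero diagonal, off-diagonal entries -a_ij / a_ii
M_J = np.eye(3) - np.diag(1.0 / np.diag(A)) @ A

# Spectral radius: by the Gershgorin argument above, every eigenvalue of
# M_J lies in a disc centered at 0 with radius < 1
rho = max(abs(np.linalg.eigvals(M_J)))
```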