Gauss-Seidel typically converges in fewer iterations than Jacobi, but at the cost of additional complexity.

Again, this method belongs to the class of splitting methods: writing $A = L + D + U$, with $D$ the diagonal, $L$ the strictly lower triangular part, and $U$ the strictly upper triangular part, Gauss-Seidel iterates $(D + L)\,x^{(k+1)} = b - U x^{(k)}$.

Algorithm

# Gauss-Seidel sweep, updating x in place so that each row
# immediately uses the components computed earlier in the sweep.
function gauss_seidel!(x, A, b, maxiter)
    n = length(b)
    for k = 1:maxiter
        for i = 1:n
            sigma = 0.0
            for j = 1:n
                if j != i
                    sigma += A[i,j] * x[j]   # x[j] already updated for j < i
                end
            end
            x[i] = (b[i] - sigma) / A[i,i]
        end
    end
    return x
end
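As a quick usage sketch (the 3-by-3 test system below is an illustrative assumption, not from the text):

A = [4.0 1.0 -1.0; 2.0 5.0 1.0; 0.0 -3.0 6.0]   # strictly row diagonally dominant
b = [3.0, 7.0, 2.0]
x = zeros(3)
gauss_seidel!(x, A, b, 50)
@show A * x - b   # residual is near zero after enough sweeps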

Gauss-Seidel overwrites x[i] in place, so each update immediately uses the most recent estimates of the components computed earlier in the sweep, whereas Jacobi computes the entire new iterate xnew from the previous iterate x.
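For contrast, here is a minimal Jacobi sweep in the same style (a sketch, not the textbook's code); the only structural difference is the extra buffer xnew, which makes the n updates within a sweep independent of one another:

function jacobi!(x, A, b, maxiter)
    n = length(b)
    xnew = similar(x)
    for k = 1:maxiter
        for i = 1:n
            sigma = 0.0
            for j = 1:n
                if j != i
                    sigma += A[i,j] * x[j]   # always reads the previous iterate
                end
            end
            xnew[i] = (b[i] - sigma) / A[i,i]
        end
        x .= xnew   # commit the new iterate only after the full sweep
    end
    return x
end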

Convergence of Gauss-Seidel

Theorem 1. If $A$ is strictly row diagonally dominant, that is,

$$|a_{ii}| > \sum_{j \ne i} |a_{ij}| \quad \text{for all } i,$$

then both Jacobi and Gauss-Seidel always converge. An analogous result holds if $A$ is strictly column diagonally dominant.
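For example (a small matrix chosen here for illustration), the matrix

$$A = \begin{pmatrix} 4 & 1 & -1 \\ 2 & 5 & 1 \\ 0 & -3 & 6 \end{pmatrix}$$

is strictly row diagonally dominant, since $4 > |1| + |{-1}|$, $5 > |2| + |1|$, and $6 > |0| + |{-3}|$.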

Theorem 2. If $A$ is a symmetric positive definite matrix, then Gauss-Seidel always converges.
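To see that Theorem 2 covers cases Theorem 1 does not, consider (an example chosen here for illustration)

$$A = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix},$$

which is symmetric positive definite (eigenvalues $4, 1, 1$) but not strictly diagonally dominant, since $2 = 1 + 1$ in every row. Gauss-Seidel converges by Theorem 2, while the Jacobi iteration matrix $-D^{-1}(A - D)$ has spectral radius exactly 1 here, so Jacobi fails to converge.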

  • Gauss-Seidel typically converges faster than Jacobi.
  • In many cases, the number of iterations with GS is about half that of Jacobi. See details in the textbook and the numerical sketch after this list.
  • However, GS is harder to parallelize than Jacobi because each update depends on the components updated earlier in the same sweep.
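As a rough empirical check (a sketch assuming the gauss_seidel! and jacobi! routines above; the tolerance and test system are arbitrary choices):

using LinearAlgebra

# Count sweeps of the given method until the residual drops below tol.
function sweeps_needed(step!, A, b; tol = 1e-8, maxiter = 10_000)
    x = zeros(length(b))
    for k = 1:maxiter
        step!(x, A, b, 1)                 # one sweep of the method
        norm(A * x - b) < tol && return k
    end
    return maxiter
end

A = [4.0 1.0 -1.0; 2.0 5.0 1.0; 0.0 -3.0 6.0]
b = [3.0, 7.0, 2.0]
@show sweeps_needed(jacobi!, A, b)        # typically close to twice the
@show sweeps_needed(gauss_seidel!, A, b)  # Gauss-Seidel count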

Proof of Theorem 1.

Before proving the theorem, we first prove that strictly diagonally dominant matrices cannot be singular. Consider a strictly row diagonally dominant matrix $A$. Assume that $A$ is singular. Then there exists $x \ne 0$ such that $Ax = 0$. This implies, with $D = \operatorname{diag}(a_{11}, \dots, a_{nn})$ (invertible, since strict dominance forces $a_{ii} \ne 0$), that

$$(I - D^{-1}A)\,x = x.$$

So $I - D^{-1}A$ has eigenvalue 1. But from strict row diagonal dominance, we also have that

$$\rho(I - D^{-1}A) \le \|I - D^{-1}A\|_\infty = \max_i \sum_{j \ne i} \frac{|a_{ij}|}{|a_{ii}|} < 1.$$

This is a contradiction. So $A$ cannot be singular.

We get a similar result for a strictly column diagonally dominant matrix. From $Ax = 0$ we get $AD^{-1}(Dx) = 0$, so we show that $I - AD^{-1}$ has eigenvalue 1, with eigenvector $Dx \ne 0$. But $\|I - AD^{-1}\|_1 < 1$ by strict column diagonal dominance, which is a contradiction.
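The two norm bounds are easy to check numerically; here is a sketch reusing the illustrative matrix from above, which happens to be strictly dominant both by rows and by columns, so both norms come out below 1:

using LinearAlgebra

A = [4.0 1.0 -1.0; 2.0 5.0 1.0; 0.0 -3.0 6.0]
D = Diagonal(A)                  # diagonal part of A
@show opnorm(I - D \ A, Inf)     # max row sum of |a_ij| / |a_ii|, < 1 by row dominance
@show opnorm(I - A / D, 1)       # max column sum, < 1 by column dominance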

Let us now prove our result for Gauss-Seidel. With the splitting $A = L + D + U$ from above, the iteration matrix in Gauss-Seidel is

$$M_{GS} = -(D + L)^{-1} U.$$

Consider an eigenvalue $\lambda$ of $M_{GS}$. Then there exists $x \ne 0$ such that

$$-(D + L)^{-1} U x = \lambda x, \quad \text{i.e.,} \quad \bigl(\lambda (D + L) + U\bigr)\,x = 0.$$

So $\lambda (D + L) + U$ is singular. If $|\lambda| \ge 1$, then $\lambda (D + L) + U$ is strictly (row or column) diagonally dominant whenever $A$ is, since multiplying the diagonal and the lower triangular entries by $\lambda$ only strengthens the dominance of the diagonal; for the row case,

$$|\lambda| \sum_{j < i} |a_{ij}| + \sum_{j > i} |a_{ij}| \le |\lambda| \sum_{j \ne i} |a_{ij}| < |\lambda|\,|a_{ii}|.$$

But we showed above that strictly diagonally dominant matrices cannot be singular. This is a contradiction. So

$$|\lambda| < 1, \quad \text{and hence} \quad \rho(M_{GS}) < 1.$$

So Gauss-Seidel converges for strictly (row or column) diagonally dominant matrices.
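To connect the proof back to computation, here is a sketch that forms $M_{GS}$ explicitly for the same illustrative matrix and checks its spectral radius:

using LinearAlgebra

A  = [4.0 1.0 -1.0; 2.0 5.0 1.0; 0.0 -3.0 6.0]
DL = LowerTriangular(A)           # D + L: lower triangle of A, diagonal included
U  = triu(A, 1)                   # strictly upper triangular part
M  = -(DL \ U)                    # Gauss-Seidel iteration matrix M_GS
@show maximum(abs, eigvals(M))    # spectral radius, < 1 as the proof guarantees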