The previous methods (Jacobi, Gauss-Seidel) lack any parameter that can be adjusted to accelerate convergence in difficult cases.

Let us revisit the Gauss-Seidel update, rewrite it in increment form, and then scale this increment by a relaxation parameter $\omega$: the resulting method is SOR, derived below.

SOR: Successive Over-Relaxation

Gauss-Seidel update:

$$x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j<i} a_{ij}\,x_j^{(k+1)} - \sum_{j>i} a_{ij}\,x_j^{(k)}\right)$$

Here is the increment form of this update, where $\hat{x}_i$ denotes the Gauss-Seidel value on the right-hand side above:

$$x_i^{(k+1)} = x_i^{(k)} + \left(\hat{x}_i - x_i^{(k)}\right)$$

We can now define the SOR update:

$$x_i^{(k+1)} = x_i^{(k)} + \omega\left(\hat{x}_i - x_i^{(k)}\right)$$

for some relaxation parameter $\omega$.
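
Note that the same update can be written as a weighted average of the old value and the Gauss-Seidel value:

$$x_i^{(k+1)} = (1-\omega)\,x_i^{(k)} + \omega\,\hat{x}_i,$$

so $\omega = 1$ recovers Gauss-Seidel exactly.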

SOR relaxation parameter

The relaxation parameter $\omega$ can be used for different purposes.

  • $\omega < 1$: if the method is diverging, we can apply a smaller increment (under-relaxation).
  • $\omega > 1$: when possible, we can try to accelerate convergence by applying a larger increment (over-relaxation); a small numerical illustration follows this list.
  • This scaling of the increment is somewhat similar to gradient descent with momentum, if you are familiar with that optimization algorithm.
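
As a small made-up numerical illustration: if $x_i^{(k)} = 1$ and the Gauss-Seidel value is $\hat{x}_i = 2$, the increment is $1$, so

$$\omega = 0.5 \;\Rightarrow\; x_i^{(k+1)} = 1.5, \qquad \omega = 1 \;\Rightarrow\; x_i^{(k+1)} = 2, \qquad \omega = 1.5 \;\Rightarrow\; x_i^{(k+1)} = 2.5.$$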

Code for SOR

for k = 1:maxiter                      # outer iterations (sweeps)
    for i = 1:n                        # loop over the unknowns in order 1..n
        # sum the off-diagonal terms using the most recent values of x
        sigma = 0.0
        for j = 1:n
            if j != i
                sigma += A[i,j] * x[j]
            end
        end
        # sigma is now the Gauss-Seidel value for x[i]
        sigma = (b[i] - sigma) / A[i,i]
        # relaxed update: omega = 1 recovers Gauss-Seidel
        x[i] += omega * (sigma - x[i])
    end
end
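
As a usage sketch, the loop above can be wrapped in a function and run with a few values of omega; the function name sor!, the 3×3 test system, and the parameter values below are illustrative choices, not part of the original notes.

using LinearAlgebra

# Minimal sketch: run SOR for a fixed number of sweeps and return the residual norm.
function sor!(x, A, b, omega, maxiter)
    n = length(b)
    for k = 1:maxiter
        for i = 1:n
            sigma = 0.0
            for j = 1:n
                if j != i
                    sigma += A[i,j] * x[j]
                end
            end
            sigma = (b[i] - sigma) / A[i,i]
            x[i] += omega * (sigma - x[i])
        end
    end
    return norm(b - A*x)
end

# Small diagonally dominant test system (illustrative)
A = [4.0 -1.0 0.0; -1.0 4.0 -1.0; 0.0 -1.0 4.0]
b = [1.0, 2.0, 3.0]

for omega in (0.8, 1.0, 1.2)
    x = zeros(length(b))
    println("omega = $omega, residual after 10 sweeps = ", sor!(x, A, b, omega, 10))
end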

Ordering

As in Gauss-Seidel, the ordering of the unknowns matters in this algorithm. Here are three simple orderings that are commonly used (a code sketch follows the list):

  1. 1 to $n$ (the natural forward order)
  2. 1 to $n$ followed by $n$ to 1 (Symmetric SOR)
  3. A different random permutation at every iteration
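
A minimal sketch of how one sweep can follow any of these orderings: the helper sor_sweep! and the small test system below are illustrative, not part of the original notes; the only change from the earlier code is that the inner loop runs over a given index vector.

using Random   # for randperm

# One SOR sweep over an arbitrary index ordering
function sor_sweep!(x, A, b, omega, order)
    for i in order
        sigma = 0.0
        for j = 1:length(b)
            if j != i
                sigma += A[i,j] * x[j]
            end
        end
        sigma = (b[i] - sigma) / A[i,i]
        x[i] += omega * (sigma - x[i])
    end
end

# Illustrative test data
A = [4.0 -1.0; -1.0 4.0]
b = [1.0, 2.0]
x = zeros(2)
omega = 1.2
n = length(b)

sor_sweep!(x, A, b, omega, 1:n)            # 1. forward sweep: 1 to n
sor_sweep!(x, A, b, omega, [1:n; n:-1:1])  # 2. forward then backward sweep (Symmetric SOR)
sor_sweep!(x, A, b, omega, randperm(n))    # 3. a fresh random permutation each sweep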