Let's use a different norm compared to the previous derivation. This derivation applies only to matrices that are symmetric positive definite.
We will consider the $A$-norm:
$$\|v\|_A = \sqrt{v^T A v}.$$
If $A$ is symmetric positive definite, then this is a norm.
If we look for a solution whose error is minimal in that norm over the Krylov subspace, we obtain a simple equation. The solution $x_k \in \mathcal{K}_k(A, b)$ minimizes
$$\|x - x_k\|_A,$$
where $x = A^{-1} b$ is the exact solution.
We can prove that the solution is
$$x_k = Q_k T_k^{-1} (\|b\|_2\, e_1).$$
We recognize $T_k = Q_k^T A Q_k$ from Lanczos. This is the same tridiagonal matrix. We project using $Q_k$ and solve the resulting $k \times k$ system.
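To make this concrete, here is a minimal numpy sketch of the Lanczos iteration (the function name `lanczos` and the random test matrix are our assumptions, not from the notes). It checks that the columns of $Q_k$ are orthonormal and that $Q_k^T A Q_k$ numerically reproduces the tridiagonal $T_k$ built from the recurrence coefficients.

```python
import numpy as np

def lanczos(A, b, k):
    """Lanczos iteration: orthonormal Q (n x k) spanning K_k(A, b),
    and tridiagonal T (k x k) with Q^T A Q = T."""
    n = len(b)
    Q = np.zeros((n, k))
    alpha = np.zeros(k)       # diagonal of T
    beta = np.zeros(k - 1)    # off-diagonal of T
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]   # three-term recurrence
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j + 1 < k:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return Q, T

rng = np.random.default_rng(0)
n, k = 50, 8
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # symmetric positive definite test matrix
b = rng.standard_normal(n)
Q, T = lanczos(A, b, k)
print(np.allclose(Q.T @ Q, np.eye(k)))   # columns are orthonormal
print(np.allclose(Q.T @ A @ Q, T))       # projected matrix is T_k
```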
Proof. Write $x_k = Q_k y$ with $y \in \mathbb{R}^k$. The function to minimize is
$$\|x - Q_k y\|_A^2 = (x - Q_k y)^T A (x - Q_k y).$$
Using our previous reasoning for least-squares, we find that the minimizer satisfies the normal equations
$$(Q_k^T A Q_k)\, y = Q_k^T A x.$$
The final equation is:
$$T_k\, y = Q_k^T A x.$$
Because we are using the $A$-norm, we are able to get rid of the unknown $x$ and have $b$ instead: $Ax = b$, so $Q_k^T A x = Q_k^T b = \|b\|_2\, e_1$ (recall that $q_1 = b / \|b\|_2$ and the columns of $Q_k$ are orthonormal). This leads to:
$$T_k\, y = \|b\|_2\, e_1, \qquad x_k = Q_k y = Q_k T_k^{-1}(\|b\|_2\, e_1).$$
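The optimality claim can be checked numerically. The sketch below (our construction, not from the notes) builds an orthonormal Krylov basis by QR instead of Lanczos for brevity, solves the projected system, and confirms that no other vector in the subspace beats $x_k$ in the $A$-norm.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 40, 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # symmetric positive definite
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)        # exact solution of Ax = b

# Orthonormal basis for K_k(A, b): QR of [b, Ab, ..., A^{k-1} b]
K = np.empty((n, k))
v = b.copy()
for j in range(k):
    K[:, j] = v
    v = A @ v
Q, _ = np.linalg.qr(K)

# Galerkin system (Q^T A Q) y = Q^T b, then x_k = Q y
y = np.linalg.solve(Q.T @ A @ Q, Q.T @ b)
xk = Q @ y

a_norm = lambda v: np.sqrt(v @ (A @ v))

# x_k has the smallest A-norm error among vectors in the subspace
for _ in range(5):
    other = Q @ rng.standard_normal(k)
    assert a_norm(x - xk) <= a_norm(x - other) + 1e-10
```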
Conjugate gradients: version 1
- Compute $Q_k$ and $T_k$ using Lanczos, then solve $T_k y = \|b\|_2 e_1$ and form $x_k = Q_k y$ (a sketch follows this list).
- Space cost: $O(nk)$. This is required because of the last step, $x_k = Q_k y$, which requires saving all of $Q_k$.
- Time cost: $O(k \cdot \text{nnz}(A) + nk)$: $k$ matrix-vector products with $A$, an $O(k)$ tridiagonal solve, and the $O(nk)$ product $Q_k y$.
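Here is a minimal, dense numpy transcription of version 1 (illustrative only; `cg_version1` and the random test matrix are ours). The array `Q` is the $O(nk)$ storage, and the final product `Q @ y` is the step that forces keeping it.

```python
import numpy as np

def cg_version1(A, b, k):
    """CG 'version 1': run k steps of Lanczos, solve the k x k
    tridiagonal system T y = ||b|| e_1, and return x_k = Q y."""
    n = len(b)
    Q = np.zeros((n, k))      # O(nk) storage: all Lanczos vectors are kept
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]                       # one matvec per step
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j + 1 < k:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    e1 = np.zeros(k)
    e1[0] = np.linalg.norm(b)
    y = np.linalg.solve(T, e1)   # dense solve for simplicity; O(k) if tridiagonal
    return Q @ y                 # the O(nk) step that requires saving Q

rng = np.random.default_rng(2)
n = 60
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # symmetric positive definite
b = rng.standard_normal(n)
xk = cg_version1(A, b, 20)
print(np.linalg.norm(A @ xk - b) / np.linalg.norm(b))   # small relative residual
```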
This process can be made much more efficient. In particular, the space cost can be reduced to $O(n)$. The time cost can be reduced as well but will remain $O(k \cdot \text{nnz}(A))$.
But it takes some insight to make it happen.
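For reference, here is the classical short-recurrence form that this insight leads to, written as a numpy sketch (the well-known textbook loop, not the derivation these notes build up to). It keeps only a few length-$n$ vectors, so the space cost is $O(n)$.

```python
import numpy as np

def cg(A, b, iters):
    """Classical conjugate gradients (short recurrences).
    Space: a handful of length-n vectors, i.e. O(n).
    Time: one matvec with A per iteration, so O(iters * nnz(A))."""
    x = np.zeros_like(b)
    r = b.copy()                   # residual r = b - A x
    p = r.copy()                   # search direction
    rr = r @ r
    for _ in range(iters):
        Ap = A @ p
        step = rr / (p @ Ap)       # optimal step length along p
        x += step * p
        r -= step * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p  # new A-conjugate direction
        rr = rr_new
    return x
```

Each iteration costs one matrix-vector product plus a few vector updates, which is where the $O(k \cdot \text{nnz}(A))$ time bound comes from.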