Intuition:

  1. Stretching and shrinking: Imagine the matrix $A$ as a transformation that stretches or shrinks space along different directions. The eigenvectors represent these special directions, and the eigenvalues represent the amount of stretching or shrinking.
  2. Dominant direction: The eigenvector corresponding to the largest-magnitude eigenvalue represents the direction of maximum stretching.
  3. Repeated application: As we apply $A$ repeatedly (forming $A^k$), the stretching along the dominant direction becomes more and more pronounced compared to the other directions.
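Point 3 can be seen numerically. A minimal NumPy sketch (the diagonal matrix and starting vector are made-up examples):

```python
import numpy as np

# Made-up example: A stretches by 3 along [1, 0] and shrinks by 0.5 along [0, 1].
A = np.diag([3.0, 0.5])
v = np.array([1.0, 1.0])        # equal mix of both directions

for _ in range(10):
    v = A @ v
    v /= np.linalg.norm(v)      # renormalize: track only the direction

print(v)                        # the [1, 0] component dominates
```

After 10 applications the second component has shrunk by a factor of $(0.5/3)^{10}$ relative to the first, so the direction is essentially $[1, 0]$.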

Derivation:

The approach goes back to the fundamental motivation behind eigenvalues: if $A = X \Lambda X^{-1}$ is diagonalizable, then

$$A^k = X \Lambda^k X^{-1}.$$

We can write this product using the outer-product form of matrix-matrix products:

$$A^k = \sum_{j=1}^n \lambda_j^k \, x_j y_j^*,$$

where $x_j$ is the $j$-th column of $X$ and $y_j^*$ is the $j$-th row of $X^{-1}$.

Assume that

$$|\lambda_1| > |\lambda_2| \ge \cdots \ge |\lambda_n|.$$

Then:

$$A^k = \lambda_1^k \Big( x_1 y_1^* + \sum_{j=2}^n \Big(\frac{\lambda_j}{\lambda_1}\Big)^k x_j y_j^* \Big),$$

and the terms with $j \ge 2$ decay like $|\lambda_j/\lambda_1|^k$.

The range of $A^k$ is nearly $\operatorname{span}(x_1)$ for large $k$.

Pick a random vector $q_0$ with $\|q_0\|_2 = 1$ and $y_1^* q_0 \neq 0$.

We have that $\operatorname{span}(A^k q_0)$ simply converges to the span of $x_1$.
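A quick numerical check of this claim, using a made-up $2 \times 2$ matrix with eigenvalues 5 and 2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up test matrix with a well-separated dominant eigenvalue (5 vs. 2).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
evals, evecs = np.linalg.eig(A)
i = np.argmax(np.abs(evals))
x1 = evecs[:, i]                 # unit dominant eigenvector from eig

# Random unit starting vector (almost surely has a component along x1).
q0 = rng.standard_normal(2)
q0 /= np.linalg.norm(q0)

qk = np.linalg.matrix_power(A, 30) @ q0
qk /= np.linalg.norm(qk)

# span(qk) ~ span(x1): the absolute inner product approaches 1.
print(abs(np.dot(qk, x1)))
```

The absolute value of the inner product is used because $A^k q_0$ may align with $-x_1$; only the span matters.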

Algorithm pseudo-code:

while not_converged:
    zk = A * qk           # qk starts as a random unit vector q0
    ev = dot(zk, qk)      # eigenvalue estimate (Rayleigh quotient, norm(qk) == 1)
    qk = zk / norm(zk)
  • ev converges to the dominant eigenvalue $\lambda_1$.
  • qk does not necessarily converge, but span(qk) converges to span($x_1$).
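Assuming $A$ is real with a simple, real dominant eigenvalue, the pseudo-code above can be fleshed out into a runnable NumPy sketch (the function name, tolerance, and iteration cap are my own choices):

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_iter=1000, seed=0):
    """Return (dominant eigenvalue estimate, unit vector qk)."""
    rng = np.random.default_rng(seed)
    qk = rng.standard_normal(A.shape[0])
    qk /= np.linalg.norm(qk)             # random unit starting vector q0
    ev = 0.0
    for _ in range(max_iter):
        zk = A @ qk
        ev_new = np.dot(zk, qk)          # Rayleigh quotient, since norm(qk) == 1
        qk = zk / np.linalg.norm(zk)
        if abs(ev_new - ev) < tol:       # stop when the estimate stabilizes
            return ev_new, qk
        ev = ev_new
    return ev, qk

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])               # made-up example; eigenvalues 5 and 2
ev, qk = power_iteration(A)
print(ev)                                # close to 5
```

Note that the loop only ever multiplies by $A$ and normalizes; $A^k$ is never formed explicitly.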

We can make a more precise statement. Denote by

$$\lambda_1 = e^{i\alpha} |\lambda_1|$$

for a general complex eigenvalue, and let $x_1$ be a unit eigenvector associated with $\lambda_1$.

Then, because of the step qk = zk / norm(zk), we get

$$q_k = e^{ik\alpha} x_1 + O\big( |\lambda_2/\lambda_1|^k \big).$$

So $|x_1^* q_k|$ converges to $1$.
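The $O(|\lambda_2/\lambda_1|^k)$ rate can be observed numerically: the component of qk orthogonal to $x_1$ should shrink by a factor of roughly $|\lambda_2/\lambda_1|$ per iteration. A sketch with a made-up matrix where $\lambda_2/\lambda_1 = 2/5$:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])              # eigenvalues 5 and 2, so |l2/l1| = 0.4
evals, evecs = np.linalg.eig(A)
i = np.argmax(np.abs(evals))
x1 = evecs[:, i]                        # unit dominant eigenvector

qk = np.array([1.0, 0.0])               # starting vector with a component along x1
errs = []
for _ in range(15):
    zk = A @ qk
    qk = zk / np.linalg.norm(zk)
    # Norm of the component of qk orthogonal to x1 (~ sin of the angle).
    errs.append(np.linalg.norm(qk - np.dot(qk, x1) * x1))

ratios = [errs[j + 1] / errs[j] for j in range(len(errs) - 1)]
print(ratios[-1])                       # approaches |l2/l1| = 0.4
```

The successive error ratios settle near 0.4, matching the predicted per-step contraction factor.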

Intuitive analogy:

  • Each eigenvector is scaled by its corresponding eigenvalue when multiplied by A.
  • The largest eigenvalue causes the fastest growth.
  • Over many iterations, the component along the dominant eigenvector outgrows all others.
  • We can think of it like repeatedly sieving sand through increasingly fine meshes. Eventually, only the largest particles (analogous to the dominant eigenvector) remain on the sieve.
