Math 303: Section 19

Dr. Janssen

\[ \def\R{{\mathbb R}} \def\b{{\mathbf{b}}} \def\x{{\mathbf{x}}} \def\v{{\mathbf{v}}} \def\w{{\mathbf{w}}} \DeclareMathOperator{\nul}{Nul} \]

Finding eigenvalues and eigenvectors is important! But solving the characteristic equation is often unwieldy and/or introduces approximations of its own.

Let \(A\) be an arbitrary \(2\times 2\) matrix with two linearly independent eigenvectors \(\v_1,\v_2\) corresponding to eigenvalues \(\lambda_1,\lambda_2\); we assume \(|\lambda_1| > |\lambda_2|\), so that \(\lambda_1\) is the *dominant* eigenvalue.

Since \(\v_1\) and \(\v_2\) are linearly independent, for any \(\x_0\in \R^2\) there exist \(a_1, a_2\in \R\) for which

\[ \x_0 = a_1 \v_1 + a_2 \v_2; \]

thus

\[ \x_k = A^k \x_0 = a_1 \lambda_1^k \v_1 + a_2 \lambda_2^k \v_2. \]

Divide both sides of \(\x_k = a_1 \lambda_1^k \v_1 + a_2 \lambda_2^k \v_2\) by \(\lambda_1^k\); what happens as \(k\to \infty\)?
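We can watch this happen numerically. Below is a small sketch (the matrix and initial vector are my own choices, not from the notes): \(A\) has eigenvalues \(3\) and \(1\) with eigenvectors \((1,1)\) and \((1,-1)\), and \(\x_0 = \frac12\v_1 + \frac12\v_2\). Dividing \(\x_k\) by \(\lambda_1^k\) leaves \(a_1\v_1 + a_2(\lambda_2/\lambda_1)^k\v_2\), and the second term vanishes as \(k\to\infty\).

```python
import numpy as np

# Illustrative example (my own choice): A has eigenvalues 3 and 1,
# with eigenvectors (1, 1) and (1, -1), so lambda_1 = 3 is dominant.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam1 = 3.0

x = np.array([1.0, 0.0])  # x_0 = (1/2)v_1 + (1/2)v_2, so a_1 = a_2 = 1/2
for k in range(1, 31):
    x = A @ x                 # x_k = A^k x_0
    scaled = x / lam1**k      # divide both sides by lambda_1^k

# scaled = a_1 v_1 + a_2 (lambda_2/lambda_1)^k v_2, and (1/3)^30 is tiny,
# so scaled is essentially a_1 v_1 = (0.5, 0.5)
print(scaled)  # approximately [0.5, 0.5]
```

The printout shows \(\x_k/\lambda_1^k\) settling onto \(a_1\v_1\): the ratio \((\lambda_2/\lambda_1)^k\) is what controls the speed of convergence.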

Assuming \(a_1\ne 0\), why do the vectors \(\x_k\) approach a vector in the *direction* of \(\v_1\) or \(-\v_1\)? What does this tell us about the sequence \(\{\x_k\}\) as \(k\to\infty\)?

- Straightforward to implement!
- Finds the (approximate) eigenvector without needing the associated eigenvalue
- Makes assumptions!
  - \(A\) is diagonalizable
  - \(A\) has a dominant eigenvalue

Let \(A\) be \(n\times n\), \(\lambda\) an eigenvalue, and \(\v\) a corresponding eigenvector.

Explain why \(\lambda = \frac{\lambda (\v\cdot \v)}{\v\cdot \v}\).

Use the previous result to explain why \(\lambda = \frac{(A\v)\cdot \v}{\v\cdot\v}\).

These quotients are called *Rayleigh quotients*.
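As a quick numerical sanity check (matrix and eigenvector are my own illustrative choices): for an eigenpair \((\lambda, \v)\), the Rayleigh quotient \(\frac{(A\v)\cdot\v}{\v\cdot\v}\) recovers \(\lambda\) exactly.

```python
import numpy as np

# Illustrative check (my own example): v = (1, 1) is an eigenvector
# of A with eigenvalue 3, and the Rayleigh quotient recovers that 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])

rq = (A @ v) @ v / (v @ v)  # (Av . v) / (v . v)
print(rq)  # 3.0
```

Here \(A\v = (3,3)\), so \((A\v)\cdot\v = 6\) and \(\v\cdot\v = 2\), giving \(6/2 = 3\).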

1. Select an arbitrary nonzero vector \(\x_0\) as an initial guess at a dominant eigenvector.
2. Let \(\x_1 = A \x_0\) and set \(k = 1\).
3. To keep the magnitudes of successive approximations from becoming excessively large, let \(\alpha_k\) be the entry of \(\x_k\) of largest absolute value, and replace \(\x_k\) by \(\frac{1}{|\alpha_k|} \x_k\).
4. Calculate the Rayleigh quotient \(r_k = \frac{(A\x_k)\cdot \x_k}{\x_k\cdot \x_k}\).
5. Let \(\x_{k+1} = A \x_k\). Increase \(k\) by 1 and repeat steps 3 through 5.

If \(\x_k\) converges to a dominant eigenvector of \(A\), then \(r_k\) converges to the dominant eigenvalue of \(A\).
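The steps above can be sketched in a short function (a minimal sketch; the function name, iteration count, and test matrix are my own choices, not from the notes):

```python
import numpy as np

def power_method(A, x0, num_iters=50):
    """Sketch of the power method with scaling and Rayleigh quotients.

    Returns the final Rayleigh quotient r_k (eigenvalue estimate)
    and the final scaled iterate x_k (eigenvector estimate).
    """
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        x = A @ x                         # x_{k+1} = A x_k
        alpha = x[np.argmax(np.abs(x))]   # entry of largest absolute value
        x = x / abs(alpha)                # scale by 1/|alpha_k|
        r = (A @ x) @ x / (x @ x)         # Rayleigh quotient r_k
    return r, x

# Illustrative run on the 2x2 matrix with eigenvalues 3 and 1:
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
r, v = power_method(A, [1.0, 0.0])
print(r)  # approximately 3, the dominant eigenvalue
print(v)  # approximately [1, 1], a dominant eigenvector
```

Scaling by \(1/|\alpha_k|\) keeps the largest entry of each iterate at magnitude 1, so the iterates neither overflow nor underflow while the direction converges.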