Solving eigenvalue problems

We earlier mentioned that solving the eigenvalue problem $Ax = \lambda x$ by first finding the characteristic polynomial was a bad idea. In fact, Matlab turns this approach around and finds roots of polynomials by finding the eigenvalues of a companion matrix!
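
For instance, here is a small Matlab sketch of that idea. The cubic is an arbitrary example of mine; compan, roots, and eig are the relevant built-ins, and Matlab's own documentation confirms that roots works via the companion matrix.

c = [1 -6 11 -6];   % coefficients of (x-1)(x-2)(x-3), highest degree first
A = compan(c);      % companion matrix of the polynomial
eig(A)              % the roots 1, 2, 3 (in some order)
roots(c)            % internally does essentially this same computation
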

The most famous example of why polynomials are hard to find roots of is the so-called Wilkinson polynomial (which Wilkinson simply used as the first and supposedly simplest possible test case for a program he was writing for polynomial rootfinding). This polynomial is

$$W(x) = (x-1)(x-2)\cdots(x-20) = x^{20} - 210x^{19} + \cdots + 20!$$

in its expanded form. (That's important, because in the factored form it's easy to find the roots accurately!)
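
One quick way to see this for yourself in Matlab is a sketch like the following (poly and roots are built-ins; the exact size of the error you observe will vary, but it is far larger than roundoff):

c = poly(1:20);          % expanded coefficients of prod(x-k), k = 1..20
r = sort(roots(c));      % recover the roots from the expanded form
max(abs(r - (1:20)'))    % noticeably far from zero, with no explicit perturbation
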

To make the point more fairly than it is usually presented, we scale the polynomial so its roots are on $[-1,1]$, namely

$$q(x) = \prod_{k=-10}^{10} \left( x - \frac{k}{10} \right)$$

(there are 21 roots of this polynomial, so it is a scaled version of the degree-21 Wilkinson polynomial, not the degree-20 one above, but this change is insignificant). The way this is usually presented, with roots running from 1 to 20, makes the sensitivity artificially dramatic. It's still dramatic, here, but the scale-sensitivity has been removed, leaving only the irreducible poor conditioning of the roots.

Consider adding the perturbation $\varepsilon x$ to the polynomial $q$. Then the roots of $q + \varepsilon x$ will change from $r_k$ to $r_k + \varepsilon \Delta_k$, ignoring higher-order effects. The quantities $\Delta_k$ depend on the roots; let us look at the root $r = 1/5$. By implicit differentiation, we see that $\Delta = -r/q'(r)$, and when we evaluate it we find this quantity is more than $10^6$ in magnitude: this means that any perturbation of the polynomial is amplified a millionfold in the resulting perturbation of the roots.
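
This first-order sensitivity is easy to check numerically. A possible Matlab sketch, for that root $r = 1/5$:

k = (-10:10)/10;                     % the 21 roots of q
r = k(13);                           % the root 1/5
qprime = prod(r - k([1:12 14:21]));  % q'(r) = product over the other roots
Delta = -r/qprime                    % magnitude just over 10^6
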

When we try this, using fsolve to find the roots of $p = q + 10^{-6} x$, we find that a large number of them have gone complex, and the changes in the roots are (not quite) a million times larger than the perturbation.

q := product( (x-i/10), i=-10..10 );   # the scaled Wilkinson polynomial
p := q + x/10^6;                       # perturb it by x/10^6
sols := [ fsolve(p, x, complex) ];     # all 21 roots, as a list
rts := [ seq( [Re(sols[i]), Im(sols[i])], i=1..21 ) ];
plot(rts, style=POINT);                # the roots in the complex plane

In contrast, suppose we take a random orthogonal matrix $Q$ and form $A = Q D Q^T$, where $D = \mathrm{diag}(-1, -9/10, \ldots, 9/10, 1)$ has those same 21 numbers as its eigenvalues. Then this is representative of most symmetric matrices having these eigenvalues (we could take $Q$ to be unitary instead, and this would give the complex Hermitian ones as well). Now suppose we perturb this matrix by a matrix $E$ with $\|E\| = 10^{-6}$. What happens to the eigenvalues?

Standard perturbation theory (see e.g. Golub and Van Loan, or Hager) tells us that (because the matrix $Q$ is orthogonal) the change in the eigenvalues is (to first order) bounded by $\|E\| = 10^{-6}$; that is, the eigenvalue problem is perfectly conditioned. This is true of any symmetric matrix, by the way. In fact, any reasonably decent eigenvalue program can get the eigenvalues of this problem to nearly full accuracy, very easily. Try this in Matlab, to see.
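
For example, the experiment might go like this (the particular random $Q$ and $E$ below are one arbitrary choice; any other should behave the same way):

d = (-10:10)'/10;               % the same 21 eigenvalues
n = length(d);
[Q, R] = qr(randn(n));          % random orthogonal matrix
A = Q*diag(d)*Q';               % symmetric, with eigenvalues exactly d
E = randn(n);  E = (E + E')/2;  % random symmetric perturbation
E = 1e-6*E/norm(E);             % scaled so that norm(E) = 10^-6
mu = sort(eig(A + E));
max(abs(mu - sort(d)))          % at most 1e-6: no amplification at all
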

The moral is that it is easier and better to solve eigenvalue problems directly: conversion to a polynomial is a bad idea in general.





Robert Corless
Wed Jan 31 11:33:59 EST 1996