
## Consequences of exact arithmetic

The first consequence is the absence of rounding errors; the second is expense. Exact arithmetic costs more than fixed-precision floating point, for several reasons:

1. Fixed-precision floating point can be (and is) implemented in hardware, while exact arithmetic is implemented in software only.
2. Objects that can be of arbitrary size must carry around the size information, which must be checked on every operation.
3. Large objects take more time to handle, intrinsically.
4. There is a strong tendency for the size of numbers to grow exponentially during a computation: when we multiply a d-digit number by an e-digit number, the result is a (d+e)-digit number, so a sequence of n multiplications of d-digit numbers will tend to produce answers with on the order of 2^n d digits. This phenomenon is called expression swell. In the (often encountered) case where the final answer is itself small (say, 0), the swell goes away at the end, and we call the problem "intermediate expression swell". But sometimes the answers themselves are large.
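The doubling described in item 4 is easy to see directly. Here is a quick sketch in Python (whose integers, like Maple's, are exact): repeatedly squaring a 2-digit number doubles the digit count at every step.

```python
# Expression swell: each exact squaring roughly doubles the digit count.
x = 97                            # a 2-digit integer
lens = []
for step in range(6):
    lens.append(len(str(x)))      # number of decimal digits at this step
    x = x * x                     # d digits -> about 2d digits

print(lens)                       # digit counts: 2, 4, 8, 16, 32, 64
```

After six squarings the number has well over a hundred digits; nothing in exact arithmetic stops the growth.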

As an example of the last, we look at a sequence of matrix problems: we solve A x = b, where A is a random n-by-n matrix with 2-digit entries and b = e_1 = [1, 0, ..., 0]^T. We define the procedure
```maple
go := proc(n) local A, x;
  A := linalg[randmatrix](n, n);                     # random n-by-n matrix, 2-digit entries
  x := linalg[linsolve](A, [1, seq(0, i = 2..n)]);   # solve A x = e[1]
  map(length, x);                                    # digit lengths of the solution entries
end:
```
What this does is to tell us the lengths of the entries in the answer. Using this procedure we find that the entries of the answer for a random n-th order problem (with 2-digit entries in A) have about 4n digits (actually not quite that many; perhaps 4n - 1 or 4n - 3). Thus a 30-by-30 problem with 2-digit entries will have a solution whose 30 entries are rational numbers with numerators and denominators each about 70 digits long.
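The same experiment can be sketched in Python using `fractions.Fraction` in place of Maple's exact rationals (the particular digit counts will vary with the random entries; the elimination routine here is a plain textbook one, not Maple's):

```python
from fractions import Fraction
from random import randint, seed

def solve_exact(A, b):
    """Solve A x = b over the rationals by Gaussian elimination
    (row swaps only to avoid zero pivots), in exact arithmetic."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for k in range(n):
        piv = next(i for i in range(k, n) if M[i][k] != 0)  # nonzero pivot
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        s = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / M[i][i]
    return x

seed(1)                                   # reproducible "random" matrix
n = 20
A = [[randint(10, 99) for _ in range(n)] for _ in range(n)]  # 2-digit entries
b = [1] + [0] * (n - 1)                   # the right-hand side e_1
x = solve_exact(A, b)
# total decimal length (numerator plus denominator) of each entry,
# roughly what Maple's length reports for a fraction
print([len(str(xi.numerator)) + len(str(xi.denominator)) for xi in x])
```

Even at n = 20 the entries are exact rationals with denominators of dozens of digits, although the data had only 2 digits each.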

When we think about it, this seems about right. By Cramer's rule, x_i = det(A_i)/det(A), where A_i is A with its i-th column replaced by b; so the size of the answers is governed by determinants of order n (and of the order n-1 minors that arise), and the determinant of an n-by-n matrix of d-digit numbers can be expected to have about nd digits. This number appears in the denominator, while the numerators can be expected to be a bit shorter, I think. In any event, the size of the entries grows very quickly.

This is not a problem with Maple. It is the exact answers that are getting large. You cannot blame Maple if the answers are big. The problem is exactness.

If we evaluate the numbers in floating point, we find that the numbers themselves are of moderate size---the ratio of a 30-digit number to a 31-digit number lies roughly between 0.01 and 1, after all.
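A tiny sketch of that observation in Python (the particular numerator and denominator are made up for illustration):

```python
from fractions import Fraction

# A rational whose numerator has 30 digits and denominator has 31:
num = 10**29 + 12345
den = 10**30 + 67890
r = Fraction(num, den)

# The exact representation is enormous, but its value is utterly ordinary.
print(float(r))          # a moderate number, close to 0.1
```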

What about arbitrary-precision floating-point arithmetic? This represents a compromise: the size of the numbers cannot grow without bound---they are limited to 50 digits, or 100 digits, or whatever limit is set at the start. This removes the exponential growth.
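Python's `decimal` module makes the same compromise, so it can stand in for Maple's `Digits` here: with the working precision capped at 50 digits, the squaring loop from before can no longer swell.

```python
from decimal import Decimal, getcontext

getcontext().prec = 50            # cap working precision at 50 digits
x = Decimal(97)
for _ in range(6):
    x = x * x                     # would double the digits in exact arithmetic

# The coefficient is rounded back to 50 significant digits each time.
print(len(x.as_tuple().digits))
```

The result is no longer exact, of course; that is precisely the trade being made.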

In Maple, floats are much slower than integers; this is because the kernel has been optimized for integer arithmetic. Other arbitrary-precision floating point packages (e.g. David Bailey's Fortran code) are faster. But Maple's arbitrary precision floats represent a good compromise between speed and ease of use.

As another example, consider finding the eigenvalues of the Hilbert matrix of order n, with entries H[i,j] = 1/(i+j-1). This matrix appears in, for example, finite elements where the basis functions are monomials. It is well known that the condition number for the solution of H x = b grows exponentially with n; but for the eigenvalue problem the Hilbert matrix is perfectly conditioned, being symmetric. Thus Matlab will compute the eigenvalues to full machine precision, practically for any n. However, it turns out that once the eigenvalues get smaller than the machine epsilon, Matlab no longer gets them right, but that is only to be expected. Maple, using Digits = 60, can get all the eigenvalues of H_25 correct to 30 places, including the small ones. This is with evalf(Eigenvals(...)), of course.
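The exponential ill-conditioning of the linear-solve problem is already visible in the determinant, which we can compute exactly. A sketch with `fractions.Fraction` (the Hilbert matrix is positive definite, so plain elimination without pivoting is safe):

```python
from fractions import Fraction

def hilbert_det(n):
    """Exact determinant of the n-by-n Hilbert matrix, whose (i, j)
    entry is 1/(i + j + 1) with 0-based indices."""
    M = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]
    det = Fraction(1)
    for k in range(n):
        det *= M[k][k]        # pivots nonzero: H is positive definite
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
    return det

print(float(hilbert_det(10)))  # on the order of 1e-53: vanishingly small
```

A determinant that small, against entries of order 1, is the signature of a matrix that is numerically nearly singular for the solve problem.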

How about the characteristic polynomial? It turns out that the coefficients of the characteristic polynomial of H_25 are enormous rational numbers: some have 120 decimal digits, and the longest have length 458. It took more than 1.5 hours on my DX4/100 notebook to compute this polynomial alone. I didn't try to compute the roots of that polynomial, because as a method of finding eigenvalues it has already failed in comparison with the 13 seconds taken by evalf(Eigenvals(H25)).
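The coefficient growth can be reproduced exactly on a smaller Hilbert matrix (order 8 here; order 25 takes far longer). This sketch uses the Faddeev-LeVerrier recurrence, one standard way to get a characteristic polynomial in exact rational arithmetic; it is not necessarily the algorithm Maple uses.

```python
from fractions import Fraction

def charpoly(A):
    """Coefficients [1, c1, ..., cn] of p(x) = det(x*I - A), computed
    by the Faddeev-LeVerrier recurrence in exact rational arithmetic."""
    n = len(A)
    coeffs = [Fraction(1)]
    N = [row[:] for row in A]                       # N_1 = A
    for k in range(1, n + 1):
        c = -sum(N[i][i] for i in range(n)) / Fraction(k)   # c_k = -tr(N_k)/k
        coeffs.append(c)
        if k < n:
            # N_{k+1} = A (N_k + c_k I)
            Nc = [[N[i][j] + (c if i == j else 0) for j in range(n)]
                  for i in range(n)]
            N = [[sum(A[i][m] * Nc[m][j] for m in range(n)) for j in range(n)]
                 for i in range(n)]
    return coeffs

n = 8
H = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]
p = charpoly(H)
# total decimal length (numerator plus denominator) of each coefficient
print([len(str(c.numerator)) + len(str(c.denominator)) for c in p])
```

Already at order 8 the trailing coefficients are rationals dozens of digits long, so the hours-long computation at order 25 is no surprise.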


Robert Corless
Wed Jan 31 11:33:59 EST 1996