A good method (when **A** is dense and data uncertainty is present or possible) for solving **Ax = b** is to compute the factorization
**PA = LU**, where **P** is a permutation matrix recording the row interchanges, and then solve the two systems **Ly = Pb** and **Ux = y**
(write **Ax = b** so **PA x = Pb** so **LUx = Pb** and put **Ux = y**,
whence **Ly = Pb**). Each of these triangular systems is very easy to
solve, by forward elimination and back substitution, respectively.
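As a concrete illustration, here is a sketch of this procedure in Python. The matrix and right-hand side are made up for the example, and note that SciPy's `lu` uses the convention **A = PLU**, so its **P** is the transpose of the permutation in **PA = LU** above:

```python
import numpy as np
from scipy.linalg import lu

# A hypothetical 3x3 system, just to illustrate the idea.
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([1.0, 2.0, 3.0])

P, L, U = lu(A)   # SciPy convention: A = P L U, so P.T A = L U

def forward_sub(L, rhs):
    # Solve L z = rhs for lower-triangular L (forward elimination).
    n = len(rhs)
    z = np.zeros(n)
    for i in range(n):
        z[i] = (rhs[i] - L[i, :i] @ z[:i]) / L[i, i]
    return z

def back_sub(U, rhs):
    # Solve U x = rhs for upper-triangular U (back substitution).
    n = len(rhs)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (rhs[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

y = forward_sub(L, P.T @ b)   # L y = P.T b  (the permuted right-hand side)
x = back_sub(U, y)            # U x = y
```

Each triangular solve costs only O(n²) operations; the O(n³) work is all in the factorization, which can be reused for additional right-hand sides.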

Finding **A⁻¹** is almost *never* a good idea, because it is
more expensive than factorization (indeed, in the usual case you essentially have to factorize **A** anyway to find **A⁻¹**). Solving the two triangular systems is just as cheap (for a computer) as the matrix multiplication **B b**, where **B** is the inverse of **A**. If you want
to know **A⁻¹** itself (there are statistical interpretations, for
example) then by all means compute it, but don't find **A⁻¹** merely
to solve **Ax = b**.
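A quick sanity check of this advice, on a made-up random system, comparing the factorization route with the explicit inverse:

```python
import numpy as np

rng = np.random.default_rng(0)          # hypothetical random test system
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)

x_solve = np.linalg.solve(A, b)         # factorizes PA = LU, then two triangular solves
x_inv   = np.linalg.inv(A) @ b          # forms the inverse explicitly: more work

# Both agree to rounding error, but solve() does less work and is
# generally at least as accurate as forming the inverse.
```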

We will talk later about solving overdetermined systems **Ax = b**,
where there are more equations than unknowns.

The LAPACK routines for solving linear systems (there are lots)
all allow you to compute the *condition number* of the
system as well (or at least a good approximation of it).
The fundamental inequality is that if **Ax = b** and **A(x + δx) = b + δb**, then

**||δx|| / ||x|| ≤ κ(A) · ||δb|| / ||b||**,

where **κ(A) = ||A|| ||A⁻¹||** is the *condition number* of the
matrix **A**. Note that it depends on the matrix **A** and not on
the method used to solve **Ax = b**. In terms of the singular value
decomposition, **κ₂(A)** is the ratio of the largest singular value to the
smallest, **σ₁/σₙ**; this is infinite if the matrix is singular, and is large if the
matrix is "nearly" singular. In fact, **σₖ** is the 2-norm
distance from **A** to the nearest rank **k−1** matrix, so **σₙ** measures how close **A** is to being singular. See Golub and
Van Loan for details.
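Both the SVD characterization and the inequality can be checked numerically; the nearly singular matrix and the perturbation below are made up for illustration:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])           # nearly singular: large condition number

sigma = np.linalg.svd(A, compute_uv=False)
kappa = sigma[0] / sigma[-1]            # 2-norm condition number: sigma_max / sigma_min
# sigma[-1], the smallest singular value, is the 2-norm distance
# from A to the nearest singular matrix.

# Check the fundamental inequality by perturbing the data b slightly.
b  = np.array([2.0, 2.0001])
db = 1e-8 * np.array([1.0, 0.0])        # small perturbation of the right-hand side

x  = np.linalg.solve(A, b)
dx = np.linalg.solve(A, b + db) - x

rel_change_x = np.linalg.norm(dx) / np.linalg.norm(x)
rel_change_b = np.linalg.norm(db) / np.linalg.norm(b)
# The inequality guarantees: rel_change_x <= kappa * rel_change_b
```

Even a tiny relative change in the data can produce a relative change in the solution that is κ(A) times larger, regardless of how carefully the system is solved.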

Wed Jan 31 11:33:59 EST 1996