next up previous
Next: Eigenvalue problems Up: Summary of relevant Previous: Cramer's rule

Gaussian elimination as a matrix factorization

You should have learned the algorithm of Gaussian elimination. If you took a numerical analysis class, you should also have learned that it can be usefully viewed as a matrix factorization. (To be perfectly honest, that never penetrated my own mind until I started teaching numerical methods classes---it's one thing to watch it on the blackboard, another to program it, and still another to try to teach it.) This fact is worth knowing, so here is an example.

with(linalg):
A := randmatrix(4,4);

In Gaussian elimination, we want to use the pivot A[1,1] (which happened to be 85 in this example) to eliminate everything below it. To do it as an elementary matrix operation, we create a 4 by 4 lower triangular elementary matrix (i.e. a matrix that has only one column different from the identity). We choose that column to be exactly the elements of the first column of A, divided by the pivot A[1,1].

L1 := matrix(4,4,0):
for i to 4 do L1[i,1] := A[i,1]/A[1,1]; od:
for i to 4 do L1[i,i] := 1; od:

The inverse of this matrix is nearly the same matrix, with just the signs below the diagonal changed (this is true of all elementary matrices).
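The sign-change claim is easy to check numerically. Here is a small Python sketch (the multiplier values are made up for illustration): it builds an elementary matrix E shaped like L1 above and verifies that flipping the signs of its below-diagonal entries gives its inverse.

```python
# Sketch: an elementary matrix E (identity except for multipliers in one
# column) is inverted by flipping the signs of those multipliers.
# The multiplier values below are made up for illustration.

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 4
multipliers = [0.5, -2.0, 3.0]      # entries below the pivot (illustrative)
E = identity(n)
Einv = identity(n)
for i, m in enumerate(multipliers, start=1):
    E[i][0] = m                     # column 1 carries the multipliers
    Einv[i][0] = -m                 # claimed inverse: same matrix, signs flipped

print(matmul(E, Einv) == identity(n))   # True
```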


When we do Gaussian elimination we add -A[2,1]/A[1,1] times the first row of A to the second row; likewise we add -A[3,1]/A[1,1] times the first row to the third row; and -A[4,1]/A[1,1] times the first row to the last row. Inspection of the matrix multiplication below shows that this is exactly what we are doing, in matrix multiplication rather than row operations. This is an important point (the key idea of the whole works), so you should work it through by hand until you are quite convinced that this multiplication really is doing all three row operations.

evalm( L1^(-1) &* A );

A1 := ":
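If you want to experiment outside Maple, the same first elimination step can be sketched in Python. The 4 by 4 matrix here is an illustrative stand-in for randmatrix's output, and Fraction keeps the arithmetic exact, as Maple's would be. The sketch checks that one multiplication by the inverse elementary matrix gives exactly the same result as doing the three row operations directly.

```python
# Illustrative Python version of the first elimination step. The matrix A is
# a made-up 4x4 example (standing in for Maple's randmatrix output).
from fractions import Fraction as F

A = [[F(2), F(1), F(1), F(0)],
     [F(4), F(3), F(3), F(1)],
     [F(8), F(7), F(9), F(5)],
     [F(6), F(7), F(9), F(8)]]
n = 4

# L1^(-1): identity with -A[i,1]/A[1,1] below the diagonal in column 1.
L1inv = [[F(1) if i == j else F(0) for j in range(n)] for i in range(n)]
for i in range(1, n):
    L1inv[i][0] = -A[i][0] / A[0][0]

# One matrix multiplication ...
prod = [[sum(L1inv[i][k] * A[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)]

# ... versus the three row operations done directly.
rowops = [row[:] for row in A]
for i in range(1, n):
    m = A[i][0] / A[0][0]
    rowops[i] = [A[i][j] - m * A[0][j] for j in range(n)]

print(prod == rowops)                              # True
print(all(prod[i][0] == 0 for i in range(1, n)))   # True: column 1 cleared
```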
Now we do the second column, in a similar way. The pivot is now A1[2,2].
L2 := matrix(4,4,0):
for i from 2 to 4 do L2[i,2] := A1[i,2]/A1[2,2]; od:
L2[1,1] := 1;

for i from 3 to 4 do L2[i,i] := 1; od:

Again the inverse just changes the signs of the below-diagonal entries of the elementary matrix, because the inverse row operation subtracts the same multiple of the 2nd row instead of adding it.


A2 := evalm( L2^(-1) &* A1 );

We are now ready for the last one.

L3 := matrix(4,4,0):
for i to 4 do L3[i,i] := 1; od:
L3[4,3] := A2[4,3]/A2[3,3];


evalm( L3^(-1) &* A2);

Note that the result is upper triangular; we are done.

U := ":
evalm( L1 &* L2 &* L3 &* U);


We have thus factored A = L1 L2 L3 U. But,

evalm( L1 &* L2 &* L3 );

so the product L1 L2 L3 is itself unit lower triangular.

L := ":
evalm( A - L &* U );

The result is the zero matrix, confirming that A = LU.
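The whole procedure condenses into a short loop: the multipliers computed at each step drop straight into L, one column at a time. A Python sketch of this (again with a made-up 4 by 4 matrix, assumed to need no row exchanges):

```python
# The whole factorization as one loop: at step k, the multipliers that clear
# column k become the entries of column k of L. The matrix is a made-up
# example, assumed to need no row exchanges.
from fractions import Fraction as F

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[F(2), F(1), F(1), F(0)],
     [F(4), F(3), F(3), F(1)],
     [F(8), F(7), F(9), F(5)],
     [F(6), F(7), F(9), F(8)]]
n = 4

U = [row[:] for row in A]        # will become upper triangular
L = [[F(1) if i == j else F(0) for j in range(n)] for i in range(n)]
for k in range(n - 1):
    for i in range(k + 1, n):
        m = U[i][k] / U[k][k]    # multiplier, as in L1, L2, L3 above
        L[i][k] = m              # the multipliers ARE the entries of L
        U[i] = [U[i][j] - m * U[k][j] for j in range(n)]

print(matmul(L, U) == A)                                      # True: A = L U
print(all(U[i][j] == 0 for i in range(n) for j in range(i)))  # True: U is upper triangular
```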

The trick with row exchanges is also important, but completely unconvincing unless you work it through yourself. You should prove to yourself that multiplying by a permutation matrix (i.e. the identity matrix with two rows exchanged) simply exchanges rows. Thus you can encode the action of row exchanges also as matrix multiplication; by a rather nifty bit of magic these row exchanges can all be placed at the front. Work through the details yourself. The end result is that any nonsingular matrix A can be written as PA = LU, where P is some permutation of the identity matrix, L is unit lower triangular, and U is upper triangular. We will return to this when we study the Row Echelon Decomposition.
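A minimal Python sketch of these claims, using an illustrative 3 by 3 matrix whose (1,1) entry is zero so that a row exchange is forced:

```python
# P is the identity with its first two rows exchanged; P*A performs the same
# exchange on the rows of A. The matrix A is made up for illustration: its
# (1,1) entry is 0, so elimination cannot start without the swap. After the
# swap, PA = LU with no further exchanges.
from fractions import Fraction as F

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[F(0), F(1), F(1)],
     [F(2), F(3), F(4)],
     [F(4), F(9), F(14)]]
P = [[F(0), F(1), F(0)],     # identity with rows 1 and 2 exchanged
     [F(1), F(0), F(0)],
     [F(0), F(0), F(1)]]
n = 3

PA = matmul(P, A)
print(PA == [A[1], A[0], A[2]])     # True: rows 1 and 2 of A were swapped

# Eliminate on PA, whose pivots are now nonzero.
U = [row[:] for row in PA]
L = [[F(1) if i == j else F(0) for j in range(n)] for i in range(n)]
for k in range(n - 1):
    for i in range(k + 1, n):
        m = U[i][k] / U[k][k]
        L[i][k] = m
        U[i] = [U[i][j] - m * U[k][j] for j in range(n)]

print(matmul(L, U) == PA)           # True: P A = L U
```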


Robert Corless
Wed Jan 31 11:33:59 EST 1996