Gaussian Elimination with Pivoting
Addressing numerical instability, avoiding division by zero, and analyzing accuracy.
Lecture Goals
1. Identify why basic GE is "naive" (division by zero, roundoff errors).
2. Propose strategies: Partial Pivoting, Full Pivoting, Scaled Partial Pivoting.
3. Investigate cost and accuracy (Condition Number).
Recap: Naive GE
Forward Elimination + Backward Substitution.
Cost: ~ 2/3 n³ flops (floating-point operations).
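The recap can be made concrete with a short sketch (a minimal teaching implementation in Python/NumPy, not code from the lecture):

```python
import numpy as np

def naive_ge(A, b):
    """Naive GE: forward elimination (no pivoting) + back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: the ~2/3 n^3 flops dominate the cost.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # fails outright if A[k, k] == 0
            A[i, k+1:] -= m * A[k, k+1:]   # subdiagonal entry left as junk;
            b[i] -= m * b[k]               # back substitution never reads it
    # Back substitution: only ~n^2 flops.
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x
```

The division by `A[k, k]` with no safeguard is exactly what the next section calls "naive".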
Why is it "Naive"?
Problem 1: Division by Zero
[ 0 ∗ ∗ ]
[ 4 5 6 ]
[ 7 8 9 ]
(∗ denotes arbitrary entries.) Pivot $a_{11} = 0$: the algorithm fails immediately, since the very first elimination step divides by $a_{11}$.
Problem 2: Swamping
[ $\varepsilon$ 1 ]
[ 1 1 ]
Small pivot causes loss of precision. With Naive GE, for $\varepsilon \approx 10^{-20}$, we might get $x_1 \approx 0$ (Wrong!) instead of $x_1 \approx 1$.
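A quick numerical check of the swamping effect, assuming the classic system $\varepsilon x_1 + x_2 = 1$, $x_1 + x_2 = 2$ (the right-hand side is an assumption here; it makes the true solution $x_1 \approx 1$, $x_2 \approx 1$ as stated above):

```python
import numpy as np

eps = 1e-20
A = np.array([[eps, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])

# Naive GE: eliminate with the tiny pivot eps.
m = A[1, 0] / A[0, 0]                  # huge multiplier, ~1e20
u22 = A[1, 1] - m * A[0, 1]            # 1 - 1e20 rounds to -1e20: "swamped"
c2 = b[1] - m * b[0]                   # 2 - 1e20 also rounds to -1e20
x2 = c2 / u22                          # ~1.0 (fine)
x1 = (b[0] - A[0, 1] * x2) / A[0, 0]   # (1 - 1)/eps = 0.0 -- the true x1 ~ 1 is lost

# Partial pivoting: swap rows so the pivot is 1, not eps.
m = eps / 1.0                          # tiny multiplier instead of huge one
u22 = 1.0 - m * 1.0
x2 = (1.0 - m * 2.0) / u22
x1 = 2.0 - 1.0 * x2                    # ~1.0, the correct answer
```

The only difference is which row supplies the pivot; that alone decides whether $x_1$ survives.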
Detailed Failure Example
Consider a system whose original pivots are all nonzero, but where a zero appears in a pivot position during elimination: after eliminating the first column, the updated entry $a_{22}$ is zero. We must swap rows (with a row below that has a nonzero entry in column 2) to continue.
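A small hypothetical system reproduces this failure mode (the numbers are illustrative, not the lecture's original example):

```python
import numpy as np

# Illustrative matrix (det = 1, so A is nonsingular), yet a zero pivot
# appears *during* elimination, not in the original matrix.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 5.0],
              [3.0, 7.0, 8.0]])

# Eliminate column 1.
A[1] -= 2.0 * A[0]     # row2 - 2*row1 -> [0, 0, -1]
A[2] -= 3.0 * A[0]     # row3 - 3*row1 -> [0, 1, -1]
# A[1, 1] is now 0.0 -- naive GE would divide by this pivot and fail.

# Swapping rows 2 and 3 restores a usable pivot (1.0); elimination continues.
A[[1, 2]] = A[[2, 1]]
```

Note that no entry of the original matrix was zero; the zero pivot was created by the elimination itself.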
Pivoting Strategies
Partial Pivoting
Strategy: Swap rows to put the largest element (in magnitude) of the current column onto the diagonal.
- Avoids division by zero.
- Reduces roundoff error: every multiplier satisfies $|m| \le 1$, limiting element growth.
- Requires searching the column below the pivot.
- Usually sufficient for most problems.
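A sketch of partial pivoting (illustrative Python, not the lecture's code), differing from naive GE only in the column search and row swap:

```python
import numpy as np

def ge_partial_pivot(A, b):
    """Gaussian elimination with partial pivoting (row swaps only)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Search the current column, from the diagonal down, for the
        # largest entry in magnitude.
        p = k + np.argmax(np.abs(A[k:, k]))
        if A[p, k] == 0.0:
            raise ValueError("matrix is singular")
        if p != k:                      # swap rows in both A and b
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]       # |m| <= 1 by construction
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x
```

Because the pivot is the largest entry in its column, every multiplier satisfies $|m| \le 1$, which is what keeps roundoff growth in check.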
Full (Complete) Pivoting
Strategy: Exchange both rows and columns to find the largest element in the entire remaining submatrix.
- More stable than partial pivoting.
- Costly: Searching the whole submatrix is expensive.
- Column swaps mess up the order of unknowns ($x_i$), requiring tracking.
Scaled Partial Pivoting (SPP)
Strategy: Simulate full pivoting without actually moving data. Pick pivot based on size relative to its row.
SPP Algorithm
The Process
- Calculate scale factors $s_i = \max_j |a_{ij}|$ for each row.
- Initialize index vector $\ell = [1, 2, \dots, n]$.
- At step $k$, select row $j$ (where $j \ge k$) that maximizes $\frac{|a_{\ell_j, k}|}{s_{\ell_j}}$.
- Swap $\ell_j$ and $\ell_k$ in the index vector.
- Eliminate using the "virtual" row order defined by $\ell$.
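The steps above can be sketched as follows (illustrative Python; `ell` plays the role of the index vector $\ell$):

```python
import numpy as np

def ge_scaled_partial_pivot(A, b):
    """GE with scaled partial pivoting. Rows are reordered only through
    the index vector ell; no row of A is ever physically moved."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    s = np.max(np.abs(A), axis=1)          # scale factor s_i of each row
    ell = list(range(n))                   # index vector [1, 2, ..., n]
    for k in range(n - 1):
        # Pick the row whose pivot is largest *relative to its own row*.
        ratios = [abs(A[ell[j], k]) / s[ell[j]] for j in range(k, n)]
        j = k + int(np.argmax(ratios))
        ell[k], ell[j] = ell[j], ell[k]    # swap entries of ell, not rows
        for i in range(k + 1, n):
            m = A[ell[i], k] / A[ell[k], k]
            A[ell[i], k:] -= m * A[ell[k], k:]
            b[ell[i]] -= m * b[ell[k]]
    # Back substitution in the "virtual" row order defined by ell.
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[ell[i]] - A[ell[i], i+1:] @ x[i+1:]) / A[ell[i], i]
    return x
```

Only the small vector `ell` is permuted, so the strategy avoids the data movement of physical row swaps while still picking relatively large pivots.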
Geometric Interpretation
- Unique solution: lines intersect at exactly one point.
- No solution: parallel lines (inconsistent system).
- Infinitely many solutions: coincident lines.
Ill-Conditioned Systems
Two "nearly parallel" lines. Small changes in coefficients lead to huge changes in the intersection point.
Condition Number $\kappa(A)$
How sensitive is the solution $x$ to small changes in $b$ or $A$? We define the condition number $\kappa(A) = \|A\| \, \|A^{-1}\|$ (always $\kappa(A) \ge 1$; the larger it is, the more sensitive the system).
Perturbation in $b$
If $Ax = b$ and $A(x + \delta x) = b + \delta b$, then
$$\frac{\|\delta x\|}{\|x\|} \le \kappa(A)\,\frac{\|\delta b\|}{\|b\|},$$
so the relative error in the solution can be amplified by a factor of up to $\kappa(A)$.
Rule of Thumb
If computations carry about $d$ decimal digits of precision ($\varepsilon_{\text{mach}} \approx 10^{-d}$) and $\kappa(A) \approx 10^{q}$, the computed solution is correct to roughly $d - q$ digits.
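A quick way to see the rule of thumb in action, using the Hilbert matrix as a standard ill-conditioned example (an illustrative choice, not from the lecture):

```python
import numpy as np

n = 8
# Hilbert matrix H[i, j] = 1 / (i + j + 1): famously ill-conditioned.
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

kappa = np.linalg.cond(H)        # 2-norm condition number
q = np.log10(kappa)              # digits we expect to lose
d = 16                           # ~decimal digits carried by double precision
print(f"kappa(H) ~ 1e{q:.0f}; expect only about {d - q:.0f} correct digits")
```

Double precision carries about 16 digits, so once $\kappa(A)$ reaches $10^{16}$ the computed solution may have no correct digits at all.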
Residual vs. Error
Residual: $r = b - A\hat{x}$ (How close is the result to satisfying the equation?)
Error: $e = x - \hat{x}$ (How close is the result to the true solution?)
Warning: Small residual does NOT guarantee small error if $\kappa(A)$ is large.
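The warning can be demonstrated numerically (the matrix entries below are hypothetical, chosen only to make $\kappa(A)$ huge):

```python
import numpy as np

# Nearly singular 2x2 system: the two rows are almost identical.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-12]])
x_true = np.array([1.0, 1.0])
b = A @ x_true

x_hat = np.linalg.solve(A, b)    # LU with partial pivoting, double precision

r = b - A @ x_hat                # residual: tiny (backward stability)
e = x_true - x_hat               # error: can be ~kappa(A) * eps_mach, far larger
print(np.linalg.norm(r), np.linalg.norm(e), np.linalg.cond(A))
```

The residual is near machine precision regardless, but with $\kappa(A) \sim 10^{12}$ the error can be around twelve orders of magnitude larger than the residual suggests.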
Final Challenge +20 XP
Why is Partial Pivoting generally preferred over Full Pivoting?
Video Explanation
For a step-by-step visual walkthrough of Gaussian Elimination with Partial Pivoting, check out this video:
Gaussian Elimination with Partial Pivoting
This video provides a clear, hand-calculated example of how partial pivoting swaps rows to avoid division by small numbers, directly reinforcing the concepts covered in this lecture.