5.3 Convergence

The Gauss-Seidel method was demonstrated in Sec. 5.2 using a sample problem, Eq. (5.2), that converged to within 0.2% accuracy in 9 solution sweeps. This section discusses the criteria for convergence of a solution.

The easiest explanation of convergence considers the error $\epsilon_i$ for each equation $i$, defined in Sec. 5.2. Substituting $x_i = (x_i)_{\rm ex} + \epsilon_i$ in each term in Eq. (5.3), e.g. in Eq. (5.3a), gives

(x_1)_{\rm ex} + \epsilon_1 = \frac{1}{3}\left[6 + (x_2)_{\rm ex} + \epsilon_2 + (x_3)_{\rm ex} + \epsilon_3\right]
(5.5)
Since $(x_1)_{\rm ex} = \frac{1}{3}\left[6 + (x_2)_{\rm ex} + (x_3)_{\rm ex}\right]$, Eq. (5.5) reduces to
\epsilon_1 = \frac{1}{3}\epsilon_2 + \frac{1}{3}\epsilon_3
(5.6)
The magnitude of the error $\epsilon_1$ is at most as large as the sum of the magnitudes of the terms on the r.h.s., and is smaller if the signs of $\epsilon_2$ and $\epsilon_3$ are different, i.e.
|\epsilon_1| \le \frac{1}{3}|\epsilon_2| + \frac{1}{3}|\epsilon_3|
(5.7)
Repeating for Eq. (5.3b) and Eq. (5.3c) gives the corresponding bounds, Eq. (5.8), on $|\epsilon_2|$ and $|\epsilon_3|$: each error magnitude is bounded by a weighted sum of the magnitudes of the other two errors, the weights being the off-diagonal coefficients divided by the diagonal coefficient of the respective row.
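The same manipulation can be sketched for a general row $i$, assuming the Gauss-Seidel update and the error definition $\epsilon_j = x_j - (x_j)_{\rm ex}$ from Sec. 5.2: substituting into the update for $x_i$ and subtracting the exact relation gives

|\epsilon_i| \le \sum_{j \ne i} \frac{|a_{i,j}|}{|a_{i,i}|}\,|\epsilon_j|

where the $\epsilon_j$ on the r.h.s. are the most recent error values. This general form is the pattern behind Eq. (5.7) and Eq. (5.8), and motivates the condition described below.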

The solution begins with initial errors $\epsilon_1$, $\epsilon_2$ and $\epsilon_3$ determined by the initial guess. After one sweep the errors are already reduced, in accordance with the bounds of Eq. (5.7) and Eq. (5.8).

[Figure: error magnitudes $|\epsilon_1|$, $|\epsilon_2|$ and $|\epsilon_3|$ against solution sweep]

The error is quickly distributed evenly, such that the error magnitudes are almost identical by sweep 2. The errors continue to reduce since Eq. (5.7) and Eq. (5.8) guarantee that no error magnitude is greater than the average of the other error magnitudes.
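To make the sweep-by-sweep behaviour concrete, the following minimal C++ sketch applies Gauss-Seidel sweeps to a small diagonally dominant system and prints the error magnitudes after each sweep. Only the first row (coefficients 3, -1, -1 with source 6) corresponds to Eq. (5.3a); the remaining coefficients, the source values and the zero initial guess are assumptions made purely for illustration, not the values of the sample problem in Sec. 5.2.

#include <cmath>
#include <cstdio>

int main()
{
    // small diagonally dominant system; only the first row matches Eq. (5.3a),
    // the remaining coefficients are chosen here purely for illustration
    const double A[3][3] = {{ 3, -1, -1},
                            {-1,  4, -2},
                            {-1, -2,  5}};
    const double b[3] = {6, 1, 2};

    // reference solution, obtained by iterating to a very tight tolerance,
    // used to evaluate the errors eps_i = x_i - (x_i)_ex after each sweep
    double xRef[3] = {0, 0, 0};
    for (int s = 0; s < 1000; ++s)
    {
        for (int i = 0; i < 3; ++i)
        {
            double sum = b[i];
            for (int j = 0; j < 3; ++j)
            {
                if (j != i) sum -= A[i][j]*xRef[j];
            }
            xRef[i] = sum/A[i][i];
        }
    }

    // Gauss-Seidel sweeps from a zero initial guess, using the latest
    // values of x as soon as they are updated
    double x[3] = {0, 0, 0};
    for (int sweep = 1; sweep <= 9; ++sweep)
    {
        for (int i = 0; i < 3; ++i)
        {
            double sum = b[i];
            for (int j = 0; j < 3; ++j)
            {
                if (j != i) sum -= A[i][j]*x[j];
            }
            x[i] = sum/A[i][i];
        }

        std::printf("sweep %d: |eps| = %.4g %.4g %.4g\n", sweep,
            std::fabs(x[0] - xRef[0]),
            std::fabs(x[1] - xRef[1]),
            std::fabs(x[2] - xRef[2]));
    }

    return 0;
}

Running this sketch should show the error magnitudes decaying from sweep to sweep, mirroring the behaviour described above.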

Condition for convergence

The behaviour of this problem indicates a convergence condition for the Gauss-Seidel method: the magnitude of the diagonal coefficient in each matrix row must be greater than or equal to the sum of the magnitudes of the other coefficients in the row; in one row at least, the “greater than” condition must hold.

This is known as diagonal dominance, which is a sufficient condition for convergence, described mathematically as

|a_{i,i}| \ge \sum_{j=1,\, j \ne i}^{N} |a_{i,j}| \quad \text{for all } i
(5.9)
where the ‘$>$’ condition must be satisfied for at least one $i$. The condition is “sufficient” in the sense that, when it is met, convergence is guaranteed; convergence may still occur when it is not met.
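As a quick illustration, a short C++ sketch of the test in Eq. (5.9) follows; the function name and the example matrix (whose first row again matches Eq. (5.3a), with the remaining rows assumed for illustration) are not part of the original text.

#include <cmath>
#include <cstdio>

// Test of Eq. (5.9): |a_ii| >= sum of |a_ij| (j != i) for every row,
// with the strict ">" holding in at least one row
bool diagonallyDominant(const double* A, int n)
{
    bool strictInOneRow = false;

    for (int i = 0; i < n; ++i)
    {
        double offDiag = 0;
        for (int j = 0; j < n; ++j)
        {
            if (j != i) offDiag += std::fabs(A[i*n + j]);
        }

        const double diag = std::fabs(A[i*n + i]);

        if (diag < offDiag) return false;        // row violates Eq. (5.9)
        if (diag > offDiag) strictInOneRow = true;
    }

    return strictInOneRow;
}

int main()
{
    // example matrix: first row matches Eq. (5.3a), other rows illustrative
    const double A[9] = { 3, -1, -1,
                         -1,  4, -2,
                         -1, -2,  5};

    std::printf("diagonally dominant: %s\n",
        diagonallyDominant(A, 3) ? "yes" : "no");

    return 0;
}

For this example matrix every row satisfies the strict inequality, so the condition is comfortably met.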