Iterative methods for a linear system   A X = B

Warning: these examples are meant only to illustrate the methods, NOT to demonstrate efficient programming techniques.


Fixed-point iteration for the linear system X = U X + V:

>> U = [ 0.5  0.1 ; 0.2  0.3 ];  % a given matrix
>> V = [ 2 ; 3 ];                % a given vector
>> X = [ 6 ; 4 ];                % choose an initial value
>> X = U * X + V                 % repeat until X stays unchanged
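The "repeat until X stays unchanged" step can be written as a complete loop. A minimal self-contained Python sketch of the same 2x2 example (plain lists, no libraries; the tolerance and iteration cap are illustrative choices, not part of the example above):

```python
# Fixed-point iteration X <- U X + V, stopped when X stops changing.
# Plain-Python sketch of the 2x2 example; tol and max_iter are
# illustrative choices.

def mat_vec(M, x):
    """Matrix-vector product with nested lists."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def fixed_point(U, V, x, tol=1e-12, max_iter=10_000):
    for _ in range(max_iter):
        x_new = [s + v for s, v in zip(mat_vec(U, x), V)]
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

U = [[0.5, 0.1], [0.2, 0.3]]
V = [2.0, 3.0]
X = fixed_point(U, V, [6.0, 4.0])  # spectral radius of U is below 1,
                                   # so the iteration converges
```

The limit solves X = U X + V, i.e. (I - U) X = V; for this U that is X = [1.7/0.33, 1.9/0.33].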

Conversion of the linear system A X = B to an equivalent system X = U X + V:

>> A = [ 2 -1  0 ; -1 3 1 ; 0 1 2 ];  % system matrix
>> B = [ 1 ; 3 ; 3 ];                 % right hand side

Richardson iterations:

>> E = eye(3);            % identity matrix
>> c = 0.2;               % a suitable constant (depends on the matrix A) 
>> U = E - c * A          
>> V = c * B;  
>> sp = max(abs(eig(U)))  % spectral radius of U - criterion of convergence
>> X = V;                 % choose an initial value
>> X =  U * X + V         % repeat until X stays unchanged
>> B - A * X              % residual - should be zero vector
Experiment with different choices of the constant c.
Check that for symmetric A the method converges if and only if max( |1 - c emin|, |1 - c emax| ) is less than 1, where emin and emax are the minimal and maximal eigenvalues of A, respectively.
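The Richardson update X <- (I - cA) X + cB is the same as adding c times the residual: X <- X + c (B - A X). A self-contained Python sketch of the full loop for the 3x3 system above (plain lists; tol and max_iter are illustrative choices):

```python
# Richardson iteration: X <- X + c*(B - A X), i.e. X <- (I - c A) X + c B.
# Plain-Python sketch for the 3x3 system above; tol and max_iter are
# illustrative choices.

def mat_vec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def richardson(A, B, c, x, tol=1e-10, max_iter=10_000):
    for _ in range(max_iter):
        r = [b - s for b, s in zip(B, mat_vec(A, x))]  # residual B - A x
        if max(abs(ri) for ri in r) < tol:
            break
        x = [xi + c * ri for xi, ri in zip(x, r)]
    return x

A = [[2.0, -1.0, 0.0], [-1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
B = [1.0, 3.0, 3.0]
X = richardson(A, B, 0.2, [0.0, 0.0, 0.0])
```

For this A the eigenvalues are 1, 2, 4, so with c = 0.2 the spectral radius is max|1 - 0.2 e| = 0.8 < 1 and the iterates converge to the solution [1, 1, 1].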

Jacobi method:

>> D = diag(diag(A));       % diagonal of A
>> L = tril(A, -1);         % lower triangle of A (without the diagonal)
>> P = triu(A, 1);          % upper triangle of A (without the diagonal)
>> UJ = - inv(D) * (L + P)  % Jacobi  iteration matrix
>> VJ = inv(D) * B;         % changed right hand side
>> sp = max(abs(eig(UJ)))   % spectral radius of UJ - criterion of convergence
>> X = VJ;                  % choose an initial value
>> X = UJ * X + VJ          % repeat until X stays unchanged
>> B - A * X                % residual - should be zero vector

Gauss-Seidel method:

>> D = diag(diag(A));       % diagonal of A
>> L = tril(A, -1);         % lower triangle of A (without the diagonal)
>> P = triu(A, 1);          % upper triangle of A (without the diagonal)
>> UG = - inv(D + L) * P    % Gauss-Seidel iteration matrix
>> VG = inv(D + L) * B;     % changed right hand side
>> sp = max(abs(eig(UG)))   % spectral radius of UG - criterion of convergence
>> X = VG;                  % choose an initial value
>> X = UG * X + VG          % repeat until X stays unchanged
>> B - A * X                % residual - should be zero vector
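Componentwise, Gauss-Seidel differs from Jacobi only in that each sweep immediately reuses the components already updated in the current sweep. A self-contained Python sketch (plain lists; tol and max_iter are illustrative choices):

```python
# Gauss-Seidel iteration: like Jacobi, but x[j] for j < i is the value
# ALREADY UPDATED in the current sweep.  Plain-Python sketch; tol and
# max_iter are illustrative choices.

def gauss_seidel(A, B, x, tol=1e-10, max_iter=10_000):
    n = len(A)
    x = x[:]                              # work on a copy
    for _ in range(max_iter):
        diff = 0.0                        # largest change in this sweep
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_i = (B[i] - s) / A[i][i]
            diff = max(diff, abs(x_i - x[i]))
            x[i] = x_i                    # reuse immediately
        if diff < tol:
            break
    return x

A = [[2.0, -1.0, 0.0], [-1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
B = [1.0, 3.0, 3.0]
X = gauss_seidel(A, B, [0.0, 0.0, 0.0])  # converges to [1, 1, 1]
```

This is the same iteration as X <- UG X + VG above, written without forming UG explicitly; for this symmetric positive definite A it converges.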
Experiment with different choices of the matrix A, for instance
>> A = [ 2 -1  0 ; -1 -3 1 ; 0 1 2 ];
(Notice that the Richardson method does not converge for this matrix for any constant c. Why?)