Domain Decomposition and Iterative Solvers - 4

Iterative Solvers and an Overview of Domain Decomposition Methods

Iterative Solvers

First, let's talk about iterative solvers. Some basic iterative techniques and their properties will be introduced.

Outline

Stationary iterative methods (relaxation, smoothing)

  • Richardson
  • Jacobi
  • Gauss-Seidel
  • SOR

Projection methods

  • Conjugate Gradient
  • GMRES and variants

Basic iterative methods❗️

The first iterative methods we will discuss are the basic iterative methods. Basic iterative methods only use information from the previous iteration. They are also called stationary methods, relaxation methods, or smoothing methods.

πŸ‘†Most basic iterative methods can be constructed from a splitting of $\mathbf{A}$:

$$\mathbf{A} = \mathbf{M} - \mathbf{N}$$

$\mathbf{M}$ is usually chosen as a matrix that is easy to invert, whereas $\mathbf{N}$ is the rest.

Plugging the splitting into the original linear system $\mathbf{A}\mathbf{x} = \mathbf{b}$, we get $(\mathbf{M} - \mathbf{N})\mathbf{x} = \mathbf{b}$. Rearranging, we obtain $\mathbf{M}\mathbf{x} = \mathbf{N}\mathbf{x} + \mathbf{b}$. We then introduce iteration indices $k$ and $k+1$ to denote the current and next solution estimates, which gives the iterative formula:

$$\mathbf{M}\mathbf{x}^{k+1} = \mathbf{N}\mathbf{x}^k + \mathbf{b}$$

The above iterative process can be rewritten in an equivalent form to make it easier to understand and implement:

$$\mathbf{x}^{k+1} = \mathbf{M}^{-1}\left(\mathbf{N}\mathbf{x}^k + \mathbf{b}\right)$$

Substituting $\mathbf{N} = \mathbf{M} - \mathbf{A}$, we obtain:

$$\mathbf{x}^{k+1} = \mathbf{x}^k + \mathbf{M}^{-1}\left(\mathbf{b} - \mathbf{A}\mathbf{x}^k\right)$$

$$\mathbf{x}^{k+1} = \mathbf{x}^k + \mathbf{M}^{-1}\mathbf{r}^k$$

where $\mathbf{r}^k = \mathbf{b} - \mathbf{A}\mathbf{x}^k$ is the residual at iterate $k$, and the matrix $\mathbf{M}$ is called a preconditioner.
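As a minimal sketch of this scheme (assuming a small dense system stored as NumPy arrays, with the preconditioner `M` passed in explicitly; all names here are illustrative):

```python
import numpy as np

def stationary_solve(A, b, M, x0=None, tol=1e-8, max_iter=1000):
    """Generic stationary iteration x^{k+1} = x^k + M^{-1} r^k
    built from the splitting A = M - N."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for k in range(max_iter):
        r = b - A @ x                  # residual r^k = b - A x^k
        if np.linalg.norm(r) < tol:    # stopping criterion
            return x, k
        x = x + np.linalg.solve(M, r)  # apply the preconditioner M
    return x, max_iter
```

With $\mathbf{M} = \mathbf{I}$ this reduces to the Richardson method discussed next.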

🌟Richardson method

Let's take $\mathbf{M} = \mathbf{I}$ as our preconditioner. Then $\mathbf{N} = \mathbf{I} - \mathbf{A}$.

From the basic iterative method we know $\mathbf{x}^{k+1} = \mathbf{x}^k + \mathbf{M}^{-1}\mathbf{r}^k$. Replacing $\mathbf{M}$ with $\mathbf{I}$:

$$\begin{aligned} \mathbf{x}^{k+1} &= \mathbf{x}^k + \mathbf{r}^k \\ &= \mathbf{x}^k + \mathbf{b} - \mathbf{A}\mathbf{x}^k \\ &= \mathbf{b} + (\mathbf{I} - \mathbf{A})\mathbf{x}^k \end{aligned}$$

With the initial guess $\mathbf{x}^0 = \mathbf{0}$, we have:

$$\begin{aligned} \mathbf{x}^1 &= \mathbf{b} \\ \mathbf{x}^2 &= \mathbf{b} + (\mathbf{I} - \mathbf{A})\mathbf{x}^1 = \mathbf{b} + (\mathbf{I} - \mathbf{A})\mathbf{b} \\ \mathbf{x}^3 &= \mathbf{b} + (\mathbf{I} - \mathbf{A})\mathbf{x}^2 = \mathbf{b} + (\mathbf{I} - \mathbf{A})\mathbf{b} + (\mathbf{I} - \mathbf{A})^2\mathbf{b} \end{aligned}$$

We can write this pattern compactly as:

$$\mathbf{x}^{k+1} = \sum_{i=0}^k (\mathbf{I} - \mathbf{A})^i \mathbf{b}$$

Since the exact solution of our linear system $\mathbf{A}\mathbf{x} = \mathbf{b}$ is $\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$, we can deduce that $\sum_{i=0}^k(\mathbf{I} - \mathbf{A})^i$ should approximate $\mathbf{A}^{-1}$.
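As a quick numerical sanity check (with a small, hypothetical test matrix scaled so that $\rho(\mathbf{I} - \mathbf{A}) < 1$), the Richardson iterates match these partial sums and approach $\mathbf{A}^{-1}\mathbf{b}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # scaled so rho(I - A) < 1
b = rng.standard_normal(n)
B = np.eye(n) - A

x = np.zeros(n)    # Richardson iterate, x^0 = 0
s = np.zeros(n)    # partial sum of B^i b
term = b.copy()
for k in range(50):
    x = b + B @ x  # x^{k+1} = b + (I - A) x^k
    s = s + term   # add B^k b to the partial sum
    term = B @ term

print(np.allclose(x, s))                      # iterates equal partial sums
print(np.allclose(x, np.linalg.solve(A, b)))  # and approach A^{-1} b
```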

πŸ”₯Since the inverse matrix is defined by $\mathbf{A}\mathbf{A}^{-1} = \mathbf{I}$, let's have a look at $\sum_{i=0}^{\infty}(\mathbf{I} - \mathbf{A})^i$, if it exists. Let $\mathbf{B} = \mathbf{I} - \mathbf{A}$. We test the candidate inverse by forming $\mathbf{A}\sum_{i=0}^k(\mathbf{I} - \mathbf{A})^i$, which in terms of $\mathbf{B}$ becomes:

$$\sum_{i=0}^k \mathbf{B}^i(\mathbf{I} - \mathbf{B})$$

To evaluate this, let's first expand a small case. For $k = 2$:

$$\begin{aligned} \sum_{i=0}^2 \mathbf{B}^i(\mathbf{I} - \mathbf{B}) &= \left(\mathbf{I} + \mathbf{B} + \mathbf{B}^2\right)(\mathbf{I} - \mathbf{B}) \\ &= \mathbf{I} + \mathbf{B} + \mathbf{B}^2 - \mathbf{B} - \mathbf{B}^2 - \mathbf{B}^3 \\ &= \mathbf{I} - \mathbf{B}^3 \end{aligned}$$

The cancellation telescopes for any $k$, so we deduce:

$$\sum_{i=0}^k \mathbf{B}^i(\mathbf{I} - \mathbf{B}) = \mathbf{I} - \mathbf{B}^{k+1}$$

It is equivalent to:

$$\mathbf{A}\sum_{i=0}^k(\mathbf{I} - \mathbf{A})^i = \mathbf{I} - (\mathbf{I} - \mathbf{A})^{k+1}$$

So if the Richardson method converges, then as $k$ goes to infinity, $(\mathbf{I} - \mathbf{A})^{k+1}$ should go to zero. That leads to our discussion of the convergence condition of the Richardson method.
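The telescoping identity itself is exact for any matrix, which we can confirm numerically (again on a small random matrix; purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))  # the identity holds for any A
B = np.eye(n) - A

k = 6
partial = sum(np.linalg.matrix_power(B, i) for i in range(k + 1))
lhs = A @ partial                                   # A * sum_{i=0}^k (I - A)^i
rhs = np.eye(n) - np.linalg.matrix_power(B, k + 1)  # I - (I - A)^{k+1}
print(np.allclose(lhs, rhs))  # True
```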

🌈The method converges if $(\mathbf{I} - \mathbf{A})^i$ converges to zero. Let $\lambda$ be any eigenvalue of $\mathbf{A}$; then $(\mathbf{I} - \mathbf{A})^i$ converges if $|1 - \lambda| < 1$. In other words, we need $\rho(\mathbf{I} - \mathbf{A}) = \rho\left(\mathbf{M}^{-1}\mathbf{N}\right) < 1$: the spectral radius of the iteration matrix must be less than 1. If the eigenvalues are real, this means $0 < \lambda < 2$.
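A direct way to test this condition (illustrative helper name, small diagonal examples):

```python
import numpy as np

def richardson_converges(A):
    """Check the convergence condition rho(I - A) < 1."""
    eigvals = np.linalg.eigvals(np.eye(A.shape[0]) - A)
    return np.max(np.abs(eigvals)) < 1

print(richardson_converges(np.diag([0.5, 1.0, 1.5])))  # True: all lambda in (0, 2)
print(richardson_converges(np.diag([0.5, 1.0, 2.5])))  # False: 2.5 lies outside (0, 2)
```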

The iterative process continues until a stopping criterion is reached, such as the residual norm falling below a given threshold or the number of iterations reaching a given maximum.

The Richardson method is a simple iterative method, and its convergence rate depends mainly on the condition number of the matrix $\mathbf{A}$. Generally, the smaller the condition number of $\mathbf{A}$, the faster the Richardson method converges. If the condition number of $\mathbf{A}$ is large, convergence may be slow, or the method may not converge at all. In practical applications, convergence can be improved by choosing a different preconditioner or by using other iterative methods.
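A small experiment illustrates the effect of conditioning (hypothetical diagonal test matrices, both with eigenvalues inside $(0, 2)$ so that Richardson converges in each case):

```python
import numpy as np

def richardson_iters(A, b, tol=1e-8, max_iter=10_000):
    """Count Richardson iterations until ||b - A x^k|| < tol."""
    x = np.zeros_like(b)
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            return k
        x = x + r
    return max_iter

b = np.ones(3)
print(richardson_iters(np.diag([0.9, 1.0, 1.1]), b))  # well conditioned: a few iterations
print(richardson_iters(np.diag([0.1, 1.0, 1.9]), b))  # ill conditioned: far more iterations
```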

🌟Jacobi Method