Iterative Solver and an overview of Domain Decomposition Methods
Iterative Solver
First, let's talk about iterative solvers. Some basic iterative techniques and their properties will be introduced.
Outline
Stationary iterative methods (relaxation, smoothing)
- Richardson
- Jacobi
- Gauss-Seidel
- SOR
Projection methods
- Conjugate Gradient
- GMRES and variants
Basic iterative methods
The first iterative methods we will discuss are the basic iterative methods. Basic iterative methods only use information from the previous iteration. They are also called stationary methods, relaxation methods, or smoothing methods.
Most basic iterative methods can be constructed from a splitting of $A$:
$$A = M - N$$
$M$ is usually chosen as a matrix that is easy to invert, while $N$ is the remainder.
Substituting the splitting into the original linear system $Ax = b$ gives $(M - N)x = b$, which rearranges to $Mx = Nx + b$. Introducing iteration indices $k$ and $k+1$ for the current and next solution estimates, we obtain the iterative formula:
$$M x_{k+1} = N x_k + b$$
The above iterative process can be rewritten in an equivalent form that is easier to understand and implement:
$$x_{k+1} = M^{-1}(N x_k + b)$$
Since $N = M - A$, we obtain:
$$x_{k+1} = x_k + M^{-1}(b - A x_k)$$
$$x_{k+1} = x_k + M^{-1} r_k$$
where $r_k = b - A x_k$ is the residual at iterate $k$, and the matrix $M$ acts as a preconditioner.
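As a concrete sketch, the splitting iteration $x_{k+1} = x_k + M^{-1} r_k$ can be written in a few lines of NumPy. The function name `stationary_iteration` and the Jacobi-style choice $M = \mathrm{diag}(A)$ in the example are illustrative assumptions, not prescribed by the text:

```python
import numpy as np

def stationary_iteration(A, b, M, x0, num_iters=50):
    """Generic stationary iteration x_{k+1} = x_k + M^{-1} r_k,
    where r_k = b - A x_k and M comes from the splitting A = M - N."""
    x = x0.copy()
    for _ in range(num_iters):
        r = b - A @ x                  # residual of the current iterate
        x = x + np.linalg.solve(M, r)  # apply the preconditioner M^{-1}
    return x

# Example: Jacobi-style splitting with M = diag(A)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M = np.diag(np.diag(A))
x = stationary_iteration(A, b, M, np.zeros(2))
```

Any splitting with an easy-to-invert $M$ fits this template; only the construction of `M` changes between the methods listed in the outline.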
Richardson method
Let's take $M = I$ as our preconditioner. Then $N = I - A$.
From the basic iterative method we know $x_{k+1} = x_k + M^{-1} r_k$. Replacing $M$ with $I$:
$$x_{k+1} = x_k + r_k = x_k + b - A x_k = b + (I - A) x_k$$
With the initial guess $x_0 = 0$, we have:
$$x_1 = b$$
$$x_2 = b + (I - A) x_1 = b + (I - A) b$$
$$x_3 = b + (I - A) x_2 = b + (I - A) b + (I - A)^2 b$$
We can rewrite these terms as:
$$x_{k+1} = \sum_{i=0}^{k} (I - A)^i b$$
Since this iteration targets the linear system $Ax = b$, we can deduce that, as $k \to \infty$, $\sum_{i=0}^{\infty} (I - A)^i$ should represent $A^{-1}$.
The inverse matrix is defined by $A A^{-1} = I$, so let's examine $\sum_{i=0}^{\infty} (I - A)^i$, if it exists. Let $B = I - A$. We form the product $A \sum_{i=0}^{k} (I - A)^i$ and substitute $B = I - A$:
$$\sum_{i=0}^{k} B^i (I - B)$$
To evaluate this, expand a small case first. For $k = 2$:
$$\sum_{i=0}^{2} (I - A)^i A = (I + B + B^2)(I - B) = I + B + B^2 - B - B^2 - B^3 = I - B^3$$
Then we deduce the telescoping identity:
$$\sum_{i=0}^{k} B^i (I - B) = I - B^{k+1}$$
which is equivalent to:
$$A \sum_{i=0}^{k} (I - A)^i = I - (I - A)^{k+1}$$
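The telescoping identity is easy to check numerically. This small NumPy snippet, using an arbitrary example matrix of my choosing, verifies $A \sum_{i=0}^{k} (I - A)^i = I - (I - A)^{k+1}$ for one value of $k$:

```python
import numpy as np

A = np.array([[0.8, 0.1], [0.2, 0.7]])  # arbitrary small example matrix
I = np.eye(2)
B = I - A
k = 5

# Left side: A times the partial Neumann sum
partial_sum = sum(np.linalg.matrix_power(B, i) for i in range(k + 1))
lhs = A @ partial_sum

# Right side: I - B^{k+1}
rhs = I - np.linalg.matrix_power(B, k + 1)

assert np.allclose(lhs, rhs)
```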
So if Richardson converges, $(I - A)^{k+1}$ must go to zero as $k$ goes to infinity. That leads to our discussion of the convergence condition of the Richardson method.
The method will converge if $(I - A)^k$ converges to zero. Let $\lambda$ be any eigenvalue of $A$; $(I - A)^k$ converges to zero if $|1 - \lambda| < 1$ for every eigenvalue. In other words, we need $\rho(I - A) = \rho(M^{-1} N) < 1$: the spectral radius of the iteration matrix must be less than 1. If the eigenvalues are real, this means $0 < \lambda < 2$.
The iterative process continues until a stopping criterion is reached, such as the residual norm falling below a given threshold, or the number of iterations reaching a given maximum.
The Richardson method is a simple iterative method, and its convergence depends mainly on the condition number of the matrix $A$. Generally, if the condition number of $A$ is small, the Richardson method converges quickly. If the condition number of $A$ is large, convergence may be slow, or the method may not converge at all. In practical applications, convergence can be improved by choosing a different preconditioner or by using other iterative methods.
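A minimal Richardson solver with a residual-norm stopping criterion might look like the following sketch; the function name and the test matrix (chosen so its eigenvalues, 0.8 and 1.2, lie in $(0, 2)$) are illustrative choices:

```python
import numpy as np

def richardson(A, b, tol=1e-10, max_iters=10000):
    """Richardson iteration x_{k+1} = x_k + (b - A x_k).
    Converges when rho(I - A) < 1, i.e. real eigenvalues in (0, 2)."""
    x = np.zeros_like(b)  # initial guess x_0 = 0
    for k in range(max_iters):
        r = b - A @ x
        if np.linalg.norm(r) < tol:  # stopping criterion on the residual
            return x, k
        x = x + r
    return x, max_iters

# Eigenvalues of A are 0.8 and 1.2, so rho(I - A) = 0.2 < 1
A = np.array([[1.0, 0.2], [0.2, 1.0]])
b = np.array([1.0, 1.0])
x, iters = richardson(A, b)
```

Because $\rho(I - A) = 0.2$ here, the residual shrinks by roughly a factor of five per step, so the method converges in a handful of iterations; with $\rho(I - A)$ close to 1 (an ill-conditioned $A$), the same loop would crawl.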
Jacobi Method