The Firedrake finite element library is usually used to discretize the weak forms of partial differential equations (PDEs), but here we want to experiment with integral equations. This is a bit awkward.
For now we consider only linear integral equations. The basic strategy is to use Firedrake tools to assemble as many parts of the discrete linear system as possible. The matrix corresponding to the integral operator would seem not to permit a single assemble()
command. Instead it is assembled column-by-column in a somewhat "by-hand" manner. There may well be a better approach.
TODO:
- 2D example
- what would Brandt & Lubrecht (1990) do?
A well-understood type of linear integral equation is a Fredholm "second kind" equation:

    u(x) + ∫_Ω K(x,y) u(y) dy = f(x)

for x in the domain Ω, with a given kernel K(x,y) and right-hand side f(x).
The weak form of the above equation is: find u in V so that

    ∫_Ω u(x) v(x) dx + ∫_Ω ∫_Ω K(x,y) u(y) v(x) dy dx = ∫_Ω f(x) v(x) dx

for all v in V. The mass matrix for the first term is assembled in the usual way:
V = FunctionSpace(mesh, 'CG', 1) # for example
u = TrialFunction(V)
v = TestFunction(V)
Mass = assemble(u * v * dx)
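For reference, on a uniform 1D mesh with spacing h the P1 (hat function) mass matrix that assemble(u * v * dx) produces is tridiagonal, with each cell contributing the 2x2 element matrix (h/6)·[[2,1],[1,2]]. A minimal numpy sketch of that structure, independent of Firedrake:

```python
import numpy as np

def p1_mass_matrix(n):
    """Assemble the P1 mass matrix on a uniform mesh of [0,1] with n cells."""
    h = 1.0 / n
    M = np.zeros((n + 1, n + 1))
    # each cell contributes the 2x2 element mass matrix (h/6) * [[2,1],[1,2]]
    for k in range(n):
        M[k:k+2, k:k+2] += (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    return M

M = p1_mass_matrix(8)
# sanity check: the entries sum to the measure of the domain,
# since sum_ij int phi_i phi_j = int 1 dx = 1
total = M.sum()
```

A quick consistency check on this sketch: interior row sums equal h (partition of unity), and the total sum of entries equals the domain length 1.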
Likewise, the right-hand side of the weak form is standard:
x = SpatialCoordinate(mesh)[0]
b = assemble(f(x) * v * dx)
For the integral operator weak form, however, no Firedrake assembly expression is apparent.¹
Instead we assemble the matrix Kmat column-by-column, substituting each nodal basis (hat) function for the trial function in turn:
phij = Function(V)
Y = Constant(0.0)
for j, xj in enumerate(V.mesh().coordinates.dat.data):
    phij.dat.data[:] = 0.0   # phij is the j-th nodal basis function
    phij.dat.data[j] = 1.0
    Y.assign(xj)             # Y holds the node coordinate x_j
    Kmat.setValues(range(V.dim()), [j,],
                   assemble(c * K(x, Y) * v * dx).dat.data)
Here the constant c is the mesh spacing in 1D; in general it is the volume of a cell centered at the node x_j.
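To see why this works, substitute u = Σ_j u_j φ_j into the kernel term: the (i,j) matrix entry is ∫∫ K(x,y) φ_j(y) φ_i(x) dy dx ≈ c ∫ K(x, x_j) φ_i(x) dx, since φ_j is supported near x_j with ∫ φ_j dy = c. As a Firedrake-free illustration of the column-by-column idea, here is a numpy sketch (a simplified model of this step, not the code from fredholm.py, and further collapsing the remaining test-function integral to nodal values):

```python
import numpy as np

def kernel_matrix(K, x, h):
    """Column-by-column kernel matrix: column j approximates
    c * int K(x, x_j) phi_i(x) dx, simplified here to c * K(x_i, x_j)."""
    n1 = len(x)
    Kmat = np.zeros((n1, n1))
    for j, xj in enumerate(x):        # one column per mesh node, as in the loop above
        Kmat[:, j] = h * K(x, xj)     # c = h is the 1D cell volume
    return Kmat

x = np.linspace(0.0, 1.0, 9)          # 8 cells, so h = 0.125
Kmat = kernel_matrix(lambda s, t: np.exp(s - t), x, x[1] - x[0])
```

The resulting matrix is dense, which is the essential difference from the sparse matrices produced by ordinary finite element assembly.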
Then the system matrix is set up and the system is solved, here with a PETSc KSP as we do in fredholm.py:
Amat = Mass.M.handle + Kmat          # PETSc Mat: mass matrix plus kernel matrix
ksp = PETSc.KSP()
ksp.create(PETSc.COMM_WORLD)
ksp.setOperators(A=Amat, P=Amat)
ksp.setFromOptions()
u = Function(V)
with u.dat.vec as vu:
    with b.dat.vec_ro as vb:
        ksp.solve(vb, vu)
We test on a case with a non-symmetric kernel K(x,y) and a known exact solution. We have two solvers:

- fredholm.py: implements the above approach, including the KSP solver usage.
- fredholm_numpy.py: a naive approach in which we turn everything into numpy arrays and call numpy.linalg.solve(). This is actually quite effective because everything is dense, but it is inflexible with respect to the solver.
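To see why the dense numpy route works well, here is a self-contained sketch in the spirit of fredholm_numpy.py (the kernel and exact solution here are made up for illustration, not taken from the demos): with K(x,y) = xy and exact solution u(x) = x, the right-hand side is f(x) = x + x ∫₀¹ y² dy = (4/3)x.

```python
import numpy as np

n = 400
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]

# Nystrom-style discretization of  u(x) + int_0^1 K(x,y) u(y) dy = f(x)
K = np.outer(x, x)                  # illustrative kernel K(x,y) = x*y on the grid
f = (4.0 / 3.0) * x                 # manufactured so that u_exact(x) = x
A = np.eye(n + 1) + h * K           # identity plus quadrature-weighted kernel
u = np.linalg.solve(A, f)

err = np.max(np.abs(u - x))         # O(h) error from the simple nodal quadrature
```

The dense solve costs O(n³) flops and O(n²) memory, which is entirely reasonable at these 1D resolutions.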
Warning
At high resolutions these demos should be run with a process/system monitor open, so that bad cases can be killed; that is, kill the process if you don't want to wait for completion or if you are running out of memory. Also, reported timings include cache (compilation) time; re-run for "real" solve times.
Here is the naive numpy approach at resolution 1024:
python3 fredholm_numpy.py 1024
Once we use PETSc KSP we have a chance to compare solvers. First, the default solver:
python3 fredholm.py 1024 -ksp_view
(This is in effect a direct solve: ILU(0) drops no fill on a dense matrix, so it is a complete LU factorization!)
If we deliberately want LU:²
python3 fredholm.py 1024 -mat_type dense -ksp_type preonly -pc_type lu -pc_factor_mat_ordering_type natural
The following iterative methods also converge well:
python3 fredholm.py 1024 -mat_type aij -ksp_type gmres -ksp_converged_reason -pc_type jacobi -ksp_rtol 1.0e-9
python3 fredholm.py 1024 -mat_type aij -ksp_type bcgs -ksp_converged_reason -pc_type jacobi -ksp_rtol 1.0e-9
(A Kaczmarz preconditioner, -pc_type kaczmarz, also makes sense.) These are perhaps headed toward being fastest at very high resolutions, but the numpy method is still faster at the tested resolutions.
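One reason weakly-preconditioned Krylov methods behave well here is that second-kind operators are well-conditioned perturbations of the identity. A numpy sketch of this point (my illustration, with the same made-up kernel as before, not one of the demos): when the discretized kernel contribution has norm below one, even plain Richardson (fixed-point) iteration u ← f − Ku converges geometrically for (I + K)u = f.

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]
K = h * np.outer(x, x)              # quadrature-weighted kernel; ||K|| ~ 1/3 < 1
f = (4.0 / 3.0) * x                 # manufactured right-hand side

u = np.zeros_like(x)
for it in range(30):                # Richardson iteration for (I + K) u = f
    u = f - K @ u

res = np.linalg.norm(f - (u + K @ u))   # residual after 30 sweeps
```

The error contracts by roughly ||K|| ≈ 1/3 per sweep, which is why Krylov methods, which do at least this well, need only a handful of iterations regardless of resolution.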
Footnotes

1. This FEniCS example takes essentially our approach here. An old post by Anders Logg suggests a more direct approach is not possible. ↩
2. -pc_factor_mat_ordering_type natural also avoids an apparent PETSc bug with the default nd ordering. ↩