dolfinx_mpc's People

Contributors

conpierce8, dependabot[bot], finsberg, fmonteghetti, garth-wells, jorgensd, minrk, nate-sime

dolfinx_mpc's Issues

Rigidity constraint

Thanks a lot for the work on this library, which is really useful.

I am trying to use dolfinx_mpc to impose that all points on a boundary have the same displacement (something like this).

In particular, for a linear elasticity problem with a vector-valued displacement field, I only want "rigidity" on one of the components.

I managed to obtain the result with the following method. However, this is very slow and surely not the right way of doing it.
Could you suggest how to proceed?

def create_rigid_constraint(mpc, dofs, subspace=None):
    dof_coordinates = mpc.V.tabulate_dof_coordinates()[dofs, :]
    for i in range(len(dof_coordinates) - 1):
        mpc.create_general_constraint(
            {dof_coordinates[i + 1, :].tobytes(): {dof_coordinates[0, :].tobytes(): 1}},
            subspace,
            subspace,
        )
right_facets = locate_entities_boundary(mesh, 1, right_boundary)
rigid_dofs = fem.locate_dofs_topological(V, 1, right_facets)
mpc = dolfinx_mpc.MultiPointConstraint(V)
create_rigid_constraint(mpc, rigid_dofs, 0)

Besides, when considering periodic boundary conditions, could you add an option to impose them only on one of the components of a vector-valued field? This would also be a possible solution to the problem above, because "rigidity" can easily be written with a particular periodicity map (see the sketch below).
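
To make the idea concrete, here is a minimal sketch of such a map, assuming the geometrical periodic-constraint API used elsewhere in these issues; the master corner at (1, 0) and the component index are illustrative, and the dolfinx_mpc calls are commented out because they need the surrounding setup:

import numpy as np

def rigid_indicator(x):
    # slave dofs: the right boundary x = 1, excluding the corner kept as master
    return np.logical_and(np.isclose(x[0], 1.0), ~np.isclose(x[1], 0.0))

def rigid_relation(x):
    # send every slave dof to the single master located at (1, 0)
    out = np.zeros_like(x)
    out[0] = 1.0
    out[1] = 0.0
    return out

# mpc = dolfinx_mpc.MultiPointConstraint(V)
# mpc.create_periodic_constraint_geometrical(V.sub(0), rigid_indicator, rigid_relation, bcs)
# mpc.finalize()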

I think that this "rigidity" constraint is very useful in many applications, as shown by the several questions about it on the forums. Maybe it could be added among the built-in methods of MultiPointConstraint?

For completeness, below is also the full code for a simple example to play with:

import dolfinx.fem as fem
import dolfinx_mpc.utils
import numpy as np

from dolfinx.common import Timer, list_timings, TimingType
from dolfinx.generation import UnitSquareMesh
from dolfinx.io import XDMFFile
from dolfinx.mesh import locate_entities_boundary
from mpi4py import MPI
from petsc4py import PETSc
from ufl import (Identity,
    SpatialCoordinate,
    TestFunction,
    TrialFunction,
    as_vector,
    dx,
    grad,
    inner,
    sym,
    tr,
)


mesh = UnitSquareMesh(MPI.COMM_WORLD, 50, 50)
V = fem.VectorFunctionSpace(mesh, ("Lagrange", 1))

# Generate Dirichlet BC on the left boundary (fixed)
u_bc = fem.Function(V)

with u_bc.vector.localForm() as u_local:
    u_local.set(0.0)


def left_boundary(x):
    return np.isclose(x[0], np.finfo(float).eps)


left_facets = locate_entities_boundary(mesh, 1, left_boundary)
topological_dofs = fem.locate_dofs_topological(V, 1, left_facets)
bc = fem.DirichletBC(u_bc, topological_dofs)
bcs = [bc]

# Define variational problem
u = TrialFunction(V)
v = TestFunction(V)

# Elasticity parameters
E = PETSc.ScalarType(1.0)
nu = 0.3
mu = fem.Constant(mesh, E / (2.0 * (1.0 + nu)))
lmbda = fem.Constant(mesh, E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu)))

# Stress computation
def sigma(v):
    return 2.0 * mu * sym(grad(v)) + lmbda * tr(sym(grad(v))) * Identity(len(v))


x = SpatialCoordinate(mesh)
# Define variational problem
u = TrialFunction(V)
v = TestFunction(V)
a_form = inner(sigma(u), grad(v)) * dx
L_form = inner(as_vector((10 * (x[1] - 0.5) ** 2, 0)), v) * dx

# Create MPC imposing u[0] constant on the right boundary
def right_boundary(x):
    return np.isclose(x[0], 1.0)


right_facets = locate_entities_boundary(mesh, 1, right_boundary)
rigid_dofs = fem.locate_dofs_topological(V, 1, right_facets)


def create_rigid_constraint(mpc, dofs, subspace=None):
    dof_coordinates = mpc.V.tabulate_dof_coordinates()[dofs, :]
    for i in range(len(dof_coordinates) - 1):
        mpc.create_general_constraint(
            {dof_coordinates[i + 1, :].tobytes(): {dof_coordinates[0, :].tobytes(): 1}},
            subspace,
            subspace,
        )

with Timer("~RIGID: Initialize MPC"):
    mpc = dolfinx_mpc.MultiPointConstraint(V)
    create_rigid_constraint(mpc, rigid_dofs, 0)
    mpc.finalize()

problem = dolfinx_mpc.LinearProblem(a_form, L_form, mpc, bcs=bcs)
solver = problem.solver
u_h = problem.solve()
u_h.name = "u"
outfile = XDMFFile(MPI.COMM_WORLD, "rigid_boundary.xdmf", "w")
outfile.write_mesh(mesh)
outfile.write_function(u_h)

list_timings(MPI.COMM_WORLD, [TimingType.wall])

Problem with periodic condition on small physical domains

Greetings!

I found a bug in the periodic boundary condition that occurs on meshes with a small element size. Here is the test case: https://gist.github.com/Alchem334/0f3d0c9b7805d4a1e5438c41c3dfee3e

In the test case, the Laplace equation is solved on a square mesh of size 2e-5 x 2e-5 with a periodic condition between the faces x = 0 and x = 2e-5. After constructing the MultiPointConstraint object, you can check the number of degrees of freedom for which the periodic boundary condition is specified. With (20,20) elements on the mesh, mpc.num_local_slaves equals 20, which is not correct; there should be 21 degrees of freedom. With a (10,10) mesh, the periodic condition is constructed correctly, with mpc.num_local_slaves = 11.

The loss of one degree of freedom in the periodic condition affects the results. In the case of the (10,10) grid, the values in the upper-right and upper-left corners are the same.

[Figure: 10x10 mesh, matching corner values]

In the case of the grid (20,20) they are different.

[Figure: 20x20 mesh, non-matching corner values]

The problem occurs in the compute_distance_gjk function, which calculates the distance vector from a point to a cell.

https://github.com/FEniCS/dolfinx/blob/74facfc3a587be94a8936bec0aece8b79d92f62e/cpp/dolfinx/geometry/gjk.cpp#L238-L253

In the case of a point lying on a cell, this function should return a zero vector, but on the (20,20) mesh it returns a non-zero vector for one of the degrees of freedom, apparently with a length of about 1e-6. This happens because the exit criterion specified by eps is met too early. If you reduce the value of eps, for example to 1e-20, the error no longer occurs.

The compute_distance_gjk function lives in dolfinx and is reached from dolfinx_mpc as follows: it is used in the shortest_vector function to find the shortest vector, which the squared_distance function uses to compute the squared distance. Based on the result of squared_distance, a decision is made whether to look for the degree of freedom in a given cell. This happens in the compute_colliding_cells function.

dolfinx_mpc/cpp/utils.cpp

Lines 1011 to 1020 in f7809a6

const std::vector<double> distances_sq
    = dolfinx::geometry::squared_distance(mesh, tdim, cells, _point);
// Only push back closest cell
if (auto cell_idx
    = std::min_element(distances_sq.cbegin(), distances_sq.cend());
    *cell_idx < eps2)
{
  auto pos = std::distance(distances_sq.cbegin(), cell_idx);
  colliding_cells.push_back(cells[pos]);
}

The criterion is that the squared distance is less than eps2 = 1e-20. Since the square of the distance is 1e-12, this condition is not met, and for one degree of freedom no cell in which its pair could be located is found at all.
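
A quick numeric illustration of the mismatch, using the values quoted above:

eps2 = 1e-20                  # collision threshold used in compute_colliding_cells
residual_distance = 1e-6      # spurious non-zero distance returned by compute_distance_gjk
print(residual_distance**2 < eps2)  # False, so no colliding cell is accepted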

dolfinx_mpc/cpp/utils.cpp

Lines 1040 to 1052 in f7809a6

// Compute exact collision
auto cell_collisions = dolfinx_mpc::compute_colliding_cells(
    mesh, bbox_collisions, points, eps2);
// Extract first collision
std::vector<std::int32_t> collisions(points.shape(0), -1);
for (int i = 0; i < cell_collisions.num_nodes(); i++)
{
  auto local_cells = cell_collisions.links(i);
  if (!local_cells.empty())
    collisions[i] = local_cells[0];
}
return collisions;

Finally, because of this, the find_local_collisions function returns -1 for this degree of freedom instead of the cell id.

I'm not sure how to fix this error once and for all. For my purposes, I simply choose eps in the compute_distance_gjk function such that a corresponding pair is found for all periodic degrees of freedom. I'm not sure you can simply set eps = 1e-200 and forget about this problem; doing so might expose other bugs.

Use Hermitian transpose for complex-valued constraint

When using complex values, it is advantageous to pre-multiply the system by the Hermitian transpose of the prolongation matrix rather than the standard transpose, as this preserves the Hermitian character of the original matrix. That is, applying the prolongation matrix as
$$K^H A K \hat{u} = K^H b$$
yields a Hermitian system matrix $K^H A K$ if $A$ is Hermitian, whereas applying it as
$$K^T A K \hat{u} = K^T b$$
yields a non-Hermitian matrix if $K$ has complex values. (Note $K^H = \overline{K^T}$, where $\overline{K}$ is the complex conjugate of $K$.)
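
A small self-contained numerical check of this claim, using plain numpy (illustration only, not dolfinx_mpc):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = A + A.conj().T                        # Hermitian system matrix
K = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))  # complex prolongation

reduced_h = K.conj().T @ A @ K            # K^H A K
reduced_t = K.T @ A @ K                   # K^T A K
print(np.allclose(reduced_h, reduced_h.conj().T))  # True: Hermitian character preserved
print(np.allclose(reduced_t, reduced_t.conj().T))  # False: Hermitian character lost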

This should probably be the default behavior for complex numbers, since destroying the Hermitian character of $A$ is not likely to be advantageous to users. This behavior could, however, be enabled/disabled by the user, e.g. by adding an option to MultiPointConstraint (which would be ignored for real-valued builds):

class MultiPointConstraint():
    ...
    def __init__(self, V: _fem.FunctionSpace, use_hermitian_transpose: bool = True):
        ...

compiling issue against latest dolfinx

I get this error when trying to compile dolfinx_mpc on the latest docker image of dolfinx.
Is there any update required?

root@917b668df87f:~/shared/dolfinx_mpc# ninja -j3 install -C build-dir
ninja: Entering directory `build-dir'
[1/9] Building CXX object CMakeFiles/dolfinx_mpc.dir/SlipConstraint.cpp.o
FAILED: CMakeFiles/dolfinx_mpc.dir/SlipConstraint.cpp.o 
/usr/bin/c++ -DBOOST_ALL_NO_LIB -DBOOST_CHRONO_DYN_LINK -DBOOST_TIMER_DYN_LINK -DDOLFINX_MPC_VERSION=\"0.3.1.0\" -DDOLFINX_VERSION=\"0.3.1.0\" -DHAS_ADIOS2 -DHAS_PTSCOTCH -DHAS_SLEPC -Ddolfinx_mpc_EXPORTS -I/root/shared/dolfinx_mpc/cpp -I/root/shared/dolfinx_mpc/cpp/dolfinx_mpc -isystem /usr/local/lib/python3.9/dist-packages/ffcx/codegeneration -isystem /usr/local/petsc/linux-gnu-real-32/include -isystem /usr/local/petsc/include -isystem /usr/local/dolfinx-real/include -isystem /usr/local/slepc/linux-gnu-real-32/include -isystem /usr/local/slepc/include -O3 -DNDEBUG -fPIC -std=c++17 -MD -MT CMakeFiles/dolfinx_mpc.dir/SlipConstraint.cpp.o -MF CMakeFiles/dolfinx_mpc.dir/SlipConstraint.cpp.o.d -o CMakeFiles/dolfinx_mpc.dir/SlipConstraint.cpp.o -c /root/shared/dolfinx_mpc/cpp/SlipConstraint.cpp
/root/shared/dolfinx_mpc/cpp/SlipConstraint.cpp: In function 'dolfinx_mpc::mpc_data dolfinx_mpc::create_slip_condition(std::vector<std::shared_ptr<dolfinx::fem::FunctionSpace> >, dolfinx::mesh::MeshTags<int>, int32_t, std::shared_ptr<dolfinx::fem::Function<double> >, tcb::span<const int>, std::vector<std::shared_ptr<const dolfinx::fem::DirichletBC<double> > >)':
/root/shared/dolfinx_mpc/cpp/SlipConstraint.cpp:31:25: error: 'using element_type = const class dolfinx::mesh::Mesh' {aka 'const class dolfinx::mesh::Mesh'} has no member named 'comm'
   31 |   MPI_Comm comm = mesh->comm();
      |                         ^~~~
[2/9] Building CXX object CMakeFiles/dolfinx_mpc.dir/utils.cpp.o
FAI

Another point: I think dokken92/dolfinx_mpc:0.3.0 should be dokken92/dolfinx_mpc:v0.3.0.

Compiling v0.4.1: Trouble with hdf5

I set up a virtual environment via conda and installed dolfinx 0.4.1 there. With that environment activated, I was trying to compile dolfinx_mpc (the tagged commit "v0.4.1"). I'm on Ubuntu 21.04 (I know, I should update...).

At first, cmake complained about not being able to find "FindHDF5.cmake". I had to manually add the system cmake installation's module path again: set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "/usr/share/cmake-3.18/Modules/"). I realize this may be an issue caused by conda. Is that the case? Are there any other workarounds?

Anyway, that problem is solved now, but I think the next problem is that some component needs the static hdf5 library (libhdf5_hl.a), while conda install h5py only installs the dynamic library (libhdf5_hl.so; I've confirmed this file exists). Below are the error messages. Did I understand the issue correctly? If yes, I don't think there's much the maintainers can do; it's a problem caused by using conda, and I probably need to compile a virtualenv-friendly hdf5/h5py...

[...]
-- Adding boost_filesystem dependencies: headers
-- Adding /usr/lib/python3/dist-packages/basix to Basix search hints
x86_64-conda-linux-gnu-cc: error: .../condaenvs/fenicsenv/lib/libhdf5_hl.a: No such file or directory
x86_64-conda-linux-gnu-cc: error: .../condaenvs/fenicsenv/lib/libhdf5.a: No such file or directory
-- HDF5 C compiler wrapper is unable to compile a minimal HDF5 program.
CMake Warning at /usr/share/cmake-3.18/Modules/FindHDF5.cmake:726 (message):
  HDF5 found for language C is not parallel but previously found language is
  parallel.
Call Stack (most recent call first):
  /usr/share/cmake-3.18/Modules/CMakeFindDependencyMacro.cmake:47 (find_package)
  .../condaenvs/fenicsenv/lib/cmake/dolfinx/DOLFINXConfig.cmake:76 (find_dependency)
  CMakeLists.txt:93 (find_package)


-- HDF5_DIR: HDF5_DIR-NOTFOUND
-- HDF5_DEFINITIONS: 
-- HDF5_INCLUDE_DIRS: .../condaenvs/fenicsenv/include
-- HDF5_LIBRARIES: .../condaenvs/fenicsenv/lib/libhdf5.so
-- HDF5_HL_LIBRARIES: 
-- HDF5_C_DEFINITIONS: 
-- HDF5_C_INCLUDE_DIR: .../condaenvs/fenicsenv/include
-- HDF5_C_INCLUDE_DIRS: .../condaenvs/fenicsenv/include
-- HDF5_C_LIBRARY: 
-- HDF5_C_LIBRARIES: .../condaenvs/fenicsenv/lib/libhdf5.so
-- HDF5_C_HL_LIBRARY: 
-- HDF5_C_HL_LIBRARIES: 
-- Found MPI_C: .../condaenvs/fenicsenv/lib/libmpi.so (found suitable version "4.0", minimum required is "3") 
-- Found MPI_CXX: .../condaenvs/fenicsenv/lib/libmpicxx.so (found suitable version "4.0", minimum required is "3") 
-- Found MPI: TRUE (found suitable version "4.0", minimum required is "3")  
-- The following features have been enabled:

 * BUILD_SHARED_LIBS, Build DOLFINX_MPC with shared libraries.
 * CMAKE_INSTALL_RPATH_USE_LINK_PATH, Add paths to linker search and installed rpath.

-- The following OPTIONAL packages have been found:

 * Python3

-- The following REQUIRED packages have been found:

 * PythonInterp (required version >= 3)
 * boost_chrono (required version == 1.74.0)
 * boost_timer (required version == 1.74.0)
 * boost_headers (required version == 1.74.0)
 * boost_filesystem (required version == 1.74.0)
 * DOLFINX (required version >= 0.4.1), New generation Dynamic Object-oriented Library for - FINite element computation, <https://github.com/FEniCS/dolfinx>
   Main dependency of library
 * Basix (required version >= 0.4.1)
 * MPI (required version >= 3)
 * PkgConfig
 * PETSc, Portable, Extensible Toolkit for Scientific Computation (PETSc), <https://www.mcs.anl.gov/petsc/>
   PETSc linear algebra backend

CMake Warning at CMakeLists.txt:142 (find_package): ### !!! <--- I added some lines, it's your line 137!
  By not providing "Findhdf5.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "hdf5", but
  CMake did not find one.

  Could not find a package configuration file provided by "hdf5" with any of
  the following names:

    hdf5Config.cmake
    hdf5-config.cmake

  Add the installation prefix of "hdf5" to CMAKE_PREFIX_PATH or set
  "hdf5_DIR" to a directory containing one of the above files.  If "hdf5"
  provides a separate development package or SDK, be sure it has been
  installed.


-- Configuring done
CMake Error at CMakeLists.txt:144 (add_library):
  Target "dolfinx_mpc" links to target "hdf5::hdf5" but the target was not
  found.  Perhaps a find_package() call is missing for an IMPORTED target, or
  an ALIAS target is missing?


CMake Error at CMakeLists.txt:144 (add_library):
  Target "dolfinx_mpc" links to target "hdf5::hdf5" but the target was not
  found.  Perhaps a find_package() call is missing for an IMPORTED target, or
  an ALIAS target is missing?


-- Generating done
CMake Generate step failed.  Build files cannot be regenerated correctly.

Nodes belonging to multiple periodic boundary conditions are improperly handled

Hi Jørgen,

I came across this issue while investigating regression in some of my models.

Issue: Problems with multiple periodic boundary conditions may not be accurately discretized if there are nodes common to several periodic boundaries.

Reproduction steps: Run the attached demo_periodic_gep_modified.py in dokken92/dolfinx_mpc:v0.4.1 and

  • compare the computed eigenvalues to the exact ones
  • compare the computed eigenvectors (eigenvectors_with_dolfinx_mpc.vtu) to the ones obtained with fenics (eigenvectors_with_fenics.xdmf).

Background:
demo_periodic_gep_modified.py computes the eigenvalues of the Laplacian on the unit square with two periodic boundary conditions: the left edge is mapped to the right edge and the bottom edge is mapped to the top edge. The four corner nodes (0,0), (0,1), (1,0), and (1,1) belong to two different periodic boundary conditions.

Comparing the fenics and dolfinx_mpc results shows that dolfinx_mpc is accurate only for eigenfunctions that are null at all of the corner nodes, while fenics has no such issue. This can be seen on the first eigenfunction u_00, which should be constant, as well as on u_04 and u_08.

Observations and suggestions:

While investigating this I have made two observations:

  • Running demo_periodic_gep_modified.py in an older version of dolfinx_mpc [1] leads to the same problem, but milder.
  • All my models with two periodic boundary conditions regressed between [1] and v0.4.1. All of them have spurious behavior at corner nodes. Most of them have a different sparsity pattern in v0.4.1.

[1] dolfinx 2019.2.9.99 / dolfinx_mpc 4c9b04c.

Here are some uneducated suggestions:

  • The logic is correct but there is a loose tolerance somewhere that manifests itself more noticeably in newer versions of dolfinx/dolfinx_mpc.
  • There is an issue in the logic that can overlook nodes belonging to multiple periodic boundary conditions.
  • create_periodic_constraint_topological() is not designed to be called twice, and an alternative strategy should be used to deal with multiple periodic boundary conditions.
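
For what it's worth, here is a hedged sketch of the masking pattern that appears in another issue on this page: exclude the shared corner dofs from each edge-to-edge map (by mapping them to np.nan) and add a dedicated corner-to-corner constraint afterwards. The function names and the unit-square offsets are illustrative:

import numpy as np

def right_to_left_map(x):
    out_x = x.copy()
    out_x[0] = x[0] - 1.0
    # do not constrain dofs that also lie on the other slave boundary (y = 1)
    out_x[0][np.isclose(x[1], 1.0)] = np.nan
    return out_x

def corner_map(x):
    out_x = x.copy()
    out_x[0] = x[0] - 1.0
    out_x[1] = x[1] - 1.0
    # only the corner (1, 1) is mapped; everything else is masked out
    not_corner = ~np.logical_and(np.isclose(x[0], 1.0), np.isclose(x[1], 1.0))
    out_x[0][not_corner] = np.nan
    out_x[1][not_corner] = np.nan
    return out_x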

Attachment: issue-double-pbc.zip
- demo_periodic_gep_modified.py: modification of the demo demo_periodic_gep.py available in dokken92/dolfinx_mpc:v0.4.1 to use two periodic boundary conditions instead of one.
- eigenvectors_with_dolfinx_mpc.vtu: eigenfunctions computed by running demo_periodic_gep_modified.py in dokken92/dolfinx_mpc:v0.4.1.
- eigenvectors_with_fenics.xdmf: eigenfunctions computed using fenics on the same problem.

Investigate issue with following example

import dolfinx
import dolfinx_mpc
from dolfinx.fem import FunctionSpace, Function, Constant, assemble_scalar, form
from dolfinx.mesh import locate_entities_boundary, meshtags

from ufl import grad, div, inner, Measure, TrialFunctions, TestFunctions, FiniteElement, VectorElement
import ufl
from petsc4py import PETSc
import numpy as np
from mpi4py import MPI
# MPI initialization
comm = MPI.COMM_WORLD
rank = MPI.COMM_WORLD.rank

#############################################
#              Mesh generation              #
#############################################
domain = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, 50, 50, dolfinx.mesh.CellType.quadrilateral)
gdim = 2
fdim = 1
##  cells for different subdomains
def cells_1(x):
    return np.logical_or(np.logical_or(x[0] > 0.6, x[0] < 0.4), np.logical_or(x[1] > 0.6, x[1] < 0.4))
def cells_2(x):
    return np.logical_and(np.logical_and(x[0] <= 0.6, x[1] <= 0.6), np.logical_and(x[0] >= 0.4, x[1] >= 0.4))

# locate entities
cells_1_dom = dolfinx.mesh.locate_entities(domain, domain.topology.dim, cells_1)
cells_2_dom = dolfinx.mesh.locate_entities(domain, domain.topology.dim, cells_2)

## create cell tags
marked_cells = np.hstack([cells_1_dom, cells_2_dom])
marked_values = np.hstack([np.full_like(cells_1_dom, 1), np.full_like(cells_2_dom, 2)])
sorted_cells = np.argsort(marked_cells)
ct = dolfinx.mesh.meshtags(domain, gdim, marked_cells[sorted_cells], marked_values[sorted_cells])

## create facet tag at interface
def create_interface(facet_marker: np.typing.NDArray[np.int32],
                     tag: int,
                     fa: np.typing.NDArray[np.int32],
                     fo: np.typing.NDArray[np.int32],
                     cell_marker: np.typing.NDArray[np.int32]):
    for f in range(len(facet_marker)):
        cells = fa[fo[f]:fo[f + 1]]
        c_val = cell_marker[cells]
        if len(np.unique(c_val)) == 2:
            facet_marker[f] = tag


# Cold start of function
create_interface(np.empty(0, dtype=np.int32), 0, np.empty(
    0, dtype=np.int32), np.empty(0, dtype=np.int32), np.empty(0, dtype=np.int32))

cell_tag_1 = np.unique(ct.values)[0]
cell_tag_2 = np.unique(ct.values)[1]

domain.topology.create_connectivity(domain.topology.dim - 1, domain.topology.dim)
f_to_c = domain.topology.connectivity(domain.topology.dim - 1, domain.topology.dim)
fa = f_to_c.array
fo = f_to_c.offsets

facet_map = domain.topology.index_map(domain.topology.dim - 1)
num_facets = facet_map.size_local + facet_map.num_ghosts
facet_marker = np.zeros(num_facets, dtype=np.int32)
cell_map = domain.topology.index_map(domain.topology.dim)
num_cells = cell_map.size_local + cell_map.num_ghosts
cell_marker = np.full(num_cells, cell_tag_2, dtype=np.int32)
cell_marker[ct.indices] = ct.values

interface_marker = 9
create_interface(facet_marker, interface_marker, fa, fo, cell_marker)
ft = meshtags(domain, domain.topology.dim - 1,
                     np.arange(num_facets), facet_marker)

dx = Measure('dx', domain=domain, subdomain_data=ct)
#############################################
#             Mesh generation finished      #
#############################################



# elements and function space
###########################

# Taylor-Hood elements and Mixed Functionspace
P1 = FiniteElement("CG", domain.ufl_cell(), 1)
P2 = VectorElement("CG", domain.ufl_cell(), 2)
TH = ufl.MixedElement([P2, P1])
V = FunctionSpace(domain, TH)

# define function, trial function and test function
(ve, pr) = TrialFunctions(V)
(dve, dpr) = TestFunctions(V)

# define material parameter
S = FunctionSpace(domain, ("DG", 0))
vis = Function(S)
fluid_visc = ct.find(1)
vis.x.array[fluid_visc] = np.full_like(fluid_visc, 1, dtype=PETSc.ScalarType)
inclusion_visc = ct.find(2)
vis.x.array[inclusion_visc]  = np.full_like(inclusion_visc, 1e+5, dtype=PETSc.ScalarType)
vis.x.scatter_forward()

# boundary conditions
#####################
## set all fluid velocities at the interface to zero (no slip boundary condition)
# create collapsed functionspace
V_coll0, _ = V.sub(0).collapse()
v_bc = Function(V_coll0)
v_bc.x.array[:] = 0.0
v_bc.x.scatter_forward()
# dofs of interface and dirichlet bcs
dofs_v_interf = dolfinx.fem.locate_dofs_topological((V.sub(0), V_coll0)  , ft.dim, ft.find(9))  # interface tag: 9
bcs_v_interf = dolfinx.fem.dirichletbc(v_bc, dofs_v_interf, V.sub(0))

## set pressures at the corners to zero
def boundary_node(x):
    return np.logical_or(
        np.logical_and(
            np.isclose(x[0], 0.),
            np.logical_or(
                np.isclose(x[1], .0),
                np.isclose(x[1], 1.0),
            )
        ),
        np.logical_and(
            np.isclose(x[0], 1.0),
            np.logical_or(
                np.isclose(x[1], 0.),
                np.isclose(x[1], 1.0),
            )
        ),
    )

boundary_facets = locate_entities_boundary(domain, 0, boundary_node) # it is zero dimension for a point
dofs_corners = dolfinx.fem.locate_dofs_topological(V.sub(1), 0, boundary_facets)  # it is zero dimension for a point
bc_corners = dolfinx.fem.dirichletbc(PETSc.ScalarType(0.0), dofs_corners, V.sub(1))

bcs = [bcs_v_interf, bc_corners]

## periodic boundary conditions
pbc_directions = []
pbc_slave_tags = []
pbc_is_slave = []
pbc_is_master = []
pbc_meshtags = []
pbc_slave_to_master_maps = []

mpc = dolfinx_mpc.MultiPointConstraint(V)

def generate_pbc_slave_to_master_map(i):
    def pbc_slave_to_master_map(x):
        out_x = x.copy()
        out_x[i] = x[i] - 1.0
        return out_x
    return pbc_slave_to_master_map

def generate_pbc_is_slave(i):
    return lambda x: np.isclose(x[i], 1.0)

def generate_pbc_is_master(i):
    return lambda x: np.isclose(x[i], 0.0)

for i in range(gdim):

    pbc_directions.append(i)
    pbc_slave_tags.append(i + 2)
    pbc_is_slave.append(generate_pbc_is_slave(i))
    pbc_is_master.append(generate_pbc_is_master(i))
    pbc_slave_to_master_maps.append(generate_pbc_slave_to_master_map(i))

    facets = locate_entities_boundary(domain, fdim, pbc_is_slave[-1])
    arg_sort = np.argsort(facets)
    pbc_meshtags.append(meshtags(domain,
                                 fdim,
                                 facets[arg_sort],
                                 np.full(len(facets), pbc_slave_tags[-1], dtype=np.int32)))

N_pbc = len(pbc_directions)
for i in range(N_pbc):
    if N_pbc > 1:
        def pbc_slave_to_master_map(x):
            out_x = pbc_slave_to_master_maps[i](x)
            idx = pbc_is_slave[(i + 1) % N_pbc](x)
            out_x[pbc_directions[i]][idx] = np.nan
            return out_x
    else:
        pbc_slave_to_master_map = pbc_slave_to_master_maps[i]

    for ij in range(gdim):
        mpc.create_periodic_constraint_topological(V.sub(0).sub(ij), pbc_meshtags[i],
                                                   pbc_slave_tags[i],
                                                   pbc_slave_to_master_map,
                                                   bcs)

    mpc.create_periodic_constraint_topological(V.sub(1), pbc_meshtags[i],
                                                   pbc_slave_tags[i],
                                                   pbc_slave_to_master_map,
                                                   bcs)

if len(pbc_directions) > 1:
    def pbc_slave_to_master_map(x):
        out_x = x.copy()
        out_x[0] = x[0] - domain.geometry.x[:, 0].max()
        out_x[1] = x[1] - domain.geometry.x[:, 1].max()
        idx = np.logical_and(pbc_is_slave[0](x), pbc_is_slave[1](x))
        out_x[0][~idx] = np.nan
        out_x[1][~idx] = np.nan
        return out_x

    for ij in range(gdim):
        mpc.create_periodic_constraint_topological(V.sub(0).sub(ij), pbc_meshtags[1],
                                                   pbc_slave_tags[1],
                                                   pbc_slave_to_master_map,
                                                   bcs)

    mpc.create_periodic_constraint_topological(V.sub(1), pbc_meshtags[1],
                                               pbc_slave_tags[1],
                                               pbc_slave_to_master_map,
                                               bcs)

mpc.finalize()

# Variational problem
#####################
f_constant = Constant(domain, PETSc.ScalarType((1,0)))

## weak formulation
a_f = form(vis * inner(grad(ve), grad(dve)) * dx + pr * div(dve) * dx + div(ve) * dpr * dx)
L_f = form(inner(f_constant, dve) * dx)

# solution function
func_sol = Function(V)

## iterative solver
A = dolfinx_mpc.assemble_matrix(a_f, mpc, bcs=bcs)
A.assemble()
b = dolfinx_mpc.assemble_vector(L_f, mpc)
dolfinx_mpc.apply_lifting(b, [a_f], [bcs], mpc)
b.ghostUpdate(addv=PETSc.InsertMode.ADD, mode=PETSc.ScatterMode.REVERSE)
dolfinx.fem.petsc.set_bc(b, bcs)
b.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD)

# Preconditioner matrix form
Pf = dolfinx.fem.form(ufl.inner(ufl.grad(ve), ufl.grad(dve)) * dx + pr * dpr * dx)
P = dolfinx_mpc.assemble_matrix(Pf, mpc, bcs=bcs)
P.assemble()

# solver settings
ksp = PETSc.KSP().create(domain.comm)
ksp.setOperators(A, P)
ksp.setType(PETSc.KSP.Type.MINRES)
pc = ksp.getPC()
pc.setType("hypre")
pc.setHYPREType("boomeramg")
pc.setUp()

# solving and backsubstitution to obtain the values at the slave degrees of freedom
ksp.solve(b, func_sol.vector)
func_sol.vector.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD)
mpc.backsubstitution(func_sol)

# collapse mixed functionspace
ve = func_sol.sub(0).collapse()
pr = func_sol.sub(1).collapse()

# average velocity
v_avg = domain.comm.allreduce(assemble_scalar(form(ve[0] * dx)), op=MPI.SUM)
if rank == 0:
    print("Average velocity in x direction:", v_avg)

# write results to file
with dolfinx.io.VTXWriter(MPI.COMM_WORLD, "results_2D_square/pressure.bp", [pr], engine="BP4") as vtx:
    vtx.write(0.0)
with dolfinx.io.VTXWriter(MPI.COMM_WORLD, "results_2D_square/velocity.bp", [ve], engine="BP4") as vtx:
    vtx.write(0.0)

Context: https://fenicsproject.discourse.group/t/mpc-backsubstitution-in-parallel/12401/9

Imposition of periodic constraints with geometrical dof location

Currently, create_periodic_constraint uses locate_dofs_topological. It might be interesting to provide similar functionality with locate_dofs_geometrical. This would make it possible to support periodic boundary conditions on DG elements.

I can propose a PR for this. @jorgensd, would you be OK with changing the API to two methods, create_periodic_constraint_geometrical and create_periodic_constraint_topological, instead of create_periodic_constraint? Or would you prefer a single create_periodic_constraint with a method="topological"/"geometrical" optional keyword? (A sketch of the geometrical call pattern follows.)
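
For reference, a hedged sketch of what the geometrical variant's call pattern could look like; the signature is assumed from usage seen in other issues on this page, not from an actual PR, and the dolfinx_mpc calls are commented out because they need a mesh and function space:

import numpy as np

def is_slave(x):
    # geometrical indicator for slave dofs, evaluated on dof coordinates
    return np.isclose(x[0], 1.0)

def slave_to_master(x):
    # relation mapping slave dof coordinates to their master coordinates
    out = x.copy()
    out[0] = x[0] - 1.0
    return out

# mpc = dolfinx_mpc.MultiPointConstraint(V)
# mpc.create_periodic_constraint_geometrical(V, is_slave, slave_to_master, bcs)
# mpc.finalize()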

Unable to run demo_periodic3d_topological.py out-of-the-box

Hi, I was trying to use dolfinx to implement periodic boundary conditions and came across the dolfinx_mpc package.
However, when I try to run the demo_periodic3d_topological.py example, I run into numerous errors: for example, default_scalar_type cannot be imported from dolfinx, and the default parameters passed to create_periodic_constraint_topological() don't seem to be correct. I suspect the demo is out of date; is there a more up-to-date demo that I can take a look at?
For reference, I'm running Python 3.9.9, dolfinx 0.6.0, and dolfinx_mpc 0.6.1.

fails to build with GCC-13

Hi Jørgen, Debian reports that mpc 0.5.0 fails to build with GCC 13:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1037623

The reported errors show up in the build-time tests:

[...]
FAILED test_linear_problem.py::test_pipeline[True] - TypeError: create_period...
FAILED test_linear_problem.py::test_pipeline[False] - TypeError: create_perio...
FAILED test_matrix_assembly.py::test_mpc_assembly[celltype0-1-master_point0-C++]
FAILED test_matrix_assembly.py::test_mpc_assembly[celltype0-1-master_point1-C++]
FAILED test_matrix_assembly.py::test_mpc_assembly[celltype0-2-master_point0-C++]
FAILED test_matrix_assembly.py::test_mpc_assembly[celltype0-2-master_point1-C++]
FAILED test_matrix_assembly.py::test_mpc_assembly[celltype0-3-master_point0-C++]
FAILED test_matrix_assembly.py::test_mpc_assembly[celltype0-3-master_point1-C++]
FAILED test_matrix_assembly.py::test_mpc_assembly[celltype1-1-master_point0-C++]
FAILED test_matrix_assembly.py::test_mpc_assembly[celltype1-1-master_point1-C++]
FAILED test_matrix_assembly.py::test_mpc_assembly[celltype1-2-master_point0-C++]
FAILED test_matrix_assembly.py::test_mpc_assembly[celltype1-2-master_point1-C++]
FAILED test_matrix_assembly.py::test_mpc_assembly[celltype1-3-master_point0-C++]
FAILED test_matrix_assembly.py::test_mpc_assembly[celltype1-3-master_point1-C++]
FAILED test_matrix_assembly.py::test_slave_on_same_cell[celltype0-1-master_point0-C++]
FAILED test_matrix_assembly.py::test_slave_on_same_cell[celltype0-1-master_point1-C++]
FAILED test_matrix_assembly.py::test_slave_on_same_cell[celltype0-2-master_point0-C++]
FAILED test_matrix_assembly.py::test_slave_on_same_cell[celltype0-2-master_point1-C++]
FAILED test_matrix_assembly.py::test_slave_on_same_cell[celltype0-3-master_point0-C++]
FAILED test_matrix_assembly.py::test_slave_on_same_cell[celltype0-3-master_point1-C++]
FAILED test_matrix_assembly.py::test_slave_on_same_cell[celltype1-1-master_point0-C++]
FAILED test_matrix_assembly.py::test_slave_on_same_cell[celltype1-1-master_point1-C++]
FAILED test_matrix_assembly.py::test_slave_on_same_cell[celltype1-2-master_point0-C++]
FAILED test_matrix_assembly.py::test_slave_on_same_cell[celltype1-2-master_point1-C++]
FAILED test_matrix_assembly.py::test_slave_on_same_cell[celltype1-3-master_point0-C++]
FAILED test_matrix_assembly.py::test_slave_on_same_cell[celltype1-3-master_point1-C++]
FAILED test_mpc_pipeline.py::test_pipeline[master_point0-C++] - TypeError: __...
FAILED test_mpc_pipeline.py::test_pipeline[master_point1-C++] - TypeError: __...
FAILED test_mpc_pipeline.py::test_linearproblem[master_point0] - TypeError: _...
FAILED test_mpc_pipeline.py::test_linearproblem[master_point1] - TypeError: _...
FAILED test_nonlinear_assembly.py::test_nonlinear_possion[1] - TypeError: cre...
FAILED test_nonlinear_assembly.py::test_nonlinear_possion[2] - TypeError: cre...
FAILED test_nonlinear_assembly.py::test_nonlinear_possion[3] - TypeError: cre...
FAILED test_nonlinear_assembly.py::test_homogenize[1-FiniteElement] - TypeErr...
FAILED test_nonlinear_assembly.py::test_homogenize[1-VectorElement] - TypeErr...
FAILED test_nonlinear_assembly.py::test_homogenize[2-FiniteElement] - TypeErr...
FAILED test_nonlinear_assembly.py::test_homogenize[2-VectorElement] - TypeErr...
FAILED test_nonlinear_assembly.py::test_homogenize[3-FiniteElement] - TypeErr...
FAILED test_nonlinear_assembly.py::test_homogenize[3-VectorElement] - TypeErr...
FAILED test_rectangular_assembly.py::test_mixed_element[ghost_mode0-cell_type0]
FAILED test_rectangular_assembly.py::test_mixed_element[ghost_mode0-cell_type1]
FAILED test_rectangular_assembly.py::test_mixed_element[ghost_mode1-cell_type0]
FAILED test_rectangular_assembly.py::test_mixed_element[ghost_mode1-cell_type1]
FAILED test_surface_integral.py::test_surface_integrals[C++] - TypeError: __i...
FAILED test_surface_integral.py::test_surface_integral_dependency[C++] - Type...
FAILED test_vector_assembly.py::test_mpc_assembly[celltype0-1-master_point0-C++]
FAILED test_vector_assembly.py::test_mpc_assembly[celltype0-1-master_point1-C++]
FAILED test_vector_assembly.py::test_mpc_assembly[celltype0-2-master_point0-C++]
FAILED test_vector_assembly.py::test_mpc_assembly[celltype0-2-master_point1-C++]
FAILED test_vector_assembly.py::test_mpc_assembly[celltype0-3-master_point0-C++]
FAILED test_vector_assembly.py::test_mpc_assembly[celltype0-3-master_point1-C++]
FAILED test_vector_assembly.py::test_mpc_assembly[celltype1-1-master_point0-C++]
FAILED test_vector_assembly.py::test_mpc_assembly[celltype1-1-master_point1-C++]
FAILED test_vector_assembly.py::test_mpc_assembly[celltype1-2-master_point0-C++]
FAILED test_vector_assembly.py::test_mpc_assembly[celltype1-2-master_point1-C++]
FAILED test_vector_assembly.py::test_mpc_assembly[celltype1-3-master_point0-C++]
FAILED test_vector_assembly.py::test_mpc_assembly[celltype1-3-master_point1-C++]
FAILED test_vector_poisson.py::test_vector_possion[0-0-2-4-C++] - TypeError: ...
FAILED test_vector_poisson.py::test_vector_possion[0-0-3-4-C++] - TypeError: ...
FAILED test_vector_poisson.py::test_vector_possion[0-1-2-4-C++] - TypeError: ...
FAILED test_vector_poisson.py::test_vector_possion[0-1-3-4-C++] - TypeError: ...
FAILED test_vector_poisson.py::test_vector_possion[1-0-2-4-C++] - TypeError: ...
FAILED test_vector_poisson.py::test_vector_possion[1-0-3-4-C++] - TypeError: ...
FAILED test_vector_poisson.py::test_vector_possion[1-1-2-4-C++] - TypeError: ...
FAILED test_vector_poisson.py::test_vector_possion[1-1-3-4-C++] - TypeError: ...
============================= 69 failed in 12.85s ==============================

Full log at http://qa-logs.debian.net/2023/05/22/logs/dolfinx-mpc_0.5.0.post0-2_unstable_gccexp.log

mpc version 0.5.0 and 0.6.0 are both building fine with gcc 12, see https://tests.reproducible-builds.org/debian/rb-pkg/unstable/amd64/dolfinx-mpc.html

The error report is for mpc 0.5.0; I haven't tested git head, nor have I tested mpc 0.6.0 with gcc 13.

Error with Compiling on Mac M1

Hi, I'm trying to compile MPC on a Mac M1 with dolfinx 0.7.0.
I'm running into a few compilation issues.

The first issue is the following:
Screenshot 2023-10-30 at 11 02 31 PM

I then changed the import headers from:
Screenshot 2023-10-30 at 11 00 28 PM

To:
Screenshot 2023-10-30 at 11 03 24 PM

But then I get a new issue:
Screenshot 2023-10-30 at 11 04 24 PM

I'm thinking that, because of the above issues, the problem is just with the exact imports/files I have. I was wondering if there are any suggestions on how to fix this.

Best,
Jorge

Allow any diagonal value for slave and dirichlet DoF

Thank you for this excellent code, which makes dolfinx even more appealing!

I open this issue to share a patch that will be useful to anyone interested in solving generalized eigenvalue problems using matrices assembled with dolfinx_mpc. I can create a pull request if you think this is worth it.

Currently, dolfinx_mpc uses a diagonal value equal to 1 for slave dofs associated with multi-point constraints as well as dofs associated with Dirichlet boundary conditions. This can be a limitation when solving a generalized eigenvalue problem

A * x = lambda * B * x

where both A and B are assembled using dolfinx_mpc. Indeed, the current implementation will lead to spurious eigenfunctions (localized at slave or Dirichlet dofs) associated with lambda = 1.

When assembling with dolfinx, the diagonal value is an input argument, which makes it possible to use different values for A and B. This is useful since the spurious eigenvalues then sit at lambda = diagval_A/diagval_B, i.e. they can be placed far from the region of interest (see the sketch below).
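
As an illustration, a hypothetical call pattern after such a patch; the diagval keyword is an assumption here (mirroring the diagonal argument of dolfinx's own assemblers), so the assembly calls are shown as comments, followed by the runnable arithmetic:

# Hypothetical usage (assumed `diagval` keyword, not confirmed API):
# A = dolfinx_mpc.assemble_matrix(a_form, mpc, bcs=bcs, diagval=1.0)
# B = dolfinx_mpc.assemble_matrix(b_form, mpc, bcs=bcs, diagval=1e-12)

diagval_A = 1.0
diagval_B = 1e-12
print(diagval_A / diagval_B)  # 1e12: spurious eigenvalues pushed far from the spectrum of interest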

Only trivial changes are required to enable a user-defined diagonal value in dolfinx_mpc. I have been successfully using the attached patch, which modifies:
/cpp/assembly.cpp::_assemble_matrix
/python/dolfinx_mpc/assemble_matrix.py::assemble_matrix_cpp
/python/dolfinx_mpc/assemble_matrix.py::assemble_matrix
/python/dolfinx_mpc/assemble_matrix.py::add_diagonal.

Beware that this patch relies on a somewhat older version of dolfinx_mpc, specifically:
commit 5885575
Merge: dfe48c5 4c9b04c
Author: Jørgen Dokken [email protected]
Date: Fri Mar 12 14:28:52 2021 +0000

diagval.patch.txt

Possibility of periodic boundary condition for cylindrical polar coordinates

I am trying to implement a periodic boundary condition for the 3D elastic equation. The only difference compared to the existing example in the demos is that the boundaries are periodic in cylindrical polar coordinates (e.g. the slave nodes are at the same coordinates as the master nodes, but rotated through an angle). This leads to the following kind of relationship between the displacements of the periodic nodes, assuming the second node is the first node rotated about the x-axis:

$$ \begin{align} dy_2 &= a\cdot dy_1 + b\cdot dz_1 \\ dz_2 &= c\cdot dy_1 + d\cdot dz_1 \end{align} $$

The coefficients (a, b, c, d) would be fixed and set by the rotation matrix.
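
For concreteness, a small sketch of those coefficients for a rotation about the x-axis (the angle is an illustrative value):

import numpy as np

theta = np.deg2rad(30.0)        # illustrative sector angle
a, b = np.cos(theta), -np.sin(theta)
c, d = np.sin(theta),  np.cos(theta)
# [dy_2, dz_2] = [[a, b], [c, d]] @ [dy_1, dz_1]
R = np.array([[a, b],
              [c, d]])
print(np.allclose(R @ R.T, np.eye(2)))  # True: rotation matrix is orthonormal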

I was wondering if anyone has any ideas on how to implement this with dolfinx_mpc? Any help would be greatly appreciated!

Elasticity example with tying across both the boundaries

Hi @jorgensd ,

First off, thanks for the tool. I am trying to implement a simple linear elasticity example on a unit square with linear constraints tying dofs across both sets of edges (namely left-right and top-bottom), but somehow I don't see the expected result. I modified the template provided in your elasticity example by:

  • Prescribing a DirichletBC $u_2 = 0$ on the bottom-right node (say node P)
  • Prescribing a DirichletBC $u_1 = 0$ on the top-left node (say node Q)
  • Prescribing a DirichletBC $u = 0$ at the origin (0,0) (say node o)
  • Prescribing a DirichletBC $u_1 = 0.25$ on the bottom-right node (to stretch it uniaxially along the x axis).

I then constrain the dofs on the left (L) - right (R) edges such that:

u_R - u_L = u_P - u_o for all nodes except the one at the top-right corner [1,1],

and

u_T - u_B = u_Q - u_o for all nodes except the one at the top-right corner [1,1].

I expect to see something like the following (independently done using multi-point constraints in Abaqus). Note that the top corner isn't properly constrained, but I should expect the field to be reasonable elsewhere at least.

trialUnitSquare

Attached is a short snippet of the code (I avoid reposting the entire example for brevity):

V = VectorFunctionSpace(mesh, ("CG", 1)) 

Vx = V.sub(0).collapse()
Vy = V.sub(1).collapse()

# Define Dirichlet BC

zero = Function(V) 
zero_x = Function(Vx)
stretch_x = Function(Vx)
zero_y = Function(Vy)

with zero.vector.localForm() as zero_local:
    zero_local.set(0.0)
    
with zero_x.vector.localForm() as zero_local_x:
    zero_local_x.set(0.0)

with stretch_x.vector.localForm() as stretch_local_x:
    stretch_local_x.set(0.25)

with zero_y.vector.localForm() as zero_local_y:
    zero_local_y.set(0.0)

originDofs = locate_dofs_geometrical(V, lambda x: np.logical_and( np.isclose(x[0], 1.e-16),
                                                                  np.isclose(x[1], 1.e-16)))

botRightDofs_x = locate_dofs_geometrical((V.sub(0), Vx), lambda x: np.logical_and(np.isclose(x[0], 1.),
                                                                                  np.isclose(x[1], 0.)))

botRightDofs_y = locate_dofs_geometrical((V.sub(1), Vy), lambda x: np.logical_and(np.isclose(x[0], 1.),
                                                                                  np.isclose(x[1], 0.)))

topLeftDofs_x = locate_dofs_geometrical((V.sub(0), Vx), lambda x: np.logical_and(np.isclose(x[0], 0.),
                                                                                 np.isclose(x[1], 1.)))

topLeftDofs_y = locate_dofs_geometrical((V.sub(1), Vy), lambda x: np.logical_and(np.isclose(x[0], 0.),
                                                                                 np.isclose(x[1], 1.)))
# Applying Dirichlet BC's

bcOrigin =  DirichletBC(zero, originDofs)
rightStretch =  DirichletBC(stretch_x, botRightDofs_x, V.sub(0))
rightFixed =  DirichletBC(zero_y, botRightDofs_y, V.sub(1))
topFixed =  DirichletBC(zero_x, topLeftDofs_x, V.sub(0))

bcs = [bcOrigin, rightStretch, rightFixed, topFixed]

def slave_locater_x(i, N):
    return lambda x: dof_at(x, [1., float(i/N)])

def master_locater_x(i, N):
    return lambda x: dof_at(x, [0., float(i/N)])

def slave_locater_y(i, N):
    return lambda x: dof_at(x, [float(i/N), 1.])

def master_locater_y(i, N):
    return lambda x: dof_at(x, [float(i/N), 0.])

s_m_c = {}
# s_m_cY = {}  # to constrain the top node

# It is here that I believe the problem could be (although I am confident that I am constraining the dofs properly)
for i in range(1, N):
    s_m_c[slave_locater_x(i, N)] = {master_locater_x(i, N): 1.0,  lambda x: dof_at(x, [1., 0.]):1.0, lambda x: dof_at(x, [0., 0.]):-1.0}
    s_m_c[slave_locater_y(i, N)] = {master_locater_y(i, N): 1.0,  lambda x: dof_at(x, [0., 1.]):1.0, lambda x: dof_at(x, [0., 0.]):-1.0}

(slaves, masters, coeffs, offsets) = slave_master_structure(V, s_m_c)

mpc = MultiPointConstraint(V._cpp_object, slaves,
                            masters, coeffs, offsets)

However, I am not getting the displacements I expect. Any ideas what could be going wrong?
This is the entire file, in case you want to run it:

dolfinx_mpc_elasticity_periodic.zip

These are the results I get from the above modified code:

u1

u2

Coefficients in contact 3D suddenly infinity

CI started failing May 17th:

  coverage run --rcfile=.coveragerc python/demos/demo_contact_3D.py --gmsh --theta 0 --timing

which has coefficients equal to inf.

Setting theta to be small (0.000001) resolves it.

Efficient mpc creation with same dofs, different coefficient

Motivation

Another feature request motivated by wave propagation: to compute a band structure for periodic media, we prescribe the wavevector $k$ through the Bloch boundary conditions, which are applied with a multipoint constraint, and then solve the eigenvalue problem to obtain the eigenfrequencies. To produce the entire band structure, we iterate through a range of $k$ and repeatedly construct and solve the eigenvalue problem. Since the multipoint constraint is repeatedly applied to the same set of slave and master dofs, it is desirable to re-use the information about those dofs and only update the coefficients of the mpc at each step of the iteration.

Requested improvement

Make MultiPointConstraint.create_periodic_constraint_topological (and create_periodic_constraint_geometrical, etc.) return the mpc_data object that is generated by the C++ layer. Then it can be saved and the mpc_data.coeffs array updated at each iteration, e.g.

mpc = MultiPointConstraint(V)
mpc_data = mpc.create_periodic_constraint_topological( ... )
for k in np.linspace(0, 1, N):
    mpc_data.coeffs[:] = k
    mpc = MultiPointConstraint(V)
    mpc.add_constraint_from_mpc_data(mpc_data)
    mpc.finalize()
    # Solve eigenvalue problem

Note: this is already possible by directly accessing the C++ wrapper layer, i.e. with

mpc_data = dolfinx_mpc.cpp.mpc.create_periodic_constraint_topological( ... )

but the suggested improvement would result in cleaner code and eliminate the need for users to directly call the C++ layer.

Missing <algorithm> in assemble_utils.cpp

Hi,

When running ninja -j3 install -C build-dir, the following error was raised:

dolfinx_mpc/cpp/assemble_utils.cpp:22:24: error: ‘find’ is not a member of ‘std’

I fixed that by adding #include <algorithm> in that file. Am I the only one who needed to add this include? Maybe it would be worth adding it for everyone.

Best,

Lucas

create_periodic_constraint_topological() produces unusual sparsity patterns

Hi Jørgen,

This may not be a bug, but I think it is worth reporting since the behavior is significantly different from that of fenics.

Issue: create_periodic_constraint_topological() produces sparsity patterns that suggest that too many nodes are influenced by the periodic boundaries.

Reproduction steps: Run the attached demo_periodic_gep.py in dokken92/dolfinx_mpc:v0.4.1 and compare A_obtained_with_dolfinx_mpc.png to A_obtained_with_fenics.png.

Background: Even though the periodic boundary condition has only 49 slave nodes, the sparsity pattern of A shows that all the nodes are affected by the periodic boundary condition. By contrast, fenics does produce the expected pattern (A_obtained_with_fenics.png).

Attachment: issue-sparsity-pattern.zip
- demo_periodic_gep.py: identical to the demo available in dokken92/dolfinx_mpc:v0.4.1 with added lines to export the sparsity pattern.
- A_obtained_with_dolfinx_mpc.png: sparsity pattern of A obtained by running demo_periodic_gep.py in dokken92/dolfinx_mpc:v0.4.1.
- A_obtained_with_fenics.png: sparsity pattern of A obtained with fenics for the same problem.

Periodic condition on mixed space

Dolfin-x commit FEniCS/dolfinx@d05ae91

dolfinx_mpc commit 25543be

I don't know how to implement a periodic boundary condition on a mixed function space. I am attaching the code that solves the Cahn-Hilliard equation on a 2D computational domain.

https://gist.github.com/Alchem334/5bfab723af41ff32130b049fb5b0c51c

An attempt to implement the boundary condition as in the examples "demo_periodic3d_topological" and "demo_periodic_geometrical" leads to a segmentation fault in the following function:

def _create_periodic_condition(V: dolfinx.FunctionSpace, slave_blocks: np.ndarray,
                               relation: Callable[[np.ndarray], np.ndarray],
                               bcs: List[dolfinx.DirichletBC], scale: PETSc.ScalarType):

The segmentation fault occurs on the line

x = V.tabulate_dof_coordinates()

apparently because tabulate_dof_coordinates does not work with a mixed space. Is there some way to get around this problem?

Error in the periodic boundary condition in the parallel case

Greetings to everyone!

I found a bug in the periodic boundary condition. The error occurs only in the parallel case.

Here is a simple test case on which the error occurs: https://gist.github.com/Alchem334/3c871adca89573d411c1da56988d84e2

The transient Laplace equation is solved on a 2D square grid. Non-uniform initial data are given: in the lower-left corner the function is equal to 1, and in the rest of the region it is equal to 0. On the faces of the square x = 0 and x = 1, a periodic boundary condition is set.

On one core, a plausible result is obtained.

1_core

On three cores, it is noticeable that the periodic condition is not satisfied.

3_cores

This error occurs in the following MPI call to MPI_Neighbor_alltoallv:

dolfinx_mpc/cpp/utils.h

Lines 235 to 240 in f7809a6

MPI_Neighbor_alltoallv(
    num_masters_per_slave.data(), num_remote_slaves.data(),
    remote_slave_disp_out.data(), dolfinx::MPI::mpi_type<std::int32_t>(),
    recv_num_masters_per_slave.data(), num_incoming_slaves.data(),
    slave_disp_in.data(), dolfinx::MPI::mpi_type<std::int32_t>(),
    master_to_slave);

The vectors num_remote_slaves, remote_slave_disp_out, num_incoming_slaves, and slave_disp_in can be empty on some cores in the parallel case, which causes the MPI call to fail silently with error 13 (MPI_ERR_UNKNOWN).

Here is a workaround:

  std::vector<int> num_remote_slaves_new = num_remote_slaves;
  std::vector<int> remote_slave_disp_out_new = remote_slave_disp_out;
  std::vector<int> slave_disp_in_new = slave_disp_in;
  std::vector<int> num_incoming_slaves_new = num_incoming_slaves;

  if(num_remote_slaves_new.size()==0) num_remote_slaves_new.resize(1);
  if(remote_slave_disp_out_new.size()==0) remote_slave_disp_out_new.resize(1);
  if(slave_disp_in_new.size()==0) slave_disp_in_new.resize(1);
  if(num_incoming_slaves_new.size()==0) num_incoming_slaves_new.resize(1);

  MPI_Neighbor_alltoallv(
      num_masters_per_slave.data(), num_remote_slaves_new.data(),
      remote_slave_disp_out_new.data(), dolfinx::MPI::mpi_type<std::int32_t>(),
      recv_num_masters_per_slave.data(), num_incoming_slaves_new.data(),
      slave_disp_in_new.data(), dolfinx::MPI::mpi_type<std::int32_t>(),
      master_to_slave);

I'm not sure whether this kind of error occurs with other versions of OpenMPI; the bug was found on OpenMPI 3.1.3.

dolfinx_mpc on colab

Hello,
How do I install dolfinx_mpc on Google Colab? All the other packages I install using the following:

!wget "https://fem-on-colab.github.io/releases/fenicsx-install-real.sh" -O "/tmp/fenicsx-install.sh" && bash "/tmp/fenicsx-install.sh"

!wget "https://fem-on-colab.github.io/releases/gmsh-install.sh" -O "/tmp/gmsh-install.sh" && bash "/tmp/gmsh-install.sh"

!apt-get install -qq xvfb

!pip install pyvista panel -q

Thank you!

ImportError when using complex numbers

Hi again,
I'd like to use dolfinx_mpc with complex numbers. However,

docker run --rm -ti  ghcr.io/jorgensd/dolfinx_mpc:v0.6.1.post0
source /usr/local/bin/dolfinx-complex-mode

python3
import dolfinx_mpc

raises

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/dist-packages/dolfinx_mpc/__init__.py", line 10, in <module>
    import dolfinx_mpc.cpp
ImportError: /usr/local/lib/libdolfinx_mpc.so.0.5: undefined symbol: _ZN7dolfinx2la5petsc18create_vector_wrapERKNS_6common8IndexMapEiSt4spanIKdLm18446744073709551615EE

When I manually compile dolfinx_mpc atop a dolfinx/dolfinx:latest docker image the error is almost identical (only the version number of libdolfinx_mpc.so changes):

ImportError: /usr/local/lib/libdolfinx_mpc.so.0.6: undefined symbol: _ZN7dolfinx2la5petsc18create_vector_wrapERKNS_6common8IndexMapEiSt4spanIKdLm18446744073709551615EE

Issue in parallel for periodic constraints

I get what I think is a bug in MultiPointConstraint when running in parallel.

The example below gives correct results on one processor but wrong results with more than one processor.

Attached is an image from running with mpiexec -n 2, and the following error appears with mpiexec -n 3:

(dolfinx) inclusion_2d_elastic|main⚡ ⇒ mpiexec -n 3 python elasticity_test.py
Traceback (most recent call last):
  File "/Users/maurini/Documents/codes/FFT-FEM/fem/inclusion_2d_elastic/elasticity_test.py", line 72, in <module>
Traceback (most recent call last):
  File "/Users/maurini/Documents/codes/FFT-FEM/fem/inclusion_2d_elastic/elasticity_test.py", line 72, in <module>
    mpc.create_periodic_constraint_geometrical(V.sub(1), bottom_boundary, mapping_1, [], -1)
  File "/Users/maurini/dev/dolfinx/lib/python3.9/site-packages/dolfinx_mpc/multipointconstraint.py", line 158, in create_periodic_constraint_geometrical
    mpc.create_periodic_constraint_geometrical(V.sub(1), bottom_boundary, mapping_1, [], -1)
  File "/Users/maurini/dev/dolfinx/lib/python3.9/site-packages/dolfinx_mpc/multipointconstraint.py", line 158, in create_periodic_constraint_geometrical
    mpc_data = dolfinx_mpc.cpp.mpc.create_periodic_constraint_geometrical(
ValueError: vector
    mpc_data = dolfinx_mpc.cpp.mpc.create_periodic_constraint_geometrical(
ValueError: vector

You can find a script and a mesh to reproduce the problem in the attached Archive.zip. I do not get the error with a structured unit square mesh. The script is reproduced below; it imposes periodic BCs on a square.

import numpy as np
from pathlib import Path
from mpi4py import MPI
from dolfinx.common import Timer, list_timings, TimingType
from dolfinx.io import XDMFFile
from dolfinx.fem import (
    FunctionSpace,
    Function,
    Constant,
)
import ufl
import dolfinx.geometry
import dolfinx_mpc
from dolfinx_mpc.utils import rigid_motions_nullspace

outdir = "output"
comm = MPI.COMM_WORLD
if comm.rank == 0:
    Path(outdir).mkdir(parents=True, exist_ok=True)

with XDMFFile(comm, 'meshes/inclusion-1.20-1.20-0.6-0.6-0.030.xdmf', "r") as file:
    mesh = file.read_mesh()
    mesh.topology.create_connectivity(1, 2)
    mesh.topology.create_connectivity(0, 1)

Lx, Ly = 1.2, 1.2
element = ufl.VectorElement("Lagrange", mesh.ufl_cell(), 1)
V = FunctionSpace(mesh, element)
u = dolfinx.fem.Function(V)
u_test, u_trial = ufl.TrialFunction(V), ufl.TestFunction(V)
f = Constant(mesh, (0.0, 0.0))
eps_0 = Constant(mesh, -0.1)
Id = ufl.Identity(2)

def eps(u):
    return ufl.sym(ufl.grad(u))

def sigma(eps):
    lmbda = 0.0
    mu = 1.0 
    return 2.0 * mu * ufl.sym(eps) + lmbda * ufl.tr(eps) * Id


a_form = ufl.inner(sigma(eps(u_trial)), eps(u_test)) * ufl.dx
L_form = ufl.dot(f, u_test) * ufl.dx + ufl.inner(sigma(eps_0 * Id), eps(u_test)) * ufl.dx

def left_boundary(x):
    tol = 1e-5
    return np.logical_and(np.isclose(x[0], -Lx), np.logical_and(x[1] > - Ly - tol, x[1] < Ly - tol))

def bottom_boundary(x):
    tol = 1e-5
    return np.logical_and(np.isclose(x[1], -Ly), np.logical_and(x[0] > - Lx + tol, x[0] < Lx - tol))

def mapping_0(x):
    vals = np.zeros(x.shape)
    vals[0] = x[0] + 2 * Lx 
    vals[1] = x[1]
    vals[2] = x[2]
    return vals

def mapping_1(x):
    vals = np.zeros(x.shape)
    vals[0] = x[0]
    vals[1] = x[1] + 2 * Ly
    vals[2] = x[2]
    return vals

with Timer("~MPC: Initialize MPC"):
    mpc = dolfinx_mpc.MultiPointConstraint(V)
    mpc.create_periodic_constraint_geometrical(V.sub(0), left_boundary, mapping_0, [], -1)
    mpc.create_periodic_constraint_geometrical(V.sub(1), bottom_boundary, mapping_1, [], -1)
    mpc.finalize()

# Solve Linear problem
problem = dolfinx_mpc.LinearProblem(a_form, L_form, mpc, bcs=[], u=u)
null_space = rigid_motions_nullspace(mpc.function_space)
problem.A.setNullSpace(null_space)
problem.A.setNearNullSpace(null_space)
solver = problem.solver
with Timer("~MPC: Solve problem"):
    problem.solve()

with Timer("~Interpolate stress"):
    T = dolfinx.fem.TensorFunctionSpace(mesh, ("DG", 0), shape=(2,2))
    sigma_expr = dolfinx.fem.Expression(sigma(eps(u) - eps_0 * Id), T.element.interpolation_points)
    sigma_sol = Function(T,name="stress")
    sigma_sol.interpolate(sigma_expr)

u.name = "u"
with XDMFFile(MPI.COMM_WORLD, f"{outdir}/u_per.xdmf", "w") as file:
    file.write_mesh(mesh)
    file.write_function(u)
    file.write_function(sigma_sol)

list_timings(MPI.COMM_WORLD, [TimingType.wall])

fig
Archive.zip

read_from_msh is broken because gmsh does not return ValueError's

Trying to read a .msh file results in an error, because gmsh does not throw a ValueError here. Instead, it prints
"Error : Gmsh has not been initialized" without raising an exception. This breaks the program and makes it really hard for me to import a mesh made with gmsh into FEniCSx.

Additionally, this line should say _gmsh.model.getCurrent().

I think this change in gmsh is also relevant https://gitlab.onelab.info/gmsh/gmsh/-/commit/e79c83ac4289fd45019f137fa2a50fb409d98763.

Complex `scale` implicitly converted to real

Motivation

The propagation of waves in periodic media is governed by the eigenvalue problem
$$L u = \omega^2 u$$
where $L$ is a linear operator that is periodic with a period defined by the unit cell of the periodic medium. The eigenfunctions satisfy the Bloch form:
$$u(x) = \hat{u}(x) e^{-i k \cdot x}$$
where $k$ is the wavevector and $\hat{u}(x)$ is periodic on the unit cell (i.e. $\hat{u}(x) = \hat{u}(x + a)$, where $a$ is a basis vector of the unit cell). Note that the eigenfunction $u(x)$ is Floquet-periodic on the unit cell, i.e. the solution on one boundary is related to the solution on the opposite boundary by $u(x) = u(x + a) e^{i k \cdot a}$. A common solution technique uses the finite element method to seek solutions $u(x)$ subject to Floquet periodic boundary conditions. For example, one might consider the unit square subject to the boundary conditions
$$u(x=1) = u(x=0) e^{-i k_1}$$
where $k_1$ is the x-component of the wavenumber. This can be represented as a multi-point constraint with slave dofs located at $x=1$, master dofs located at $x=0$, and a (complex-valued) scale of $e^{-i k_1}$.
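As a one-line check using the definitions above, this constraint follows directly from the Bloch form and the periodicity of $\hat{u}$:
$$u(1) = \hat{u}(1)\, e^{-i k_1} = \hat{u}(0)\, e^{-i k_1} = u(0)\, e^{-i k_1},$$
since $u(0) = \hat{u}(0)\, e^{0} = \hat{u}(0)$.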

Issue

It appears that complex-valued scales are not fully implemented in dolfinx_mpc. Consider the following MWE, heavily adapted from demo_periodic_gep.py, which attempts to apply a periodic constraint with a complex-valued scale. A warning is issued indicating that an implicit conversion from complex to real takes place, i.e. that the imaginary part of the scale is silently discarded. The error seems to arise at the cpp wrapper layer, since create_periodic_constraint_topological takes a double scale rather than a PetscScalar scale.

WARNING:py.warnings:/usr/local/lib/python3.10/dist-packages/dolfinx_mpc/multipointconstraint.py:110: ComplexWarning: Casting complex values to real discards the imaginary part
  mpc_data = dolfinx_mpc.cpp.mpc.create_periodic_constraint_topological(

It looks like there was an intention to support this, but it may never have been finished: mpc_data stores coefficients using the PETSc scalar type, yet all of the call signatures in PeriodicConstraint accept only reals.

MWE

import dolfinx.fem as fem
import numpy as np
from dolfinx.mesh import create_unit_square, locate_entities_boundary, meshtags
from dolfinx_mpc import MultiPointConstraint
from mpi4py import MPI

N = 1
mesh = create_unit_square(MPI.COMM_WORLD, N, N)
V = fem.FunctionSpace(mesh, ("Lagrange", 1))
fdim = mesh.topology.dim - 1

def pbc_slave_to_master_map(x):
    out_x = x.copy()
    out_x[0] = x[0] - 1
    return out_x

def pbc_is_slave(x):
    return np.isclose(x[0], 1)

def pbc_is_master(x):
    return np.isclose(x[0], 0)

# Parse boundary conditions
bcs = []

pbc_slave_tag = 2
facets = locate_entities_boundary(mesh, fdim, pbc_is_slave)
arg_sort = np.argsort(facets)
pbc_meshtags = meshtags(mesh,
                        fdim,
                        facets[arg_sort],
                        np.full(len(facets), pbc_slave_tag, dtype=np.int32))
pbc_scale = np.exp(1j * np.pi / 5) # arbitrary complex-valued scale

# Create MultiPointConstraint object
mpc = MultiPointConstraint(V)
mpc.create_periodic_constraint_topological(V, pbc_meshtags,
                                           pbc_slave_tag,
                                           pbc_slave_to_master_map,
                                           bcs,
                                           pbc_scale)

Environment

The above was run in a complex-only Docker container that I built from the following recipe:

FROM dolfinx/dev-env:nightly as dolfinx-mpc

ARG PETSC_SCALAR_TYPE=complex
ARG DOLFINX_CMAKE_BUILD_TYPE=RelWithDebInfo

WORKDIR /tmp

# Set env variables
ENV PETSC_ARCH="linux-gnu-${PETSC_SCALAR_TYPE}-32" \
    HDF5_MPI="ON" \
    CC=mpicc \
    HDF5_DIR="/usr/local"

# Install DOLFINx dependencies:
#   - basix
#   - ufl
#   - ffcx
ADD basix /tmp/basix
ADD ufl /tmp/ufl
ADD ffcx /tmp/ffcx
RUN PYBIND11_DIR="$(python3 -c 'import sys, pybind11; sys.stdout.write(pybind11.get_cmake_dir())')" && \
    # pybind11 needs to be manually located first
    cd basix && \
    cmake -G Ninja -DCMAKE_BUILD_TYPE=${DOLFINX_CMAKE_BUILD_TYPE} "-Dpybind11_DIR=${PYBIND11_DIR}" -B build-dir -S .  && \
    cmake --build build-dir && \
    cmake --install build-dir && \
    pip3 install -v -e ./python  && \
    cd ../ufl && \
    pip3 install --no-cache-dir . && \
    cd ../ffcx && \
    pip3 install --no-cache-dir .

# Install dolfinx
ADD dolfinx /tmp/dolfinx
RUN cd dolfinx && \
    mkdir -p build && \
    cd build && \
    cmake -G Ninja -DCMAKE_BUILD_TYPE=${DOLFINX_CMAKE_BUILD_TYPE} ../cpp && \
    ninja install && \
    . /usr/local/lib/dolfinx/dolfinx.conf && \
    cd ../python && \
    pip3 install --no-dependencies --no-cache-dir .

# Install dolfinx_mpc
ADD dolfinx_mpc /tmp/dolfinx_mpc
RUN cd dolfinx_mpc && \
    mkdir -p build && \
    cd build && \
    cmake -G Ninja -DCMAKE_BUILD_TYPE="${DOLFINX_CMAKE_BUILD_TYPE}" ../cpp && \
    ninja install -j4 && \
    cd .. && \
    pip3 install -v --no-deps --upgrade python/.

# Install h5py
#RUN pip3 install --no-binary=h5py h5py meshio

WORKDIR /root

Library versions:

  • basix: 1dd043332314a1633f3bdc32bb15ce31e498d892
  • UFL: fd773880195b39327f652a47a892d89f7ef200cc
  • ffcx: c007abd0753fb7a5a1f6c14ef6ff81b099666043
  • dolfinx: 66f5adfc2d4e4c32e1e50a7ac472337b591ccb28
  • dolfinx_mpc: 1a1520862e5ef2f05c0790a2ea94d2b231d01ff1

AttributeError: 'MultiPointConstraint' object has no attribute 'create_contact_slip_condition'

When I try to run demo_contact_2D.py, I get one line of calculations (like so):
2021-11-08 22:10:37.453 ( 1.221s) [main ] log.cpp:45 INFO| Run theta:1.05, Quad: False, Gmsh False, Res 1.00e-01

followed by this error

Traceback (most recent call last):
  File "demo_contact_2D.py", line 236, in <module>
    demo_stacked_cubes(outfile, theta=theta, gmsh=gmsh,
  File "demo_contact_2D.py", line 99, in demo_stacked_cubes
    mpc.create_contact_slip_condition(mt, 4, 9, nh)
AttributeError: 'MultiPointConstraint' object has no attribute 'create_contact_slip_condition'
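
A quick diagnostic (not from the original report) is to check whether the installed dolfinx_mpc build actually exposes the method the demo expects; a mismatch between the demo script (taken from a newer branch) and an older installed library is the likely cause. This sketch only assumes the standard dolfinx_mpc Python module; the __version__ attribute may not exist on older builds, hence the getattr fallback.

import dolfinx_mpc

# False here means the installed build predates create_contact_slip_condition,
# so the demo script and the installed library are out of sync.
print(hasattr(dolfinx_mpc.MultiPointConstraint, "create_contact_slip_condition"))
print(getattr(dolfinx_mpc, "__version__", "unknown"))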
