Peano
Boundary conditions

Boundary conditions are often tricky to implement and are solely up to the user.

In ExaHyPE 2, Dirichlet and Neumann boundary conditions on Cartesian meshes are close to trivial. More sophisticated boundary conditions require slightly more work. Here are some remarks on how to implement them.

The boundary condition semantics are solely determined by the user's boundary condition callback. Therefore, it is possible to use different boundary conditions in different domain areas.

![ExaHyPE 2's Finite Volume scheme organises its mesh into patches. Here, we use a 5$\times$5 patch (orange/red). Prior to the update of a patch, ExaHyPE 2 calls boundaryConditions for each of the volumes adjacent to the boundary (darker red). The routine allows you to alter the values of the halo surrounding the patch (blue). The outer (blue) cells are subsequently used to determine a flow through the boundary into the domain.](documentation/exahype2/boundary.pdf){#figure:exahype:boundary-conditions width="20%"}

Boundary conditions in general are realised via their own callback called boundaryConditions. When you run the Python toolkit, you get an empty implementation of this routine that you can fill in. The routine is called per outer volume (Figure 1.1) or per degree of freedom in the Discontinuous Galerkin schemes.

Boundary conditions on the unit box for Finite Volume schemes

Dirichlet boundary conditions

Dirichlet boundary conditions $$Q(x) | _{\partial \Omega } = f(x,t)$$ are close to trivial to implement. You set the outer values of the halo layer as follows:

```cpp
void ...::MySolver::boundaryConditions( ... ) {
  Qoutside[0] = ...;
  Qoutside[1] = ...;
  ...
}
```

Obviously, the values on the right-hand side can depend on time and spatial position. We can impose Dirichlet conditions in two different ways:

  1. You simply set the outer volumes (blue in Figure 1.1) to $f(x,t)$.
  2. You impose the average of the outer (blue) value and the interior value on the boundary, i.e. $Q | _{\partial \Omega } = \frac{1}{2} (Q_{\text{outside}} + Q_{\text{inside}}) = f(x,t)$, which yields $Q_{\text{outside}} = 2f(x,t)-Q_{\text{inside}}$.

The latter formula is inspired by Finite Differences and might give slightly more accurate values. With a sole averaging Riemann solver, it directly sets the flux. If you use something like Rusanov, you might tweak $Q_{\text{outside}}$ further such that the eigenvalue term along the boundary face has the correct (imposed) value, too.
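
The second variant could look as follows. This is only a sketch: the array names Qinside/Qoutside, the number of unknowns, and the helper dirichletValue() evaluating $f(x,t)$ are assumptions made for illustration, not the actual callback signature.

```cpp
#include <array>

constexpr int NumberOfUnknowns = 2;  // assumption: two unknowns for illustration

// Hypothetical helper returning the imposed boundary value f(x,t) per unknown.
double dirichletValue( int unknown, const std::array<double,2>& x, double t ) {
  return 0.0;  // replace with your actual f(x,t)
}

// Logic you would place inside boundaryConditions(): variant 2 from above.
void setDirichletGhostValues(
  const double*               Qinside,
  double*                     Qoutside,
  const std::array<double,2>& x,
  double                      t
) {
  for (int i=0; i<NumberOfUnknowns; i++) {
    // Impose 0.5*(Qoutside+Qinside) = f(x,t), i.e. Qoutside = 2 f(x,t) - Qinside.
    Qoutside[i] = 2.0 * dirichletValue(i,x,t) - Qinside[i];
  }
}
```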

Neumann boundary conditions

As we can read the volume values inside the domain prior to a time step and modify those outside of the domain, it is straightforward to implement homogeneous Neumann conditions. The snippet below implements homogeneous Neumann conditions

$$\left( \nabla Q(x), n \right) | _{\partial \Omega } = 0$$

```cpp
void ...::MySolver::boundaryConditions( ... ) {
  Qoutside[0] = Qinside[0];
  Qoutside[1] = Qinside[1];
  ...
}
```

but obviously you can use a simple Finite Differences scheme to impose non-homogeneous conditions as well.
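
For a non-homogeneous condition $\left( \nabla Q(x), n \right) | _{\partial \Omega } = g(x,t)$, a first-order one-sided difference over the volume width does the job. The sketch below is illustrative only: the volume width h, the helper prescribedNormalDerivative(), and the unknown count are assumptions rather than the actual callback interface.

```cpp
constexpr int NumberOfUnknowns = 2;  // assumption: two unknowns for illustration

// Hypothetical helper returning the prescribed normal derivative g(x,t).
double prescribedNormalDerivative( int unknown ) {
  return 0.0;  // replace with your actual g(x,t)
}

// Logic you would place inside boundaryConditions(): a one-sided difference
// (Qoutside - Qinside)/h = g with h the volume width along the outward normal.
void setNeumannGhostValues( const double* Qinside, double* Qoutside, double h ) {
  for (int i=0; i<NumberOfUnknowns; i++) {
    Qoutside[i] = Qinside[i] + h * prescribedNormalDerivative(i);
  }
}
```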

Inflow/outflow conditions

Inflow or outflow boundary conditions are conceptually close to Neumann conditions. Often, however, it is reasonable to supplement them with some limiter: if the flow is close to zero and points inwards, a modified outflow condition could manually set it (limit it) to zero.
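
One possible realisation of such a limited outflow condition is sketched below. It assumes a flow-type unknown layout (density followed by the momentum components) and assumes the callback provides the outward unit normal of the face; both are assumptions made purely for illustration.

```cpp
#include <array>

constexpr int Dimensions = 2;

// Logic you would place inside boundaryConditions(): copy the inner values and
// then remove any momentum component that would transport material back in.
void setLimitedOutflowGhostValues(
  const double*                        Qinside,
  double*                              Qoutside,
  const std::array<double,Dimensions>& outwardNormal
) {
  // Start from a plain (homogeneous Neumann-type) copy of the inner values.
  for (int i=0; i<1+Dimensions; i++) {
    Qoutside[i] = Qinside[i];
  }

  // Normal momentum component; negative values mean flow into the domain.
  double normalMomentum = 0.0;
  for (int d=0; d<Dimensions; d++) {
    normalMomentum += Qoutside[1+d] * outwardNormal[d];
  }

  // Limit: eliminate the inflow contribution so nothing re-enters the domain.
  if (normalMomentum < 0.0) {
    for (int d=0; d<Dimensions; d++) {
      Qoutside[1+d] -= normalMomentum * outwardNormal[d];
    }
  }
}
```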

While this approach works in most cases, it struggles to handle information flow that is not orthogonal to the Cartesian faces. It assumes that all flow is axis-aligned: you can only modify the face-connected neighbour volume in the ghost region outside of the domain; you cannot alter diagonal volumes. Unless you want to use distorted, i.e. boundary-aligned, meshes as discussed below, there is no way around this. Yet, increasing the domain size usually helps in such a case.

More complex boundaries

Extrapolating boundary data

It is more complicated to realise boundary conditions that extrapolate data to the outside. Such boundary conditions read as $$Q(x) | _{\partial \Omega } = \lim _{x \to \partial \Omega} Bnd\bigl(Q(x,t),\nabla f(Q(x,t))\bigr) \quad \text{or} \quad \left( \nabla Q(x), n \right) | _{\partial \Omega } = \lim _{x \to \partial \Omega} Bnd\bigl(Q(x,t),\nabla f(Q(x,t))\bigr).$$ It is obviously trivial to extrapolate the solution value within the domain to the boundary volumes (dark red to blue value in Figure 1.1), but once you need the derivatives at the boundaries, you have to invest more work. After all, the boundary routine only provides access to the boundary data, but not to any derivatives.

Users have successfully implemented non-homogeneous Neumann boundary conditions where they extrapolate the derivatives onto the boundary and use those to inform the actual value. This requires some work, however:

  • Introduce a new material parameter that stores the directional derivative. If you need the non-homogeneous Neumann conditions all around the domain, you will need DIMENSIONS material parameters per unknown.
  • Compute the derivatives per volume after each time step in a Finite Differences sense. Such a postprocessing step can be added to the solver through the Python toolkit.
  • With this information, we can finally set the boundary conditions: the routine boundaryConditions is given the data at the boundary. Due to our modifications, these data are augmented by additional "material" parameters[^2] which store the derivatives from the current time step. We can thus realise boundary conditions which use the (boundary-extrapolated) derivative besides the actual value; see the sketch after this list.
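
A sketch of this last step is given below. It assumes that, per unknown, DIMENSIONS derivative entries are stored behind the genuine unknowns; the layout, indices and names are illustrative assumptions, not the actual ExaHyPE 2 data layout.

```cpp
constexpr int NumberOfUnknowns = 2;  // assumption: two unknowns for illustration
constexpr int Dimensions       = 2;

// Logic you would place inside boundaryConditions(): use the derivatives that
// the postprocessing step stored as material parameters in Qinside.
void setExtrapolatedGhostValues(
  const double* Qinside,
  double*       Qoutside,
  int           normalDirection,  // axis the boundary face is orthogonal to
  double        h                 // volume width used for the extrapolation
) {
  for (int i=0; i<NumberOfUnknowns; i++) {
    // Material parameter holding dQ_i/dx_{normalDirection} of the current time step.
    const double derivative =
      Qinside[ NumberOfUnknowns + i*Dimensions + normalDirection ];
    // Extrapolate the inner value over one volume width onto the ghost volume.
    Qoutside[i] = Qinside[i] + h * derivative;
  }
}
```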

Boundary-aligned non-cubic geometries

ExaHyPE 2's mesh is topologically Cartesian. That does not mean that you have to work in a Cartesian world. You can distort, squeeze and rotate the mesh locally as long as its topology remains adaptive Cartesian. Codes like the ExaSeis simulation package introduce material parameters per degree of freedom to encode mesh distortions. This allows them to represent a boundary exactly, i.e. to adapt the Cartesian mesh to the real domain boundaries. Obviously, the actual boundary condition implementations then have to be adapted accordingly, too.

Distinguishing different boundary types

Some solvers require different boundary conditions in different areas. We distinguish two different paradigms to realise this:

  1. compute the boundary type on-the-fly;
  2. map the boundary type onto a marker.

Each boundary condition callback is supplied with spatial data (aka the geometry) and the time step. These two ingredients allow you to compute the boundary type on the fly: whenever the boundary data routine is invoked, you read out the coordinates of the boundary face and then decide which boundary condition to apply. After all, the realisation of the boundary within the callback is totally up to you, i.e. you can have multiple ifs or a case distinction via a switch.
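
A sketch of this first paradigm on the unit square is given below. The unknown layout and the face position argument are illustrative assumptions; the actual point is the plain if/else on the face coordinates inside the callback.

```cpp
#include <array>

constexpr int NumberOfUnknowns = 2;  // assumption: two unknowns for illustration

// Logic you would place inside boundaryConditions().
void dispatchOnGeometry(
  const double*               Qinside,
  double*                     Qoutside,
  const std::array<double,2>& faceCentre
) {
  constexpr double tolerance = 1e-8;
  if ( faceCentre[0] < tolerance ) {
    // Left face of the unit square: impose fixed inflow data.
    Qoutside[0] = 1.0;
    Qoutside[1] = 0.1;
  }
  else {
    // Everywhere else: simple outflow, i.e. copy the inner values.
    for (int i=0; i<NumberOfUnknowns; i++) {
      Qoutside[i] = Qinside[i];
    }
  }
}
```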

If you want to avoid these case distinctions, e.g. because the geometry checks are too expensive, you can introduce a boundary marker. ExaHyPE 2 does not have genuine markers, but it does have material parameters. Add an additional material parameter in your solver setup, and initialise this parameter with an integer constant which represents your boundary type. When the actual boundary routine is invoked, the inner degree of freedom will contain this material parameter, i.e. you can read it out and make your boundary realisation depend upon its value.
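
A sketch of this second paradigm follows. It assumes the marker is stored as the last entry of the inner data; the enum values and the unknown layout are again purely illustrative.

```cpp
constexpr int NumberOfUnknowns = 2;                   // assumption for illustration
constexpr int MarkerIndex      = NumberOfUnknowns;    // marker stored behind the unknowns

enum BoundaryType { Wall = 0, Inflow = 1, Outflow = 2 };

// Logic you would place inside boundaryConditions(): read the marker that the
// initialisation wrote into the material parameter and branch on it.
void dispatchOnMarker( const double* Qinside, double* Qoutside ) {
  const int boundaryType = static_cast<int>( Qinside[MarkerIndex] );
  switch (boundaryType) {
    case Wall:
      // Reflecting wall: copy the density, flip the momentum (layout is an assumption).
      Qoutside[0] =  Qinside[0];
      Qoutside[1] = -Qinside[1];
      break;
    case Inflow:
      Qoutside[0] = 1.0;
      Qoutside[1] = 0.1;
      break;
    case Outflow:
    default:
      for (int i=0; i<NumberOfUnknowns; i++) {
        Qoutside[i] = Qinside[i];
      }
      break;
  }
}
```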

A complete swap-out of the boundary conditions

So far, all boundary conditions have relied on a callback principle: the user supplies an implementation of the (point-wise) boundary data behaviour, and there is a generic kernel which runs over the boundary and calls this implementation. This means that ExaHyPE 2 has to anticipate what kind of information the user might need when they implement their boundary data.

The current interface for boundary conditions is more or less tailored towards stationary boundary data. As soon as you need some time-dependent behaviour on the boundary, such as ODEs (e.g. Sommerfeld boundary conditions), you might be well-advised to swap out the whole boundary implementation.

Every ExaHyPE 2 solver has an attribute _action_set_handle_boundary. This is the action set that we invoke to handle the boundary. It has an attribute TemplateHandleBoundary_KernelCalls, which is a string in jinja2 format. It is used to actually generate the kernel call. By default, it invokes the applyBoundaryConditions function of your solver. You can redirect this call by overwriting this template string from your Python script.

The leading underscore of the attribute name emphasises that I usually do not expect users to alter the action set. This is for experienced users. Most of them will create their own subclass of the solver, and in this case an attribute with a leading underscore makes sense.

ExaHyPE 2 comes along with a limited set of pre-manufactured boundary conditions. What is available in which context depends strongly on the solver type used. Consult the documentation of the HandleBoundary.h file of your solver type.