There are different flavours (implementations) of how particle sets are administered and stored.
The particle set flavours can be controlled via the generator object within peano4.toolbox.particles.ParticleSet. Consult the documentation there. From the outside, each particle set is a collection of pointers to the real particles. However, the actual internal realisation of the sets can differ.
The signature design of the set flavours is guided by a few simple principles, and it is the same for each flavour. Internally, we have two major realisations:
Both of these inherit from toolbox::particles::ParticleSet. As the exact types per application result from an instantiation of these jinja templates, they cannot be parsed by doxygen. However, you can browse any application code that spits them out for documentation (the doxygen template holds the info), and we summarise their common interface here:
Analogous to a C++ standard container, they offer the following operations:
The operation clear() on a particle set becomes a shallow clear, as the set administers pointers to particles. The indices are destroyed, while the particles themselves remain in memory. If another stack references the same particles, you neither have a leak nor will you induce an invalid memory access.
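To illustrate these shallow-clear semantics, here is a minimal sketch using a plain list of pointers as a stand-in for the generated set; the Particle type and the variable names are hypothetical and not the actual Peano classes:

```cpp
#include <list>

// Simplified stand-in for a generated particle type (hypothetical).
struct Particle {
  double x[2];
};

int main() {
  Particle* p = new Particle{{0.5, 0.5}};

  // Two stacks/vertices may reference the very same particle object.
  std::list<Particle*> vertexA = {p};
  std::list<Particle*> vertexB = {p};

  // Shallow clear: only the pointers held by vertexA disappear.
  vertexA.clear();

  // vertexB still holds a valid pointer, so there is neither a leak
  // (the particle remains reachable) nor an invalid memory access.
  double stillValid = vertexB.front()->x[0];
  (void)stillValid;

  // Actual destruction is a separate, explicit step (cf. deleteParticle() below).
  delete p;
  return 0;
}
```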
The routine deleteParticle() allows you to actually delete a particle (and to remove it from the list). This routine has to be handled with care: if someone else holds a pointer to the underlying particle, that pointer all of a sudden becomes invalid without any further notification.
deleteParticles() is a shortcut to delete all particles associated with a vertex/stack entry.
Routines such as toString() are offered to facilitate logging, tracing, statistics, profiling and assertions.
Further routines control if particles are to be sent/received or held in-between iterations.
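A hedged sketch of what the common interface of such a generated set roughly looks like is given below. The exact signatures are generated per application, so the class name, the argument types and the internal container are assumptions rather than the literal generated code:

```cpp
#include <cstddef>
#include <list>
#include <string>

// Hypothetical per-application particle type produced by the jinja templates.
class ParticleType;

// Sketch of the container-like interface common to all set flavours.
class MyParticleSet /* : public toolbox::particles::ParticleSet */ {
  public:
    // Standard-container-style traversal of the administered pointers.
    std::list<ParticleType*>::iterator begin();
    std::list<ParticleType*>::iterator end();
    std::size_t                        size() const;

    // Shallow clear: drops the pointers, leaves the particles in memory.
    void clear();

    // Actually destroys one particle and removes it from the set.
    void deleteParticle( const std::list<ParticleType*>::iterator& particle );

    // Destroys all particles associated with this vertex/stack entry.
    void deleteParticles();

    // Support for logging, tracing, statistics, profiling and assertions.
    std::string toString() const;

    // Merging of incoming (neighbour) data and duplicate elimination;
    // see the discussion below.
    void mergeWithParticle( ParticleType* incomingParticle );
    void eliminateSpatialDuplicates();

  private:
    std::list<ParticleType*> _particles;
};
```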
Further to that, the routine mergeWithParticle() defines how particles held on different subdomains are merged into each other. We receive particle sets from other trees, as well as from sieve sets. These are sent out after particles have been touched for the last time by their previous "owner". mergeWithParticle() merges an incoming particle set from a neighbour into the local set, i.e. we run through the neighbour set and assess what to do with the particles therein. Basically, we look at each particle's state and then decide whether to enqueue the particle here, too. Furthermore, we do some state switches: if a particle is local on another rank, for example, it has to be virtual here.
The merge works with shallow copies. We assume that all necessary real copying has been done before. See peano4::parallel::SpacetreeSet::exchangeAllHorizontalDataExchangeStacks() for a discussion of deep copying in this context. If we decide not to insert an incoming particle into our local data structure, we have to destroy it, as the containers for boundary data exchange perform deep copies.
I started with an implementation which first inserts all the particles into the vertex and then eliminates those guys which are redundant. This turned out to be slow. It is way faster to find those guys which are to be added and then to check, prior to the insert, if they really should be added or not.
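The snippet below is a minimal sketch of this check-before-insert merge, assuming a hypothetical redundancy predicate, a simplified isLocal flag and a hard-coded tolerance; it is not the actual Peano implementation of mergeWithParticle():

```cpp
#include <cmath>
#include <list>

// Hypothetical particle carrying the state flag discussed above.
struct Particle {
  double x[2];
  bool   isLocal;  // local on the sending tree => has to be virtual here
};

// Hypothetical check: is an equivalent particle already held locally?
bool isRedundant( const std::list<Particle*>& localSet, const Particle* candidate ) {
  for (auto* p: localSet) {
    if ( std::abs(p->x[0]-candidate->x[0]) < 1e-12 and
         std::abs(p->x[1]-candidate->x[1]) < 1e-12 ) {
      return true;
    }
  }
  return false;
}

// Run through the incoming (neighbour) set and decide per particle whether
// to enqueue it locally. Particles we do not insert have to be destroyed,
// as the boundary exchange containers hand us deep copies.
void mergeWithNeighbourSet( std::list<Particle*>& localSet, std::list<Particle*>& neighbourSet ) {
  for (auto* incoming: neighbourSet) {
    if ( isRedundant(localSet, incoming) ) {
      delete incoming;                 // not inserted, so we free it
    }
    else {
      if (incoming->isLocal) {
        incoming->isLocal = false;     // local on the other rank => virtual here
      }
      localSet.push_back(incoming);    // shallow insert of the pointer
    }
  }
  neighbourSet.clear();                // shallow clear of the incoming container
}
```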
The particle type is associated with toolbox::particles::ParticleSet. The latter holds routines for managing multiscale transitions along the lines of Sorting: Preserving the mesh-particle data consistency.
The routine eliminateSpatialDuplicates() has to be offered by each vertex set. It runs over the set and removes one of any two particles that reside at the same location.
Such an elimination of duplicates can be motivated through the merger, where it can happen that multiple ranks tell us about a particle (some of them hold virtual copies, some hold local ones), as the association of particles with trees is not unique. After all, we work with floating point arithmetic. Those redundant particles are the bad guys: keeping them will introduce wrong physics (doubled forces, for example) in many cases, or even lead to crashes (division by zero).
This elimination means that I actually delete the particle held by particleSet, i.e. its data is freed.
Please consult the discussion on the merger above for some more explanation of where and why we need this elimination routine.
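A rough sketch of what such a duplicate elimination could look like over a list of particle pointers follows; the tolerance, the coincide() helper and the simplified Particle type are assumptions, not the actual Peano implementation:

```cpp
#include <cmath>
#include <list>

struct Particle {
  double x[2];
};

// Hypothetical spatial comparison with a floating-point tolerance, as the
// association of particles with trees is not unique.
bool coincide( const Particle* a, const Particle* b, double tolerance = 1e-12 ) {
  return std::abs(a->x[0]-b->x[0]) < tolerance
     and std::abs(a->x[1]-b->x[1]) < tolerance;
}

// Run over the set and, for every pair residing at the same location, delete
// one of the two particles. Memory is actually freed here.
void eliminateSpatialDuplicates( std::list<Particle*>& particleSet ) {
  for (auto p = particleSet.begin(); p != particleSet.end(); p++) {
    auto q = p;
    q++;
    while (q != particleSet.end()) {
      if ( coincide(*p, *q) ) {
        delete *q;                    // free the redundant particle ...
        q = particleSet.erase(q);     // ... and drop its pointer from the set
      }
      else {
        q++;
      }
    }
  }
}
```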
The operation clone() is a deep clone. We get data from somewhere else and create our own version with which we can work from hereon. Besides the particles, we also copy (aka clone) the debug x and h values. The clone itself is mechanical. The actual logic for keeping the different cloned stacks consistent is buried within merge().
The copy constructor is a shallow copy constructor.
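To make the distinction explicit, here is a hedged sketch of the deep clone() versus the shallow copy constructor; the class name and the placeholder members for the debug x and h values are assumptions, not the generated code:

```cpp
#include <list>

// Simplified stand-in for the generated particle type.
struct Particle {
  double x[2];
};

class ParticleSetSketch {
  public:
    ParticleSetSketch() = default;

    // Shallow copy constructor: copies the pointers (and the debug data),
    // i.e. both sets reference the very same particle objects afterwards.
    ParticleSetSketch( const ParticleSetSketch& other ) = default;

    // Deep clone: create our own copies of the particles plus the debug
    // x and h values, so we can work independently from hereon.
    void clone( const ParticleSetSketch& other ) {
      _particles.clear();              // shallow drop of any previous pointers
      for (auto* p: other._particles) {
        _particles.push_back( new Particle(*p) );
      }
      for (int d=0; d<2; d++) {
        _debugX[d] = other._debugX[d];
        _debugH[d] = other._debugH[d];
      }
    }

  private:
    std::list<Particle*> _particles;
    double               _debugX[2];  // placeholder for the debug x values
    double               _debugH[2];  // placeholder for the debug h values
};
```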