SIMD vectorization has lately become a key challenge in high-performance computing. However, hand-written explicitly vectorized code often poses a threat to the software's sustainability. In this publication, we solve this sustainability and performance portability issue by enriching the simulation framework dune-pdelab with a code generation approach. The approach is based on the well-known domain-specific language UFL but combines it with loopy, a more powerful intermediate representation for the computational kernel. Given this flexible tool, we present and implement a new class of vectorization strategies for the assembly of Discontinuous Galerkin methods on hexahedral meshes exploiting the finite element's tensor product structure. The performance-optimal variant from this class is chosen by the code generator through an autotuning approach. The implementation is done within the open-source PDE software framework Dune and the discretization module dune-pdelab. The strength of the proposed approach is illustrated with performance measurements for DG schemes for a scalar diffusion-reaction equation and the Stokes equation. In our measurements, we utilize both the AVX2 and the AVX512 instruction sets, achieving 40% to 60% of the machine's theoretical peak performance for one matrix-free application of the operator.

CCS Concepts: • Computing methodologies → Massively parallel and high-performance simulations; • Software and its engineering → Source code generation; Software performance; Software usability; Domain specific languages; • Mathematics of computing → Partial differential equations
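The tensor-product structure mentioned above is what makes sum factorization pay off on hexahedral meshes: because a 3D basis function is a product of 1D basis functions, a basis evaluation at all quadrature points can be applied one tensor direction at a time instead of through one large dense matrix. The following is a minimal illustrative sketch of that idea in NumPy; it is not the paper's generated code, and the function names are my own.

```python
# Illustrative sketch of sum factorization on a tensor-product basis.
# A 3D evaluation matrix is the Kronecker product A1d (x) A1d (x) A1d,
# so applying it direction by direction is asymptotically cheaper than
# forming and applying the full n^3-by-n^3 matrix.
import numpy as np

def naive_evaluate(A1d, coeffs):
    """Evaluate via the full 3D matrix (Kronecker product of 1D matrices)."""
    n = A1d.shape[0]
    A3d = np.kron(np.kron(A1d, A1d), A1d)   # shape (n^3, n^3)
    return A3d @ coeffs.reshape(n**3)

def sum_factorized_evaluate(A1d, coeffs):
    """Apply the 1D matrix along each of the three tensor directions."""
    u = coeffs                               # shape (n, n, n)
    for axis in range(3):
        # Contract A1d's second index with the current tensor direction,
        # then move the new index back into place.
        u = np.tensordot(A1d, u, axes=([1], [axis]))
        u = np.moveaxis(u, 0, axis)
    return u.reshape(-1)
```

Both routines compute the same result; the sum-factorized variant replaces one O(n^6) matrix-vector product by three O(n^4) tensor contractions, which is the structure the vectorization strategies in the paper exploit.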
This paper presents the basic concepts and the module structure of the Distributed and Unified Numerics Environment and reflects on recent developments and general changes since the release of the first Dune version in 2007 and the main papers describing that state [1,2]. This discussion is accompanied by a description of various advanced features, such as coupling of domains and cut cells, grid modifications such as adaptation and moving domains, high-order discretizations and node-level performance, non-smooth multigrid methods, and multiscale methods. A brief discussion of current and future development directions of the framework concludes the paper.
LiDAR data have become indispensable for research in archaeology and a variety of other topographic applications. To derive products (e.g. digital terrain or feature models, individual trees, buildings), the 3D LiDAR points representing the desired objects of interest within the acquired and georeferenced point cloud need to be identified. This process is known as classification, where each individual point is assigned to an object class. In archaeological prospection, classification focuses on identifying the object class 'ground points'. These are used to interpolate digital terrain models exposing the microtopography of a terrain to be able to identify and map archaeological and palaeoenvironmental features. Setting up such classification workflows can be time-consuming and prone to information loss, especially in geographically heterogeneous landscapes. In such landscapes, one classification setting can lead to qualitatively very different results, depending on varying terrain parameters such as steepness or vegetation density. In this paper, we are focusing on a special workflow for optimal classification results in these heterogeneous environments, which integrates expert knowledge. We present a novel Python-based open-source software solution, which helps to optimize this process and creates a single digital terrain model by an adaptive classification based on spatial segments. The advantage of this approach for archaeology is to produce coherent digital terrain models even in geomorphologically heterogeneous areas or areas with patchy vegetation. The software is also useful to study the effects of different algorithm and parameter combinations on digital terrain modelling with a focus on a practical and time-saving implementation.
As the developed pipelines and all meta-information are made available with the resulting data set, classification is white-boxed and consequently scientifically comprehensible and repeatable. Together with the software's ability to simplify classification workflows significantly, it will be of interest for many applications beyond the archaeological examples shown.
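The core idea of segment-wise adaptive classification can be sketched in a few lines of Python. This is a hypothetical toy illustration, not the paper's software or its API: the ground filter here is a trivial elevation threshold standing in for a real classification algorithm, and all names are invented for the example.

```python
# Hypothetical sketch of segment-wise adaptive classification:
# each spatial segment is classified with its own expert-chosen
# parameter set, and the per-segment results are merged into one output.

def classify_ground(points, threshold):
    """Toy ground filter: keep points at or below an elevation threshold.
    A real workflow would use a proper ground-filtering algorithm here."""
    return [p for p in points if p[2] <= threshold]

def adaptive_classification(segments, parameters):
    """Apply a different parameter set per segment and merge the results.

    segments:   dict mapping segment id -> list of (x, y, z) points
    parameters: dict mapping segment id -> filter parameter for that segment
    """
    ground = []
    for seg_id, points in segments.items():
        ground.extend(classify_ground(points, parameters[seg_id]))
    return ground
```

The point of the per-segment parameter dictionary is exactly the adaptivity described above: a steep, densely vegetated segment can use a different setting from a flat, open one, while the merged result still forms a single coherent terrain model.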