The discontinuous Petrov-Galerkin (dPG) method is a minimum residual method with broken test functions for instant stability. The methodology is written in an abstract framework with product spaces. It is applied to the Poisson model problem, the Stokes equations, and linear elasticity with low-order discretizations. The computable residual leads to guaranteed error bounds and motivates adaptive mesh refinement.
Framework

The dPG paradigm suggests some spatial decomposition of the test functions in the framework of a minimal residual method with a partition T, the Hilbert space Y := ∏_{K∈T} Y(K), and the seminormed vector space X̂ := ∏_{K∈T} X(K). Suppose that the bounded bilinear form b : X̂ × Y → ℝ is nondegenerate for all elements of the closed normed ansatz space X ⊂ X̂ in the sense that
\[
0 < \beta := \inf_{x \in X,\ \|x\|_X = 1}\ \sup_{y \in Y,\ \|y\|_Y = 1} b(x, y).
\]
Given F ∈ Y* and subspaces X_h ⊆ X, Y_h ⊆ Y, the dPG method approximates the solution u ∈ X to the variational problem
\[
b(u, y) = F(y) \quad \text{for all } y \in Y
\]
by the discrete minimizer u_h ∈ X_h of the residual ‖F − b(x_h, •)‖_{Y_h^*} among all x_h ∈ X_h. If the discrete inf-sup condition holds or, equivalently [2], if there exists a bounded linear projector P : Y → Y onto Y_h with norm ‖P‖ such that the annulation property b(x_h, y − P y) = 0 holds for all x_h ∈ X_h and y ∈ Y, then the mixed problem (M_h) is well-posed and the best-approximation estimate [3]
\[
\|u - u_h\|_X \le \frac{\|b\|\,\|P\|}{\beta}\ \min_{x_h \in X_h} \|u - x_h\|_X
\]
holds. Furthermore, the annulation operator P leads to efficient and reliable a posteriori error control [4].
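For orientation, the discrete residual minimization is typically realized as a saddle-point problem; the following display is a sketch of such a mixed reformulation, written under the assumption that Y carries a scalar product (•, •)_Y inducing the norm ‖•‖_Y, and it is only meant to indicate the structure behind the label (M_h).
\[
(M_h)\qquad
\left\{
\begin{aligned}
(v_h, y_h)_Y + b(u_h, y_h) &= F(y_h) && \text{for all } y_h \in Y_h,\\
b(x_h, v_h) &= 0 && \text{for all } x_h \in X_h,
\end{aligned}
\right.
\]
for the pair (v_h, u_h) ∈ Y_h × X_h. The first component v_h represents the residual in Y_h, so the computable quantity ‖v_h‖_Y provides the residual term that enters the a posteriori error control mentioned above.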
Towards Adaptivity

The recent progress in the analysis of minimal residual methods with adaptive mesh refinement might lead to quasi-optimal convergence of adaptive dPG methods as well. The related least-squares FEMs seek discrete minimizers of a least-squares functional LS(f; •) whose element-wise evaluation yields a reliable and efficient built-in a posteriori error estimator. The plain convergence of this natural adaptive least-squares FEM is proved in [5]. The standard techniques for quasi-optimal convergence proofs [6-8] cannot be applied in this context because the minimal residual functional lacks a reduction property and an additional data approximation term needs to be reduced. As a remedy, a separate marking algorithm can guarantee the reduction of an alternative a posteriori error estimator η(T, •) and of the data approximation error ‖f − Π_0 f‖_{L^2(Ω)} with the piecewise constant L^2 best-approximation Π_0 f of f ∈ L^2(Ω), and it enables the proof of quasi-optimal convergence [9]. This result for the Poisson model problem is generalized to the Stokes equations [10] and to linear elasticity [11]. All these proofs are based on the framework of the axioms of adaptivity [8], which is generalized to separate marking algorithms in [12].
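To make the separate marking strategy concrete, the following minimal Python sketch implements the case distinction between Dörfler marking for η and a reduction step for the data approximation error. The function names, the bulk parameter theta, the splitting parameter kappa, and the greedy stand-in for the data reduction step are illustrative assumptions and not the precise algorithms of [9] or [12].

def doerfler_mark(indicators, theta):
    # Greedy Doerfler marking: collect elements with the largest indicators
    # until sum_{K in M} indicators[K]^2 >= theta * sum_K indicators[K]^2.
    total = sum(v ** 2 for v in indicators.values())
    marked, acc = set(), 0.0
    for K, v in sorted(indicators.items(), key=lambda kv: -kv[1]):
        marked.add(K)
        acc += v ** 2
        if acc >= theta * total:
            break
    return marked

def separate_marking(eta, mu, kappa=0.5, theta=0.3, rho=0.5):
    # eta: elementwise estimator values eta(T, K);
    # mu: elementwise data approximation errors ||f - Pi_0 f||_{L2(K)}.
    eta_sq = sum(v ** 2 for v in eta.values())
    mu_sq = sum(v ** 2 for v in mu.values())
    if mu_sq <= kappa * eta_sq:
        # Case A: the data approximation error is dominated, mark via eta.
        return doerfler_mark(eta, theta)
    # Case B: reduce the data approximation error first; here a greedy
    # marking on mu replaces an optimal data approximation algorithm.
    return doerfler_mark(mu, 1.0 - rho)

Applied to dictionaries of elementwise values, e.g. eta = {K: eta_K} and mu = {K: mu_K}, the routine returns the subset of elements to be refined in the next step of the adaptive loop.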