The discontinuous Petrov-Galerkin (dPG) method is a minimal residual method with broken test spaces and is introduced for a nonlinear model problem in this paper. Its lowest-order version applies to a nonlinear uniformly convex model example and is equivalently characterized as a mixed formulation, as a reduced formulation, and as a weighted nonlinear least-squares method. Quasi-optimal a priori estimates and reliable and efficient a posteriori estimates are obtained in the abstract nonlinear dPG framework for the approximation of a regular solution. The variational model example allows for built-in guaranteed error control despite inexact solve. The subtle uniqueness of discrete minimizers is monitored in numerical examples.

Mathematics Subject Classification (2000): 47H05, 49M15, 65N12, 65N15, 65N30

Acknowledgment: priority programme 'Reliable simulation techniques in solid mechanics. Development of non-standard discretization methods, mechanical and mathematical analysis', project 'Foundation and application of generalized mixed FEM towards nonlinear problems in solid mechanics' (CA
The pseudostress approximation of the Stokes equations rewrites the stationary Stokes equations with pure (but possibly inhomogeneous) Dirichlet boundary conditions as another (equivalent) mixed scheme based on a stress variable in H(div) and the velocity in L^2. Any standard mixed finite element function space can be utilized for this mixed formulation, e.g., the Raviart-Thomas discretization, which is related to the Crouzeix-Raviart nonconforming finite element scheme in the lowest-order case. The effective and guaranteed a posteriori error control for this nonconforming velocity-oriented discretization can be generalized to the error control of some piecewise quadratic velocity approximation that is related to the discrete pseudostress. The analysis allows for local inf-sup constants which can be chosen in a global partition to improve the estimation. Numerical examples provide strong evidence for effective and guaranteed error control with very small overestimation factors even for domains with large anisotropy.
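The pseudostress reformulation described above may be sketched as follows; this is a schematic statement under the standard convention σ := ν∇u − pI (so that, with div u = 0, the deviatoric part satisfies dev σ = ν∇u), not a verbatim reproduction of the cited formulation.

```latex
% Pseudostress reformulation of the stationary Stokes equations
% (sketch; standard convention sigma := nu * grad(u) - p * I):
\begin{align*}
  &\text{seek } \sigma \in H(\operatorname{div},\Omega;\mathbb{R}^{n\times n}),
   \quad u \in L^2(\Omega;\mathbb{R}^n) \text{ such that}\\
  &\nu^{-1}\operatorname{dev}\sigma = \nabla u
   \quad\text{and}\quad
   \operatorname{div}\sigma + f = 0 \quad\text{in }\Omega,
   \qquad u = u_D \quad\text{on } \partial\Omega.
\end{align*}
```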
The discontinuous Petrov-Galerkin (dPG) method is a minimum residual method with broken test functions for instant stability. The methodology is written in an abstract framework with product spaces. It is applied to the Poisson model problem, the Stokes equations, and linear elasticity with low-order discretizations. The computable residual leads to guaranteed error bounds and motivates adaptive refinements.
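In formulas, the minimum-residual principle may be sketched as follows (with b the trial-test bilinear form, F the load functional, and X_h, Y_h the discrete trial and broken test spaces of the framework below):

```latex
% Discrete minimum-residual principle and its built-in estimator (sketch):
\begin{align*}
  u_h &:= \operatorname*{arg\,min}_{x_h \in X_h}
          \|F - b(x_h,\cdot\,)\|_{Y_h^*},\\
  \eta &:= \|F - b(u_h,\cdot\,)\|_{Y_h^*}
  \quad\text{(computable residual, drives the guaranteed error bounds)}.
\end{align*}
```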
Framework

The dPG paradigm suggests a spatial decomposition of the test functions in the framework of a minimal residual method with a partition $\mathcal{T}$, the Hilbert space $Y := \prod_{K\in\mathcal{T}} Y(K)$, and the seminormed vector space $\widetilde{X} := \prod_{K\in\mathcal{T}} X(K)$. Suppose that the bounded bilinear form $b\colon \widetilde{X}\times Y\to\mathbb{R}$ is nondegenerate on the closed normed ansatz space $X\subset\widetilde{X}$ in the sense that

$$0 < \beta := \inf_{x\in X\setminus\{0\}}\,\sup_{y\in Y\setminus\{0\}} \frac{b(x,y)}{\|x\|_X\,\|y\|_Y} \quad\text{and}\quad \{\,y\in Y : b(x,y)=0 \text{ for all } x\in X\,\} = \{0\}.$$

Given $F\in Y^*$ and subspaces $X_h\subseteq X$, $Y_h\subseteq Y$, the dPG method approximates the solution $u\in X$ to the variational problem

$$(\mathrm{M})\qquad b(u,y) = F(y)\quad\text{for all } y\in Y$$

by the solution $(\varepsilon_h, u_h)\in Y_h\times X_h$ of the mixed problem

$$(\mathrm{M}_h)\qquad \langle\varepsilon_h, y_h\rangle_Y + b(u_h, y_h) = F(y_h) \quad\text{and}\quad b(x_h, \varepsilon_h) = 0 \qquad\text{for all } y_h\in Y_h \text{ and } x_h\in X_h.$$

If the discrete inf-sup condition holds or, equivalently [2], if there exists a bounded linear projector $P\colon Y\to Y$ onto $Y_h$ with norm $\|P\|$ such that the annulation property $b(x_h, y - Py) = 0$ holds for all $x_h\in X_h$ and $y\in Y$, then the mixed problem $(\mathrm{M}_h)$ is well-posed and the best-approximation estimate holds [3]

$$\|u - u_h\|_X \le \frac{\|b\|\,\|P\|}{\beta}\, \min_{x_h\in X_h}\|u - x_h\|_X.$$

Furthermore, the annulation operator $P$ leads to the efficient and reliable a posteriori error control [4]

$$\beta\,\|u-u_h\|_X \le \|P\|\,\|\varepsilon_h\|_Y + \|F\circ(\mathrm{id}-P)\|_{Y^*} \quad\text{and}\quad \|\varepsilon_h\|_Y \le \|b\|\,\|u-u_h\|_X.$$
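After discretization, the minimum-residual problem reduces to a small linear-algebra computation. The following is a minimal sketch (names and matrices are illustrative assumptions, not from the cited works): with a stiffness-type matrix B representing b on X_h × Y_h and the SPD Gram matrix G of the discrete test-space inner product, the minimizer of ||F − B x|| in the G⁻¹-induced dual norm solves the normal equations Bᵀ G⁻¹ B x = Bᵀ G⁻¹ F, and the residual norm is the built-in error estimator.

```python
import numpy as np

def dpg_solve(B, G, F):
    """Schematic dPG-type minimum-residual solve (sketch, not a full FEM).

    B : (m, n) matrix of b(phi_j, psi_i) for trial/test basis functions
    G : (m, m) SPD Gram matrix of the broken test-space inner product
    F : (m,)   load vector F(psi_i)

    Returns the minimizer x of ||F - B x||_{G^{-1}} and the built-in
    a posteriori error estimator eta = ||F - B x||_{G^{-1}}.
    """
    Ginv_B = np.linalg.solve(G, B)            # G^{-1} B
    Ginv_F = np.linalg.solve(G, F)            # G^{-1} F
    # normal equations of the minimum-residual problem
    x = np.linalg.solve(B.T @ Ginv_B, B.T @ Ginv_F)
    r = F - B @ x                             # discrete residual
    eta = float(np.sqrt(r @ np.linalg.solve(G, r)))
    return x, eta
```

Equivalently, x and the Riesz representation of the residual (the ε_h of the mixed problem) could be computed from the saddle-point system; the normal-equations form above is the most compact sketch.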
Towards Adaptivity

The recent progress in the analysis of minimal residual methods with adaptive mesh refinement might lead to the quasi-optimal convergence of adaptive dPG methods as well. The related least-squares FEMs seek discrete minimizers of a least-squares functional LS(f; •) whose element-wise evaluation yields a reliable and efficient built-in a posteriori error estimator. The plain convergence of this natural adaptive least-squares FEM is proven in [5]. The standard techniques for quasi-optimal convergence proofs [6-8] do not apply in this context because the minimal residual functional lacks the reduction property and an additional data approximation term needs to be reduced. As a remedy, a separate marking algorithm can guarantee the reduction of an alternative a posteriori error estimator $\eta(\mathcal{T}, \bullet)$ and of the data approximation error $\|f - \Pi_0 f\|_{L^2(\Omega)}$ with the piecewise constant $L^2$ best approximation $\Pi_0 f$ of $f \in L^2(\Omega)$, and enables the proof of quasi-optimal convergence [9]. This result for the Poisson model problem has been generalized to the Stokes equations [10] and to linear elasticity [11]. All these proofs are based on the framework of the axioms of adaptivity [8], which is generalized to separate marking algorithms in [12].
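The separate marking idea may be sketched as follows; the function names, the parameters kappa and theta, and the thresholding in case B are illustrative assumptions and not the algorithm of the cited works. If the data approximation error mu is small relative to the estimator eta, apply Doerfler bulk marking on eta; otherwise, refine to reduce the data approximation error.

```python
def separate_marking(eta2, mu2, kappa=0.5, theta=0.3):
    """Sketch of a separate marking strategy.

    eta2 : dict element -> squared estimator contribution eta(T, K)^2
    mu2  : dict element -> squared data approximation error on K
    kappa: switch between the two marking cases
    theta: Doerfler bulk parameter

    Returns the list of marked elements.
    """
    if sum(mu2.values()) <= kappa * sum(eta2.values()):
        # Case A: Doerfler marking on eta, a (nearly) minimal set
        # whose contributions reach the bulk theta * sum(eta2)
        marked, acc = [], 0.0
        for K in sorted(eta2, key=eta2.get, reverse=True):
            marked.append(K)
            acc += eta2[K]
            if acc >= theta * sum(eta2.values()):
                break
        return marked
    # Case B: reduce the data approximation error, here by marking
    # elements with above-average contributions (a simple placeholder
    # for an optimal data approximation algorithm)
    avg = sum(mu2.values()) / len(mu2)
    return [K for K in mu2 if mu2[K] >= avg]
```

In an adaptive loop, the marked elements would then be refined (e.g., by newest-vertex bisection) and the two error quantities recomputed.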