2022
DOI: 10.1007/s11565-022-00441-6

Solving PDEs by variational physics-informed neural networks: an a posteriori error analysis

Abstract: We consider the discretization of elliptic boundary-value problems by variational physics-informed neural networks (VPINNs), in which test functions are continuous, piecewise linear functions on a triangulation of the domain. We define an a posteriori error estimator, made of a residual-type term, a loss-function term, and data oscillation terms. We prove that the estimator is both reliable and efficient in controlling the energy norm of the error between the exact and VPINN solutions. Numerical results are in…
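The abstract describes an estimator built from three ingredients: a residual-type term, a loss-functional term, and data-oscillation terms. The following is a minimal numerical sketch of how such a bound can be assembled, on a hypothetical 1D Poisson problem with piecewise-linear (hat-function) test spaces; the problem, the stand-in solution, and the exact weighting of the terms are illustrative assumptions, not the paper's actual definitions.

```python
import numpy as np

# Illustrative 1D Poisson problem: -u'' = f on (0, 1), u(0) = u(1) = 0,
# with f chosen so that the exact solution is sin(pi*x).
f = lambda x: np.pi**2 * np.sin(np.pi * x)

n = 32
x = np.linspace(0.0, 1.0, n + 1)      # uniform "triangulation" of (0, 1)
h = x[1] - x[0]
mid = 0.5 * (x[:-1] + x[1:])          # element midpoints (quadrature nodes)

# Stand-in for the trained VPINN solution: exact solution plus a small
# smooth perturbation, so that error, residuals, and loss are nonzero.
dw  = lambda x: np.pi * np.cos(np.pi * x) + 3e-3 * np.pi * np.cos(3 * np.pi * x)
d2w = lambda x: -np.pi**2 * np.sin(np.pi * x) - 9e-3 * np.pi**2 * np.sin(3 * np.pi * x)

# 1) Residual-type term: sum over elements of h^2 * ||f + w''||^2,
#    with the elementwise L2 norm approximated by midpoint quadrature.
eta_res_sq = np.sum(h**2 * h * (f(mid) + d2w(mid))**2)

# 2) Loss-functional term: squared variational residuals against the
#    interior hat functions v_i:  r_i = (f, v_i)_0 - (w', v_i')_0.
r = np.array([
    0.5 * h * (f(mid[i - 1]) + f(mid[i])) - (dw(mid[i - 1]) - dw(mid[i]))
    for i in range(1, n)
])
loss = np.sum(r**2)

# 3) Data oscillation: h^2 * ||f - fbar||^2 with elementwise means fbar,
#    estimated by two-point Gauss quadrature per element.
g = h / (2.0 * np.sqrt(3.0))
fvals = np.stack([f(mid - g), f(mid + g)])
fbar = fvals.mean(axis=0)
osc_sq = np.sum(h**2 * 0.5 * h * ((fvals - fbar)**2).sum(axis=0))

# Combine the three ingredients into a single estimator value.
eta = np.sqrt(eta_res_sq + loss + osc_sq)
print(eta)
```

The way the three terms are summed here (all in squares, then one square root) is one plausible convention; the paper's actual estimator may weight them differently.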

Cited by 8 publications (5 citation statements)
References 22 publications
“…This implies that there exist other sources of error that dominate and that a very small loss function does not ensure a very accurate solution; this phenomenon is also observed in Fig. 3 of [45] and is discussed in greater detail therein.…”
Section: Numerical Results
confidence: 92%
“…A typical definition for the loss functional in (12) found in the literature is the following (see, e.g., [31,33,5,6]):…”
Section: Variational Physics-informed Neural Network
confidence: 99%
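The "typical definition" of the loss functional referred to in this statement is, roughly, a sum of squared variational residuals of the network against the test basis. A hedged sketch under assumed simplifications — a hypothetical 1D Poisson problem with hat-function test spaces and a one-parameter ansatz w_θ(x) = θ·sin(πx) standing in for the network; minimizing the sketched loss over θ recovers θ ≈ 1, the coefficient of the exact solution:

```python
import numpy as np

# Hypothetical setup: -u'' = f on (0, 1) with f = pi^2 sin(pi x), so the
# exact solution is u(x) = sin(pi x).
f = lambda x: np.pi**2 * np.sin(np.pi * x)

n = 32
x = np.linspace(0.0, 1.0, n + 1)      # uniform mesh of (0, 1)
h = x[1] - x[0]
mid = 0.5 * (x[:-1] + x[1:])          # element midpoints (quadrature nodes)

def vpinn_loss(theta):
    """Sum of squared variational residuals r_i = (f, v_i)_0 - (w', v_i')_0
    against the interior hat functions v_i, with midpoint quadrature and
    the one-parameter ansatz w_theta(x) = theta * sin(pi x)."""
    dw = lambda x: theta * np.pi * np.cos(np.pi * x)   # w_theta'
    r = np.array([
        0.5 * h * (f(mid[i - 1]) + f(mid[i])) - (dw(mid[i - 1]) - dw(mid[i]))
        for i in range(1, n)
    ])
    return np.sum(r**2)

# A crude parameter scan stands in for gradient-based training.
thetas = np.linspace(0.5, 1.5, 1001)
best = thetas[np.argmin([vpinn_loss(t) for t in thetas])]
print(best)  # close to 1.0
```

With a real network the minimization runs over all weights by gradient descent, and the residuals may additionally be normalized by test-function norms; both are omitted here for brevity.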
“…where (·, ·)₀ denotes the L²-inner product, and ⟨·, ·⟩ denotes the duality map between V′ and V, coinciding with (f, v)₀ when f belongs to L²(Ω).…”
Section: Diffusion-advection Model Problem
confidence: 99%
“…In particular, when α > L₁, where L₁ is the Lipschitz constant of j′₁(·), then γ = α − L₁. There are some works on a posteriori error analysis for neural network approximations; see, e.g., [63].…”
mentioning
confidence: 99%