2022
DOI: 10.48550/arxiv.2205.00593
Preprint

PFNN-2: A Domain Decomposed Penalty-Free Neural Network Method for Solving Partial Differential Equations

Abstract: A new penalty-free neural network method, PFNN-2, is presented for solving partial differential equations, as a subsequent improvement of our previously proposed PFNN method [1]. PFNN-2 inherits all the advantages of PFNN in handling the smoothness constraints and essential boundary conditions of self-adjoint problems with complex geometries, and extends its applicability to a broader range of non-self-adjoint, time-dependent differential equations. In addition, PFNN-2 introduces an overlapping domain decomposi…
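The core "penalty-free" idea mentioned in the abstract can be sketched in a few lines. The construction below is an assumed, minimal illustration of the general hard-constraint technique (not the paper's exact formulation): the trial solution is built so that the essential boundary condition holds by construction, so no boundary penalty term is needed in the loss. The functions `g`, `l`, and `network` are hypothetical stand-ins.

```python
import numpy as np

# Minimal sketch (assumed, not the paper's exact construction): enforce the
# essential (Dirichlet) boundary condition by construction via
#     u_hat(x) = g(x) + l(x) * N(x),
# where g matches the boundary data, l vanishes on the Dirichlet boundary,
# and N is the network output. Toy 1D setting on [0, 1] with u(0) = 1, u(1) = 3.

def g(x):
    # smooth extension of the boundary data: g(0) = 1, g(1) = 3
    return 1.0 + 2.0 * x

def l(x):
    # length-factor function: zero exactly on the boundary {0, 1}
    return x * (1.0 - x)

def network(x):
    # hypothetical placeholder for a trained neural network's output
    return np.sin(3.0 * x)

def u_hat(x):
    # trial solution: satisfies the Dirichlet data for ANY network output
    return g(x) + l(x) * network(x)

print(u_hat(0.0), u_hat(1.0))  # boundary values are exact: 1.0 3.0
```

Because the boundary condition is exact for any network, training can focus entirely on the interior residual, which is the property that avoids tuning a boundary penalty weight.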

Cited by 4 publications (11 citation statements) | References 60 publications
“…Apart from these two widely-used methods, the weak adversarial network [48] is based on the weak form of (2.1), while another series of neural network methods is designed to use separate networks to fit the interior and boundary equations respectively [34,3]. We refer the readers to [23,42,17] for a more detailed review of the existing deep learning solvers. Notably, with the interface conditions being included as soft constraints in the training loss function and the set of interface points being small compared to that of the interior domains, the trained network is highly prone to overfitting at the interface [9,1], which is a key threat to the integration of direct flux exchange schemes and deep learning techniques but is rarely studied or addressed in the literature.…”
Section: Algorithm 2.1 Domain Decomposition Methods Based on Solution ...
confidence: 99%
“…On the other hand, as the classical domain decomposition methods [45] can be formulated at the continuous or the weak level, various works have been devoted to employing learning approaches for solving the decomposed subproblems, thereby benefiting from the mesh-free nature of deep learning solvers. Under such circumstances, machine learning analogues of the overlapping domain decomposition methods have emerged recently and have successfully handled many problems [32,29,35,42]; however, the more general non-overlapping counterpart has not been thoroughly studied yet. A major difficulty is that the network solutions of local problems are prone to overfitting at and near the interface [9,1], since the interface conditions are enforced as soft penalty functions during training and the size of the training data on the interface is smaller than that of the interior domains, which eventually propagates the errors to neighboring subdomains and hampers the convergence of the outer iteration.…”
confidence: 99%
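The imbalance this passage describes can be made concrete with a toy computation. The setup below is hypothetical (not the cited papers' models): two subdomain solutions that differ by a constant shift both satisfy a toy PDE exactly, so the interior residual term is blind to their disagreement and only the small interface-mismatch term penalizes it.

```python
import numpy as np

# Toy illustration (hypothetical setup) of why soft interface constraints are
# fragile: the loss sums a PDE residual over many interior points and an
# interface mismatch over few interface points. Both shifted solutions solve
# u'' = -pi^2 sin(pi x) exactly, so only the interface term sees the shift.

rng = np.random.default_rng(0)

def local_solution(x, shift):
    # stand-in for a subdomain network's prediction
    return np.sin(np.pi * x) + shift

def d2_local(x):
    # analytic second derivative of local_solution (the shift drops out)
    return -np.pi**2 * np.sin(np.pi * x)

def forcing(x):
    return np.pi**2 * np.sin(np.pi * x)

n_interior, n_interface = 1000, 16        # typical sample-size disparity
x_int = rng.uniform(0.0, 1.0, n_interior)
x_ifc = np.full(n_interface, 0.5)         # shared interface at x = 0.5

# interior residual of u'' + f: identically zero for ANY constant shift
residual = np.mean((d2_local(x_int) + forcing(x_int))**2)

# interface mismatch between two subdomain solutions shifted by 0.1
mismatch = np.mean((local_solution(x_ifc, 0.0) - local_solution(x_ifc, 0.1))**2)

lam = 1.0  # penalty weight; balancing it is the open problem the text notes
loss = residual + lam * mismatch
print(residual, round(float(mismatch), 4))
```

Since the interior term contributes nothing here, the entire burden of coupling the subdomains falls on the few interface samples, which is exactly the overfitting risk the quoted passages raise.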
“…To meet the homogeneous boundary condition (i.e., let φ_{i,j}^k(x) = 0 in (10)), a loss function with the norm of the auxiliary solution restricted to the interfaces is introduced in [27]. As finding the optimal weights to balance the terms for the boundary conditions and the governing PDEs remains a challenging open problem, we apply the boundary treatment method proposed in [50,9,33] as follows.…”
Section: Interface Treatments
confidence: 99%
“…In [31], Dong and Li introduce local extreme learning machines with local field solutions represented by feed-forward neural networks. Besides, Moseley et al [32] propose finite basis physics-informed neural networks with separate input normalization over subdomains, and Sheng and Yang [33] develop a penalty-free neural network method based on domain decomposition.…”
Section: Introduction
confidence: 99%