2022
DOI: 10.1038/s41598-022-11058-2
Physics-informed attention-based neural network for hyperbolic partial differential equations: application to the Buckley–Leverett problem

Abstract: Physics-informed neural networks (PINNs) have enabled significant improvements in modelling physical processes described by partial differential equations (PDEs) and are in principle capable of modelling a large variety of differential equations. PINNs are based on simple architectures, and learn the behavior of complex physical systems by optimizing the network parameters to minimize the residual of the underlying PDE. Current network architectures share some of the limitations of classical numerical discretiz…
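The residual-minimization idea summarized in the abstract is compact enough to sketch in code. Below is a minimal, hypothetical JAX sketch of a plain PINN loss for the Buckley–Leverett equation S_t + f(S)_x = 0 treated in the paper. The network shape, the unit mobility ratio in the fractional-flow function f(S) = S^2 / (S^2 + (1 - S)^2), and the point sampling are illustrative assumptions; this is a generic PINN, not the paper's attention-based architecture.

```python
# Minimal PINN sketch (assumptions: MLP sizes, unit mobility ratio,
# uniform collocation sampling). Not the paper's attention-based model.
import jax
import jax.numpy as jnp

def init_params(key, sizes=(2, 20, 20, 1)):
    """He-style init for a small MLP mapping (t, x) to saturation S."""
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (din, dout)) * jnp.sqrt(2.0 / din)
        params.append((w, jnp.zeros(dout)))
    return params

def net(params, t, x):
    h = jnp.array([t, x])
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return jax.nn.sigmoid((h @ w + b)[0])  # keep saturation in [0, 1]

def residual(params, t, x):
    """Strong-form PDE residual r = S_t + f'(S) * S_x."""
    s = net(params, t, x)
    s_t = jax.grad(net, argnums=1)(params, t, x)
    s_x = jax.grad(net, argnums=2)(params, t, x)
    flux = lambda s: s**2 / (s**2 + (1.0 - s) ** 2)  # fractional flow, M = 1
    return s_t + jax.grad(flux)(s) * s_x

def loss(params, t_col, x_col, x_ic, s_ic):
    """Mean squared PDE residual plus an initial-condition penalty."""
    r = jax.vmap(lambda t, x: residual(params, t, x))(t_col, x_col)
    s0 = jax.vmap(lambda x: net(params, 0.0, x))(x_ic)
    return jnp.mean(r**2) + jnp.mean((s0 - s_ic) ** 2)
```

Training would then minimize `loss` over the parameters with any gradient-based optimizer; the abstract's point is that this residual objective replaces a classical discretization scheme.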

Cited by 36 publications (14 citation statements: 0 supporting, 14 mentioning, 0 contrasting)
References 51 publications
“…Possible extensions of this work are related to the investigation of more advanced neural network architectures to improve the method's accuracy and efficiency [32,33], or to more complex problems. Indeed, neural networks are known to be able to manage very high-dimensional problems, overcoming the so-called curse of dimensionality; therefore we expect them to be able to efficiently solve parametric PDEs with multiple parameters [33] or high-dimensional PDEs [4,34].…”
Section: Discussion (mentioning)
Confidence: 99%
“…Increased observations or collocation points along the shock trajectories in the training of the neural network form another approach [78]. However, one challenge in this approach would be identifying the shock location.…”
Section: Discussion on Empirical Results (mentioning)
Confidence: 99%
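The challenge this excerpt raises, placing collocation points along a shock whose location is unknown, is often addressed by residual-based resampling: the PDE residual of a partially trained network tends to peak near the front, so scoring a random candidate pool by |residual| concentrates points there automatically. The sketch below illustrates that generic idea (it is not a method from the cited works); it reuses `residual` from the sketch above, and the pool size and keep ratio are arbitrary choices.

```python
# Hedged sketch of residual-based adaptive collocation: sample a large
# candidate pool, score each point by |PDE residual|, keep the top few.
# Assumes `residual` from the previous sketch; sizes are arbitrary.
import jax
import jax.numpy as jnp

def adaptive_collocation(params, key, n_pool=4096, n_keep=512,
                         t_max=1.0, x_max=1.0):
    kt, kx = jax.random.split(key)
    t_pool = jax.random.uniform(kt, (n_pool,), maxval=t_max)
    x_pool = jax.random.uniform(kx, (n_pool,), maxval=x_max)
    r = jax.vmap(lambda t, x: residual(params, t, x))(t_pool, x_pool)
    idx = jnp.argsort(-jnp.abs(r))[:n_keep]  # largest-residual points first
    return t_pool[idx], x_pool[idx]
```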
“…As of 2022, the Generative Pre-trained Transformer 3 (GPT-3) model [226], which is based on the Transformer architecture, is among the most powerful language models. GPT-3 is an autoregressive model that produces text from a given (initial) text prompt, and it can handle different tasks such as translation, question answering, cloze tests, and word unscrambling. The impressive capabilities of GPT-3 are enabled by its huge capacity of 175 billion parameters, 10 times more than preceding language models.…”
(mentioning)
Confidence: 99%
“…Attention mechanism, kernel machines, physics-informed neural networks (PINNs). In [227], a new attention architecture (mechanism) was proposed using the kernel machines discussed in Section 8, whereas in [228] gated recurrent units (GRU, Section 7.3) and the attention mechanism (Section 7.4.1) were used in conjunction with Physics-Informed Neural Networks (PINNs, Section 9.5) to solve hyperbolic problems with shock waves; see Remark 9.5 and Remark 11.11.…”
(mentioning)
Confidence: 99%
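For readers unfamiliar with the attention mechanism these citing works combine with PINNs, the common building block is scaled dot-product attention: weights = softmax(Q Kᵀ / √d), output = weights · V. The few lines below show that textbook formulation only; the kernel-machine variant of [227] and the GRU-based PINN of [228] build on it but are not reproduced here.

```python
# Textbook scaled dot-product attention (not the cited papers' exact
# kernel-machine or GRU-based formulations).
import jax.numpy as jnp
from jax.nn import softmax

def scaled_dot_product_attention(q, k, v):
    """q: (n, d); k, v: (m, d). Returns (n, d) context vectors."""
    scores = q @ k.T / jnp.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v
```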