2023
DOI: 10.48550/arxiv.2302.14376
Preprint

GNOT: A General Neural Operator Transformer for Operator Learning

Cited by 4 publications (4 citation statements, 2023–2024) · References 0 publications
“…Simultaneously, researchers are exploring the combination of FNO with attention mechanisms [3] for irregular meshes. This includes the Operator Transformer (OFormer) [39], the Mesh-Independent Neural Operator (MINO) [40], and the General Neural Operator Transformer (GNOT) [41]. In addition, the Clifford neural layers [42] use Clifford algebra to compute multivectors, which provides Clifford-FNO implementations as an extension of FNO.…”
Section: Related Work (mentioning)
confidence: 99%
“…(Cao 2021) removes the softmax normalization and introduces Galerkin-type attention to achieve linear scaling. Although efficient transformers (Liu, Xu, and Zhang 2022; Li, Meidani, and Farimani 2022; Hao et al. 2023) have been proposed that preserve permutation symmetries (Lee 2022), the demand for flexibility and scalability is still not met for practical use in the real world. Inducing-Point Methods.…”
Section: Related Work (mentioning)
confidence: 99%
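The Galerkin-type attention mentioned in the excerpt above replaces the softmax normalization with normalization of the keys and values, so the key-value product can be contracted before the queries are applied. Below is a minimal PyTorch-style sketch of that idea; the head count, layer sizes, and placement of the layer norms are illustrative assumptions, not the exact configuration of Cao (2021).

```python
# Sketch: softmax-free, Galerkin-type attention with linear cost in the
# number of mesh points (layer norms on keys/values replace softmax).
import torch
import torch.nn as nn


class GalerkinAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        assert dim % heads == 0
        self.heads, self.d = heads, dim // heads
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.norm_k = nn.LayerNorm(self.d)   # per-head normalization (assumed)
        self.norm_v = nn.LayerNorm(self.d)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_points, dim) -- features sampled on an (irregular) mesh
        b, n, _ = x.shape

        def split(t):  # (b, n, dim) -> (b, heads, n, d)
            return t.view(b, n, self.heads, self.d).transpose(1, 2)

        q, k, v = split(self.to_q(x)), split(self.to_k(x)), split(self.to_v(x))
        k, v = self.norm_k(k), self.norm_v(v)
        # Contract over the point dimension first: a (d x d) context per head.
        context = torch.einsum("bhnd,bhne->bhde", k, v) / n
        out = torch.einsum("bhnd,bhde->bhne", q, context)
        return self.proj(out.transpose(1, 2).reshape(b, n, -1))
```

Because the point dimension is contracted before the queries touch it, the cost grows linearly with the number of mesh points, rather than quadratically as in standard softmax attention.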
“…Its success has been extended to other areas, including computer vision tasks [21] and biology [22]. It has also inspired a wide array of scientific applications, in particular PDE modeling [23][24][25][26][27][28][29][30]. Kovachki et al. [16] propose a kernel integral interpretation of attention.…”
Section: Introduction (mentioning)
confidence: 99%
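The kernel integral reading of attention noted in [16] can be stated compactly; the notation below is illustrative rather than quoted from that work.

```latex
% Attention as a Monte Carlo discretization of a kernel integral operator
% over the domain D (illustrative notation).
(\mathcal{K}v)(x)
  = \int_{D} \kappa(x, y)\, v(y)\, \mathrm{d}y
  \;\approx\; \frac{1}{N} \sum_{j=1}^{N} \kappa(x, y_j)\, v(y_j),
\qquad
\kappa(x, y) \propto \exp\!\big(q(x)^{\top} k(y)\big).
```

With the exponential kernel and normalization over the sample points, the discretized sum reduces to standard softmax attention evaluated at the query location x.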
“…Learning operators with coupled attention [32] uses attention weights to learn correlations in the output domain and enables sample-efficient training of the model. General neural operator transformer for operator learning [25] proposes a heterogeneous attention architecture that stacks multiple cross-attention layers and uses a geometric gating mechanism to adaptively aggregate features from query points. Additionally, encoding physics-informed inductive biases has also been of great interest because it allows incorporation of additional system knowledge, making the learning task easier.…”
Section: Introduction (mentioning)
confidence: 99%
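The heterogeneous attention described in the excerpt above combines cross-attention from query points to encoded inputs with a gate computed from the query-point geometry. The sketch below is one plausible PyTorch-style reading of that idea; the number of experts, the expert MLPs, and the use of standard multi-head cross-attention are assumptions, not GNOT's exact architecture.

```python
# Sketch: cross-attention block with a geometry-conditioned gate that mixes
# several expert MLPs per query point.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedCrossAttentionBlock(nn.Module):
    def __init__(self, dim: int, coord_dim: int = 2, n_experts: int = 3, heads: int = 4):
        super().__init__()
        # Query points attend to encoded input-function features (keys/values).
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Expert feed-forward networks, mixed per query point.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 2 * dim), nn.GELU(), nn.Linear(2 * dim, dim))
            for _ in range(n_experts)
        )
        # Geometric gate: mixture weights computed from query-point coordinates.
        self.gate = nn.Linear(coord_dim, n_experts)

    def forward(self, query_feats, coords, input_feats):
        # query_feats: (b, n_q, dim)       features at the query points
        # coords:      (b, n_q, coord_dim) spatial coordinates of those points
        # input_feats: (b, n_in, dim)      encoded input-function features
        attended, _ = self.attn(query_feats, input_feats, input_feats)
        h = query_feats + attended
        gates = F.softmax(self.gate(coords), dim=-1)                 # (b, n_q, n_experts)
        expert_out = torch.stack([e(h) for e in self.experts], -1)   # (b, n_q, dim, n_experts)
        return h + (expert_out * gates.unsqueeze(2)).sum(dim=-1)
```

Conditioning the gate on coordinates is what lets the block weight its experts differently in different regions of the domain, matching the "adaptively aggregate features from query points" behavior described in the excerpt.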