2021
DOI: 10.48550/arxiv.2108.08481
Preprint

Neural Operator: Learning Maps Between Function Spaces

Abstract: The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces or finite sets. We propose a generalization of neural networks tailored to learn operators mapping between infinite-dimensional function spaces. We formulate the approximation of operators by composition of a class of linear integral operators and nonlinear activation functions, so that the composed operator can approximate complex nonlinear operators. We prove a universal approximation…
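
For context, the layer structure the abstract alludes to can be written out explicitly. The following is a sketch based on the preprint's standard formulation (the symbols $v_t$, $W$, $\kappa$, $b$, and $\sigma$ follow the paper's notation; in the general form the kernel may additionally depend on the input function):

```latex
% One neural operator layer: a pointwise linear term plus a kernel
% integral over the domain D, followed by a pointwise nonlinearity.
v_{t+1}(x) = \sigma\Big( W v_t(x) + \int_D \kappa(x, y)\, v_t(y)\, \mathrm{d}y + b(x) \Big),
\qquad x \in D.
```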

Cited by 94 publications (144 citation statements)
References 44 publications

Citation statements (ordered by relevance):
“…al. [43], and thus also extends to Neural Operators via the correspondence between Neural Operators and (finite-dimensional) convolutional neural networks [46].…”
Section: Neural Operators (citation type: mentioning)
Confidence: 96%
“…al. [46], where it is shown that a particular choice of neural operator architecture produces a DeepONet with an arbitrary trunk network and a branch network of a certain form. In particular, a Neural Operator layer has the form,…”
Section: Neural Operators (citation type: mentioning)
Confidence: 99%
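
To make the layer form referenced in these citation statements concrete, here is a minimal sketch of one neural-operator layer evaluated on a grid, i.e. a quadrature approximation of the kernel integral. The function names (`neural_operator_layer`, `kappa`) and all shapes are illustrative assumptions, not code from the paper or an official repository:

```python
import numpy as np

def neural_operator_layer(v, grid, W, b, kappa, sigma=np.tanh):
    """One discretized neural-operator layer (illustrative sketch).

    v     : (n, d_v) values of the input function v_t at n grid points
    grid  : (n, d)   coordinates x_1..x_n of the grid points
    W     : (d_v, d_v) pointwise linear weight
    b     : (d_v,)   bias
    kappa : callable (x, y) -> (d_v, d_v) kernel matrix (e.g. a small MLP)
    """
    n = len(grid)
    # Kernel integral (K v)(x_i) ~= (1/n) * sum_j kappa(x_i, x_j) @ v(x_j),
    # a quadrature / Monte Carlo approximation of the integral over D.
    Kv = np.stack([
        sum(kappa(grid[i], grid[j]) @ v[j] for j in range(n)) / n
        for i in range(n)
    ])
    # Pointwise linear term plus nonlinearity: v_{t+1} = sigma(W v_t + K v_t + b).
    return sigma(v @ W.T + Kv + b)

# Illustrative usage on a 1-D grid with a toy Gaussian kernel (an assumption,
# standing in for the learned kernel network of the paper).
if __name__ == "__main__":
    n, d_v = 32, 4
    grid = np.linspace(0.0, 1.0, n)[:, None]
    rng = np.random.default_rng(0)
    v = rng.standard_normal((n, d_v))
    W = rng.standard_normal((d_v, d_v)) / np.sqrt(d_v)
    b = np.zeros(d_v)
    kernel_weights = rng.standard_normal((d_v, d_v)) / np.sqrt(d_v)
    kappa = lambda x, y: np.exp(-np.sum((x - y) ** 2)) * kernel_weights
    print(neural_operator_layer(v, grid, W, b, kappa).shape)  # (32, 4)
```

Because the layer acts on function values sampled at arbitrary points, the same weights can be applied at any grid resolution; this discretization-invariance is what distinguishes the construction from an ordinary finite-dimensional network.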