2021
DOI: 10.48550/arxiv.2110.13297
Preprint

Fast PDE-constrained optimization via self-supervised operator learning

Cited by 5 publications (6 citation statements) · References 43 publications

“…The trunk network is composed of 5 hidden layers with 300 neurons in each layer and 150 neurons in the output layer. The loss function for known data is similar to (25). Taking into account the physics-informed MOONet and α = 1, the corresponding loss function is expressed as follows…”
Section: Preprint (not peer reviewed)
Mentioning confidence: 99%
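
The excerpt above pins down a concrete trunk-network architecture (5 hidden layers of 300 neurons each, a 150-dimensional output). A minimal sketch of such a trunk network is given below; the tanh activation and the input dimension are assumptions, as the excerpt does not state them.

```python
import torch
import torch.nn as nn

class TrunkNet(nn.Module):
    """Trunk network matching the excerpt: 5 hidden layers of 300 neurons
    and a 150-neuron output layer. Tanh activation and a 2-D input are
    illustrative assumptions, not taken from the cited work."""
    def __init__(self, in_dim: int = 2, hidden: int = 300, out_dim: int = 150):
        super().__init__()
        layers, width = [], in_dim
        for _ in range(5):                          # 5 hidden layers
            layers += [nn.Linear(width, hidden), nn.Tanh()]
            width = hidden
        layers.append(nn.Linear(width, out_dim))    # 150-dim output layer
        self.net = nn.Sequential(*layers)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        return self.net(y)
```
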
“…Barry-Straume et al. use a two-stage framework to solve PDE-constrained optimization problems [24]. Wang et al. use the physics-informed deep operator network (DeepONet) framework to learn the solution operator of parametric PDEs, which builds a surrogate for solving PDE-constrained optimization problems [25].…”
Section: Introduction
Mentioning confidence: 99%
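
The surrogate-based approach described in this excerpt amounts to replacing the PDE solve inside the optimization loop with a trained operator network and differentiating through it. A minimal sketch of that outer loop is shown below; the surrogate interface (surrogate(u, query_pts) returning the PDE solution at the query points) and the quadratic tracking objective are hypothetical choices for illustration.

```python
import torch

def optimize_control(surrogate, u0, target, query_pts, steps=500, lr=1e-2):
    """Gradient-based PDE-constrained optimization through a frozen
    neural-operator surrogate. The interface surrogate(u, y) -> solution
    values at y, and the quadratic tracking misfit, are assumptions."""
    u = u0.detach().clone().requires_grad_(True)   # control variable
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        s = surrogate(u, query_pts)                # surrogate PDE "solve"
        loss = torch.mean((s - target) ** 2)       # tracking objective J(u)
        loss.backward()                            # gradients via autodiff
        opt.step()
    return u.detach()
```
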
“…We note that the same neural operator can also be deployed to accelerate Bayesian inverse problems governed by the same model but defined by a variety of different noise models, observation operators, or data, which leads to additional computational cost reduction via amortization of the model deployment across many different problems. Neural operators have also seen success in their deployment as surrogates for accelerating so-called "outer-loop" problems, such as inverse problems [36], Bayesian optimal experimental design [72], PDE-constrained optimization [73], etc., where models governed by PDEs need to be solved repeatedly at different samples of the input variables.…”
Section: Operator Learning With Neural Networks
Mentioning confidence: 99%
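
The amortization argument in this excerpt (one trained operator reused across many inverse-problem instances) can be illustrated schematically. In the sketch below, the Gaussian misfit, the surrogate interface surrogate(m) returning the PDE state, and the observation-operator callable are all illustrative assumptions, not taken from the cited work.

```python
import torch

def misfit(surrogate, m, data, obs_op, noise_std):
    """Data-misfit (negative log-likelihood up to a constant) of one
    inverse-problem instance, evaluated through a shared, pre-trained
    surrogate. Gaussian noise is an illustrative choice."""
    pred = obs_op(surrogate(m))      # surrogate forward map + observation
    return 0.5 * torch.sum(((pred - data) / noise_std) ** 2)

# One surrogate amortizes over many instances (different data,
# observation operators, or noise levels) with no further PDE solves:
#   for data, obs_op, sigma in instances:
#       J = misfit(surrogate, m, data, obs_op, sigma)
```
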
“…The approximation quality also serves as the stopping criterion of the algorithm. Another approach involving physics-informed deep operator networks to accelerate PDE-constrained optimization in a self-supervised manner has recently been suggested in [33].…”
Section: Introduction
Mentioning confidence: 99%