We develop a general framework for data-driven approximation of input-output maps between infinite-dimensional spaces. The proposed approach is motivated by the recent successes of neural networks and deep learning, in combination with ideas from model reduction. This combination results in a neural network approximation which, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of the finite-dimensional approximations of these spaces required for computation. For a class of input-output maps, and suitably chosen probability measures on the inputs, we prove convergence of the proposed approximation methodology. We also include numerical experiments that demonstrate the effectiveness of the method, showing convergence and robustness of the approximation scheme with respect to the size of the discretization, and compare it with existing algorithms from the literature; our examples include the mapping from coefficient to solution in a divergence-form elliptic partial differential equation (PDE) problem, and the solution operator for the viscous Burgers' equation.
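As a minimal sketch of this kind of approach, assuming a PCA-based model reduction and a standard feed-forward network (the function names, dimensions, and hyperparameters below are illustrative assumptions, not the authors' exact architecture): discretized input and output functions are reduced to low-dimensional latent coordinates, and a neural network learns the map between them.

    # Illustrative sketch only: approximate an input-output map between function
    # spaces by (1) reducing discretized inputs/outputs with PCA and (2) learning
    # the map between the PCA coefficients with a small neural network.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    def fit_reduced_map(X, Y, d_in=50, d_out=50):
        """X, Y: (n_samples, n_gridpoints) arrays of discretized functions."""
        pca_in, pca_out = PCA(n_components=d_in), PCA(n_components=d_out)
        Zx = pca_in.fit_transform(X)   # latent input coordinates
        Zy = pca_out.fit_transform(Y)  # latent output coordinates
        net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=2000)
        net.fit(Zx, Zy)                # learn the map between latent spaces
        def surrogate(x_new):
            # encode -> neural map -> decode; the latent dimension is fixed,
            # so the learned map is insensitive to the underlying grid size
            z = pca_in.transform(np.atleast_2d(x_new))
            return pca_out.inverse_transform(net.predict(z))
        return surrogate

Because the network acts only on fixed-dimensional latent coordinates, refining the discretization changes the encoders but not the size of the learned map, which is one way to realize the discretization robustness described above.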
In this article we consider regularizations of the Dirac delta distribution with applications to prototypical elliptic and hyperbolic partial differential equations (PDEs). We study the convergence of a sequence of distributions $\delta_h$ to the singular term $\delta$ as a parameter $h$ (associated with the support size of $\delta_h$) shrinks to zero. We characterize this convergence both in the weak-* topology of distributions and in a weighted Sobolev norm. These notions motivate a framework for constructing regularizations of the delta distribution that includes a large class of existing methods in the literature and allows different regularizations to be compared. The convergence of solutions of PDEs with these regularized source terms is then studied in various topologies, such as pointwise convergence on a deleted neighborhood and weighted Sobolev norms. We also examine the lack of symmetry in tensor-product regularizations and the effects of dissipative error in hyperbolic problems.
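For concreteness, one standard member of the class of regularizations just described (the specific kernel here is our illustrative choice, not necessarily one analyzed in the article) is the piecewise-linear hat function:

$$ \delta_h(x) = \frac{1}{h}\,\phi\!\left(\frac{x}{h}\right), \qquad \phi(x) = \begin{cases} 1 - |x|, & |x| \le 1, \\ 0, & |x| > 1. \end{cases} $$

Since $\phi$ has unit mass and vanishing first moment, $\int_{\mathbb{R}} \delta_h(x)\, f(x)\, dx = f(0) + O(h^2)$ for smooth $f$, so $\delta_h \to \delta$ in the weak-* topology as $h \to 0$.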
We present a new class of prior measures, based on the generalized Gamma distribution, in connection with $\ell_p$ regularization techniques for $p \in (0, 1)$. We show that the resulting prior measure is heavy-tailed, non-convex, and infinitely divisible. Motivated by this observation, we discuss the class of infinitely divisible prior measures and draw a connection between their tail behavior and the tail behavior of their Lévy measures. Next, we use the laws of pure-jump Lévy processes to define new classes of prior measures that are concentrated on the space of functions of bounded variation. These priors serve as an alternative to the classic total variation prior and result in well-defined inverse problems. We then study the well-posedness of Bayesian inverse problems in a setting general enough to encompass the above-mentioned classes of prior measures. We establish that well-posedness relies on a balance between the growth of the log-likelihood function and the tail behavior of the prior, and we apply our results to special cases such as additive noise models and linear problems. Finally, we discuss practical aspects of Bayesian inverse problems, such as their consistent approximation, and present three concrete examples of well-posed Bayesian inverse problems with heavy-tailed or stochastic-process prior measures.
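As a minimal sketch of the connection to $\ell_p$ regularization (the parameterization below is our assumption, not the article's exact construction): a coordinatewise prior with density proportional to $\exp(-\lambda |x_i|^p)$ has negative log-density equal to the non-convex $\ell_p$ penalty, so MAP estimation under such a prior reproduces $\ell_p$-regularized inversion.

    # Illustrative sketch (assumed parameterization): negative log-density of a
    # coordinatewise prior proportional to exp(-lam * |x_i|**p). For p in (0, 1)
    # this is the non-convex l_p penalty, and the density has heavier tails
    # than a Gaussian.
    import numpy as np

    def neg_log_prior(x, p=0.5, lam=1.0):
        """Negative log prior density, up to an additive constant."""
        assert 0.0 < p < 1.0, "heavy-tailed, non-convex regime requires p in (0, 1)"
        return lam * np.sum(np.abs(x) ** p)

    # For an additive Gaussian noise model y = G(x) + eta, the MAP objective is
    # 0.5 * np.sum((y - G(x))**2) / sigma**2 + neg_log_prior(x).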