In many situations across computational science and engineering, multiple computational models are available that describe a system of interest. These models vary in evaluation cost and in fidelity. Typically, a computationally expensive high-fidelity model describes the system with the accuracy required by the application at hand, while lower-fidelity models are less accurate but computationally cheaper. Outer-loop applications, such as optimization, inference, and uncertainty quantification, require multiple model evaluations at many different inputs, which often leads to computational demands that exceed available resources if only the high-fidelity model is used. This work surveys multifidelity methods that accelerate the solution of outer-loop applications by combining high-fidelity and low-fidelity model evaluations, where the low-fidelity evaluations arise from an explicit low-fidelity model (e.g., a simplified-physics approximation, a reduced model, or a data-fit surrogate) that approximates the same output quantity as the high-fidelity model. The overall premise of these multifidelity methods is that low-fidelity models are leveraged for speedup while the high-fidelity model is kept in the loop to establish accuracy and/or convergence guarantees. We categorize multifidelity methods according to three classes of strategies: adaptation, fusion, and filtering. The paper reviews multifidelity methods in the outer-loop contexts of uncertainty propagation, inference, and optimization.
This work presents a nonintrusive projection-based model reduction approach for full models based on time-dependent partial differential equations. Projection-based model reduction constructs the operators of a reduced model by projecting the equations of the full model onto a reduced space. Traditionally, this projection is intrusive, which means that the full-model operators are required either explicitly in an assembled form or implicitly through a routine that returns the action of the operators on a given vector; however, in many situations the full model is given as a black box that computes trajectories of the full-model states and outputs for given initial conditions and inputs, but does not provide the full-model operators. Our nonintrusive operator inference approach infers approximations of the reduced operators from the initial conditions, inputs, trajectories of the states, and outputs of the full model, without requiring the full-model operators. Our operator inference is applicable to full models that are linear in the state or have a low-order polynomial nonlinear term. The inferred operators are the solution of a least-squares problem and converge, with sufficient state trajectory data, in the Frobenius norm to the reduced operators that would be obtained via an intrusive projection of the full-model operators. Our numerical results demonstrate operator inference on a linear climate model and on a tubular reactor model with a polynomial nonlinear term of third order.
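The least-squares inference described above can be illustrated with a minimal NumPy sketch. Everything here is an invented toy example (the linear operator, dimensions, and data are not from the paper): for a linear system whose state trajectories lie in a low-dimensional subspace, the inferred reduced operator recovers the intrusive projection exactly.

```python
import numpy as np

# Toy operator-inference sketch for a linear system dx/dt = A x.
# All names and data are illustrative, not from the original work.
rng = np.random.default_rng(0)
n, r, K = 50, 5, 200
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))  # "full-model" operator

# Black-box trajectory data whose states lie in an r-dimensional subspace
W = np.linalg.qr(rng.standard_normal((n, r)))[0]
X = W @ rng.standard_normal((r, K))   # sampled states
Xdot = A @ X                          # "measured" time derivatives

# POD basis from the state snapshots
V = np.linalg.svd(X, full_matrices=False)[0][:, :r]

# Project the data onto the reduced space
Xr = V.T @ X
Xrdot = V.T @ Xdot

# Operator inference: least-squares fit min_{Ar} || Ar Xr - Xrdot ||_F,
# solved via the transposed problem Xr^T Ar^T = Xrdot^T
Ar = np.linalg.lstsq(Xr.T, Xrdot.T, rcond=None)[0].T

# With sufficient data, Ar matches the intrusive projection V^T A V
Ar_intrusive = V.T @ A @ V
print(np.linalg.norm(Ar - Ar_intrusive))  # near machine precision
```

Note that only the trajectory data (`X`, `Xdot`) enter the inference; the operator `A` is used here solely to generate synthetic data and to verify the result.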
Abstract. This work presents an optimal model management strategy that exploits multifidelity surrogate models to accelerate the estimation of statistics of outputs of computationally expensive high-fidelity models. Existing acceleration methods typically exploit a multilevel hierarchy of surrogate models that follow known rates of error decay and computational cost; however, a general collection of surrogate models, which may include projection-based reduced models, data-fit models, support vector machines, and simplified-physics models, does not necessarily give rise to such a hierarchy. Our multifidelity approach provides a framework to combine an arbitrary number of surrogate models of any type. Instead of relying on error and cost rates, an optimization problem balances the number of model evaluations across the high-fidelity and surrogate models with respect to error and cost. We show that a unique analytic solution of the model management optimization problem exists under mild conditions on the models. Our multifidelity method makes occasional recourse to the high-fidelity model; in doing so, it provides an unbiased estimator of the statistics of the high-fidelity model, even in the absence of error bounds and error estimators for the surrogate models. Numerical experiments with linear and nonlinear examples show speedups by orders of magnitude compared to Monte Carlo estimation that invokes a single model only.

1. Introduction. Multilevel techniques have a long and successful history in computational science and engineering, e.g., multigrid for solving systems of equations [8,25,9], multilevel discretizations for representing functions [50,18,10], and multilevel Monte Carlo and multilevel stochastic collocation for estimating mean solutions of partial differential equations (PDEs) with stochastic parameters [27,22,45].
These multilevel techniques typically start with a fine-grid discretization (a high-fidelity model) of the underlying PDE or function. The fine-grid discretization is chosen to guarantee an approximation of the output of interest with the accuracy required by the problem at hand. Additionally, a hierarchy of coarser discretizations (lower-fidelity surrogate models) is constructed, where a parameter (e.g., mesh width) controls the trade-off between error and computational cost. Varying this parameter gives rise to a multilevel hierarchy of discretizations with known error and cost rates. Multilevel techniques use these rates to distribute the computational work among the discretizations in the hierarchy, shifting most of the work onto the
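The core idea of combining a high-fidelity model with a cheap correlated surrogate can be sketched in a few lines. The two models below are invented toy functions (not the paper's examples), and for brevity the control-variate weight is estimated from the same shared samples, so this simplified form is only approximately unbiased; with a fixed weight the estimator is exactly unbiased.

```python
import numpy as np

# Toy two-model multifidelity Monte Carlo estimator of E[f_hi(Z)], Z ~ N(0,1).
# f_hi and f_lo are illustrative stand-ins, not the paper's models.
rng = np.random.default_rng(1)

f_hi = lambda z: z**2                    # "expensive" high-fidelity model
f_lo = lambda z: 0.9 * z**2 + 0.1 * z    # cheap, correlated surrogate

m_hi, m_lo = 100, 10_000                 # evaluation budget per model
z_lo = rng.standard_normal(m_lo)         # surrogate samples
z_hi = z_lo[:m_hi]                       # shared samples couple the two models

y_hi, y_lo = f_hi(z_hi), f_lo(z_hi)
# Control-variate weight from the shared evaluations
alpha = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo, ddof=1)

# Multifidelity estimate: high-fidelity mean plus a surrogate correction
est = y_hi.mean() + alpha * (f_lo(z_lo).mean() - y_lo.mean())
print(est)  # close to the true mean E[Z^2] = 1
```

The surrogate never replaces the high-fidelity model; it only reduces the variance of the correction term, which is the sense in which the high-fidelity model stays "in the loop."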
Abstract. Sparse grids allow one to employ grid-based discretization methods in data-driven problems. We present an extension of the classical sparse grid approach that allows us to tackle high-dimensional problems by spatially adaptive refinement, modified ansatz functions, and efficient regularization techniques. The competitiveness of this method is shown for typical benchmark problems with up to 166 dimensions for classification in data mining, pointing out properties of sparse grids in this context. To gain insight into the adaptive refinement and to examine the scope for further improvements, the approximation of non-smooth indicator functions with adaptive sparse grids has been studied as a model problem. As an example of an improved adaptive grid refinement, we present results for an edge-detection strategy.
This paper presents a new approach to construct more efficient reduced-order models for nonlinear partial differential equations with proper orthogonal decomposition and the discrete empirical interpolation method (DEIM). Whereas DEIM projects the nonlinear term onto one global subspace, our localized discrete empirical interpolation method (LDEIM) computes several local subspaces, each tailored to a particular region of characteristic system behavior. Then, depending on the current state of the system, LDEIM selects an appropriate local subspace for the approximation of the nonlinear term. In this way, the dimensions of the local DEIM subspaces, and thus the computational costs, remain low even though the system might exhibit a wide range of behaviors as it passes through different regimes. LDEIM uses machine learning methods in the offline computational phase to discover these regions via clustering. Local DEIM approximations are then computed for each cluster. In the online computational phase, machine-learning-based classification procedures select one of these local subspaces adaptively as the computation proceeds. The classification can be achieved using either the system parameters or a low-dimensional representation of the current state of the system obtained via feature extraction. The LDEIM approach is demonstrated for a reacting flow example of an H2-air flame. In this example, where the system state has a strong nonlinear dependence on the parameters, the LDEIM provides speedups of two orders of magnitude over standard DEIM.
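The offline clustering / online classification workflow described above can be sketched as follows. This is a deliberately simplified illustration with synthetic two-regime data, a hand-rolled k-means, and nearest-centroid classification; it conveys the local-subspace idea, not the paper's actual implementation.

```python
import numpy as np

# Sketch of the LDEIM idea: cluster snapshots offline, build one local basis
# per cluster, and select a basis online by classifying the current state.
# Synthetic data and parameters are illustrative only.
rng = np.random.default_rng(2)

# Offline: snapshots from two well-separated "regimes"
S1 = rng.standard_normal((20, 50)) + 5.0
S2 = rng.standard_normal((20, 50)) - 5.0
S = np.hstack([S1, S2])

# Simple k-means (k=2) on snapshot columns; deterministic seeding for brevity
k = 2
cent = np.stack([S[:, 0], S[:, 50]], axis=1).copy()
for _ in range(20):
    labels = np.argmin(((S[:, :, None] - cent[:, None, :])**2).sum(0), axis=1)
    cent = np.stack([S[:, labels == j].mean(1) for j in range(k)], axis=1)

# Local bases: one truncated POD basis per cluster
bases = [np.linalg.svd(S[:, labels == j], full_matrices=False)[0][:, :3]
         for j in range(k)]

# Online: classify the current state by nearest centroid, use its local basis
def select_basis(x):
    j = int(np.argmin(((x[:, None] - cent)**2).sum(0)))
    return j, bases[j]

x_new = rng.standard_normal(20) + 5.0   # a state from the first regime
j, U = select_basis(x_new)
print(j, U.shape)
```

In the paper's setting the classifier may also use system parameters or extracted features rather than the raw state; nearest-centroid classification is used here only as the simplest stand-in.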
We present Lift & Learn, a physics-informed method for learning low-dimensional models for large-scale dynamical systems. The method exploits knowledge of a system's governing equations to identify a coordinate transformation in which the system dynamics have quadratic structure. This transformation is called a lifting map because it often adds auxiliary variables to the system state. The lifting map is applied to data obtained by evaluating a model for the original nonlinear system. This lifted data is projected onto its leading principal components, and low-dimensional linear and quadratic matrix operators are fit to the lifted reduced data using a least-squares operator inference procedure. Analysis of our method shows that the Lift & Learn models are able to capture the system physics in the lifted coordinates at least as accurately as traditional intrusive model reduction approaches. This preservation of system physics makes the Lift & Learn models robust to changes in inputs. Numerical experiments on the FitzHugh-Nagumo neuron activation model and the compressible Euler equations demonstrate the generalizability of our model.
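A tiny worked example makes the lifting-map idea concrete. The scalar ODE dx/dt = -x^3 has a cubic right-hand side, but introducing the auxiliary variable w = x^2 yields a lifted system with purely quadratic dynamics: dx/dt = -xw and dw/dt = 2x(dx/dt) = -2w^2. This toy ODE is chosen for illustration and is not one of the paper's examples.

```python
import numpy as np

# Lifting-map sketch: the cubic ODE dx/dt = -x^3 becomes quadratic in the
# lifted variables (x, w) with w = x^2:
#   dx/dt = -x * w,   dw/dt = -2 * w^2
def f_orig(x):
    return -x**3

def f_lifted(s):
    x, w = s
    return np.array([-x * w, -2.0 * w**2])

# Forward-Euler integration of both systems from a consistent initial state
dt, T = 1e-3, 2.0
x = 1.0
s = np.array([1.0, 1.0])          # (x, w) with w = x0^2
for _ in range(int(T / dt)):
    x = x + dt * f_orig(x)
    s = s + dt * f_lifted(s)

print(x, s[0])                    # the lifted x-component tracks the original
```

Lift & Learn fits low-dimensional quadratic operators to lifted data like this; because the exact dynamics are quadratic in the lifted coordinates, the learned model can capture the physics rather than merely interpolating it.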
Abstract. This work presents a nonlinear model reduction approach for systems of equations stemming from the discretization of partial differential equations with nonlinear terms. Our approach constructs a reduced system with proper orthogonal decomposition and the discrete empirical interpolation method (DEIM); however, whereas classical DEIM derives a linear approximation of the nonlinear terms in a static DEIM space generated in an offline phase, our method adapts the DEIM space as the online calculation proceeds and thus provides a nonlinear approximation. The online adaptation uses new data to produce a reduced system that accurately approximates behavior not anticipated in the offline phase. These online data are obtained by querying the full-order system during the online phase, but only at a few selected components to guarantee a computationally efficient adaptation. Compared to the classical static approach, our online adaptive and nonlinear model reduction approach achieves accuracy improvements of up to three orders of magnitude in our numerical experiments with time-dependent and steady-state nonlinear problems. The examples also demonstrate that through adaptivity, our reduced systems provide valid approximations of the full-order systems outside of the parameter domains for which they were initially built in the offline phase.

Key words. adaptive model reduction, nonlinear systems, empirical interpolation, proper orthogonal decomposition

AMS subject classifications. 65M22, 65N22

DOI. 10.1137/140989169

1. Introduction. Model reduction derives reduced systems from large-scale systems of equations, typically using an offline phase in which the reduced system is constructed from solutions of the full-order system, and an online phase in which the reduced system is executed repeatedly to generate solutions for the task at hand.
In many situations, the reduced systems yield accurate approximations of the full-order solutions with orders-of-magnitude reduction in computational complexity. Model reduction exploits the fact that the solutions are often not scattered all over the high-dimensional solution space but instead form a low-dimensional manifold that can be approximated by a low-dimensional (linear) reduced space. In some cases, however, the manifold exhibits a complex and nonlinear structure that is approximated well by a linear reduced space only if the dimension of that space is chosen sufficiently high. Solving the reduced system can then become computationally expensive. We therefore propose a nonlinear approximation of the manifold, based on adapting the reduced system while the computation proceeds in the online phase, using data newly generated through limited queries to the full-order system at a few selected components. Our online adaptation leads to a reduced system that captures nonlinear structure in the manifold more efficiently; it ensures computational efficiency by performing low-rank updates, and through the use of new
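The flavor of a low-rank online update from a few sampled components can be sketched as follows. This is a hedged, simplified illustration in the spirit of the adaptation described above (the basis, data, and sampled indices are invented): a rank-one correction supported only on the queried rows makes the basis reproduce the new data at exactly those components.

```python
import numpy as np

# Sketch of an online rank-one basis update from sparsely sampled data.
# Illustrative only; not the paper's full adaptation scheme.
rng = np.random.default_rng(3)
n, r = 100, 4
U = np.linalg.qr(rng.standard_normal((n, r)))[0]   # current reduced basis

f = rng.standard_normal(n)                         # new full-order information
P = np.array([3, 17, 42, 58, 77, 91])              # only these rows are queried

# Coefficients and residual restricted to the sampled rows
c = np.linalg.lstsq(U[P], f[P], rcond=None)[0]
res = f[P] - U[P] @ c

# Rank-one additive update supported on the sampled rows:
# U_new[P] @ c = U[P] @ c + res = f[P]
U_new = U.copy()
U_new[P] += np.outer(res, c) / (c @ c)

print(np.linalg.norm(f[P] - U_new[P] @ c))         # sampled residual vanishes
```

Because the update touches only the queried rows and has rank one, its cost is independent of repeatedly solving the full-order system, which is what keeps the online adaptation cheap.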