When carrying out design searches, traditional variable screening techniques can find it extremely difficult to distinguish between important and unimportant variables. This is particularly true when only a small number of simulations is combined with a parameterization that results in a large number of variables of seemingly equal importance. Here, the authors present a variable reduction technique that employs proper orthogonal decomposition to filter out undesirable or badly performing geometries from an optimization process. Unlike traditional screening techniques, the presented method operates at the geometric level rather than the variable level: the filtering process uses the designs that result from a geometry parameterization instead of the variables that control the parameterization. The method is shown to perform well in the optimization of a two-dimensional airfoil for the minimization of drag-to-lift ratio, producing designs better than those resulting from a traditional kriging-based surrogate model optimization, with a significant reduction in surrogate tuning cost.
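As a rough illustration of filtering at the geometric rather than the variable level, the sketch below builds a proper orthogonal decomposition (POD) basis from a set of candidate geometries and reconstructs them from the leading modes only, discarding the poorly-represented variation. The function name, the random toy "designs", and the retained mode count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pod_filter(geometries, n_modes):
    """Project a set of candidate geometries onto the leading modes of a
    proper orthogonal decomposition and reconstruct them from those modes.

    geometries : (n_designs, n_coords) array, each row a discretised geometry
    n_modes    : number of POD modes retained
    """
    mean = geometries.mean(axis=0)
    fluct = geometries - mean                       # snapshot deviations
    # POD modes are the right singular vectors of the snapshot matrix
    _, _, vt = np.linalg.svd(fluct, full_matrices=False)
    modes = vt[:n_modes]                            # (n_modes, n_coords)
    coeffs = fluct @ modes.T                        # project onto modes
    return mean + coeffs @ modes                    # filtered reconstruction

# toy example: 20 random airfoil-like profiles, each of 50 surface points
rng = np.random.default_rng(0)
designs = rng.normal(size=(20, 50))
filtered = pod_filter(designs, n_modes=5)
```

By SVD optimality, the reconstruction error can only decrease as more modes are retained, so the number of retained modes controls how aggressively the geometry set is filtered.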
Multipoint objective functions are often employed within aerodynamic optimizations to prevent a reduction in off-design performance. However, this typically results in the need for a significant number of simulations at a variety of design conditions to calculate the objective function. The following paper attempts to address this problem through the application of a multilevel cokriging model within the optimization process. A large number of single-point design simulations are augmented by a smaller number of multipoint simulations. The technique is shown to result in surrogate models as effective as those produced using a traditional multipoint process when optimizing a transonic airfoil, but with a reduction in the total number of simulations.

Nomenclature
K = total number of design conditions
n = total number of sample points
p = hyperparameter governing smoothness
R = correlation matrix
r = correlations between known and unknown points
w = design point weighting
X = matrix of design points
x = vector of design variables
y = vector of objective function values
Z(x) = Gaussian process
θ = hyperparameter governing correlation
λ = regression constant
μ = mean
ρ = scaling parameter between expensive and cheap data
σ² = variance
φ = concentrated log likelihood
Subscripts
c = cheap data
d = difference between cheap and expensive data
e = expensive data
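A minimal one-dimensional sketch of the two-level idea, in the Kennedy and O'Hagan style often used for cokriging, is shown below: a kriging model is fitted to many cheap samples, a second model is fitted to the scaled difference between expensive and cheap data at the few expensive points, and the two are summed. The test functions, hyperparameter values, and sample plans are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def corr(X1, X2, theta):
    # Gaussian correlation: R_ij = exp(-theta * (x_i - x_j)^2)
    d = X1[:, None] - X2[None, :]
    return np.exp(-theta * d**2)

def fit_kriging(X, y, theta, nugget=1e-8):
    """Fit a simple constant-mean kriging interpolator and return a predictor."""
    R = corr(X, X, theta) + nugget * np.eye(len(X))
    ones = np.ones(len(X))
    mu = (ones @ np.linalg.solve(R, y)) / (ones @ np.linalg.solve(R, ones))
    Rinv_res = np.linalg.solve(R, y - mu)
    return lambda x: mu + corr(np.atleast_1d(x), X, theta) @ Rinv_res

# hypothetical cheap/expensive analytic pair (Forrester-style)
fc = lambda x: 0.5 * (6*x - 2)**2 * np.sin(12*x - 4) + 10*(x - 0.5)
fe = lambda x: (6*x - 2)**2 * np.sin(12*x - 4)

Xc = np.linspace(0.0, 1.0, 11)           # many cheap samples
Xe = np.array([0.0, 0.4, 0.6, 1.0])      # few expensive samples

cheap = fit_kriging(Xc, fc(Xc), theta=20.0)
rho = np.polyfit(fc(Xe), fe(Xe), 1)[0]                 # scaling between fidelities
diff = fit_kriging(Xe, fe(Xe) - rho * fc(Xe), theta=5.0)
predict = lambda x: rho * cheap(x) + diff(x)           # cokriging prediction
```

Because the difference model interpolates the residuals exactly, the combined predictor reproduces the expensive data at its sample points while the cheap model shapes the trend between them.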
Surrogate models or metamodels are commonly used to exploit expensive computational simulations within a design optimization framework. The application of multi-fidelity surrogate modeling approaches has recently been gaining ground due to the potential for further reductions in simulation effort over single-fidelity approaches. However, given a black-box problem, when exactly should a designer select a multi-fidelity approach over a single-fidelity approach, and vice versa? Using a series of analytical test functions and engineering design examples from the literature, the following paper illustrates the potential pitfalls of choosing one technique over the other without a careful consideration of the optimization problem at hand. These examples are then used to define and validate a set of guidelines for the creation of a multi-fidelity kriging model. The resulting guidelines state that the different fidelity functions should be well correlated, that the amount of low-fidelity data in the model should be greater than the amount of high-fidelity data, and that more than 10% and less than 80% of the total simulation budget should be spent on low-fidelity simulations in order for the resulting multi-fidelity model to perform better than a high-fidelity model of equivalent cost.
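The budget guidelines above lend themselves to a simple pre-flight check. The helper below is a hypothetical sketch: the 10%/80% budget window and the more-low-than-high-fidelity rule come from the abstract, while the 0.9 correlation threshold is an assumption of this sketch, since the abstract says only "well correlated" without quantifying it.

```python
def check_multifidelity_guidelines(n_hi, n_lo, cost_ratio, correlation):
    """Check a proposed multi-fidelity kriging setup against the guidelines.

    n_hi, n_lo  : number of high-/low-fidelity samples
    cost_ratio  : cost of one low-fidelity run relative to one high-fidelity run
    correlation : estimated correlation between the two fidelity levels
    Returns a list of warning strings (empty if all guidelines are met).
    """
    warnings = []
    if correlation < 0.9:  # assumed threshold for "well correlated"
        warnings.append("fidelity levels may be too weakly correlated")
    if n_lo <= n_hi:
        warnings.append("use more low-fidelity than high-fidelity samples")
    # fraction of the total simulation budget spent on low-fidelity runs
    lo_fraction = n_lo * cost_ratio / (n_lo * cost_ratio + n_hi)
    if not 0.10 < lo_fraction < 0.80:
        warnings.append("low-fidelity budget share outside the 10-80% window")
    return warnings
```

For example, 40 low-fidelity runs at a tenth of the high-fidelity cost alongside 10 high-fidelity runs (with correlation 0.95) passes all three checks, while 5 low-fidelity runs against 10 high-fidelity runs does not.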
Optimizations involving high fidelity simulations can become prohibitively expensive when an exhaustive search is employed. To remove this expense a surrogate model is often constructed. One of the most popular techniques for the construction of such a surrogate model is that of kriging. However, the construction of a kriging model requires the optimization of a multimodal likelihood function, the cost of which can approach that of the high fidelity simulations upon which the model is based. The following paper describes the development of a hybridized particle swarm algorithm that aims to reduce the cost of this likelihood optimization by drawing on an efficient adjoint of the likelihood. This hybridized tuning strategy is compared to a number of other strategies with respect to the inverse design of an airfoil as well as the optimization of an airfoil for minimum drag at a fixed lift.
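The hybridization idea, combining a global particle swarm with periodic gradient-based refinement of the incumbent best, can be sketched as below. This is a generic illustration under assumed parameters: the paper's gradient comes from an adjoint of the kriging likelihood, whereas here `grad` is any supplied derivative and the multimodal test function merely stands in for a likelihood surface.

```python
import numpy as np

def hybrid_pso(f, grad, bounds, n_particles=20, iters=100, seed=0):
    """Minimise f on a 1-D interval with a particle swarm, periodically
    refining the best point with a few steepest-descent steps."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, n_particles)          # particle positions
    v = np.zeros(n_particles)                     # particle velocities
    pbest, pbest_f = x.copy(), f(x)               # personal bests
    for it in range(iters):
        gbest = pbest[np.argmin(pbest_f)]         # swarm best
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = f(x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        if it % 10 == 0:                          # hybrid step: local descent
            y = pbest[np.argmin(pbest_f)]
            for _ in range(5):
                y = np.clip(y - 0.01 * grad(y), lo, hi)
            if f(y) < pbest_f.min():
                i = np.argmin(pbest_f)
                pbest[i], pbest_f[i] = y, f(y)
    return pbest[np.argmin(pbest_f)]

# multimodal 1-D test function standing in for the likelihood surface
f = lambda x: np.sin(3 * x) + (x / 3) ** 2
g = lambda x: 3 * np.cos(3 * x) + 2 * x / 9
xbest = hybrid_pso(f, g, bounds=(-4.0, 4.0))
```

The swarm provides global coverage of the multimodal surface while the cheap gradient steps accelerate final convergence, which is the motivation for pairing the swarm with an efficient adjoint.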
To study climate change on multi-millennial timescales or to explore a model's parameter space, efficient models with simplified and parameterised processes are required. However, the reduction in explicitly modelled processes can lead to underestimation of some atmospheric responses that are essential to the understanding of the climate system. While more complex general circulation models are available and capable of simulating a more realistic climate, they are too computationally intensive for these purposes. In this work, we propose a multi-level Gaussian emulation technique to efficiently estimate the outputs of steady-state simulations of an expensive atmospheric model in response to changes in boundary forcing. The link between a computationally expensive atmospheric model, PLASIM (Planet Simulator), and a cheaper model, EMBM (energy–moisture balance model), is established through the common boundary condition specified by an ocean model, allowing for information to be propagated from one to the other. This technique allows PLASIM emulators to be built at a low cost. The method is first demonstrated by emulating a scalar summary quantity, the global mean surface air temperature. It is then employed to emulate the dimensionally reduced 2-D surface air temperature field. Even though the two atmospheric models chosen are structurally unrelated, Gaussian process emulators of PLASIM atmospheric variables are successfully constructed using EMBM as a fast approximation. With the extra information gained from the cheap model, the multi-level emulator of PLASIM's 2-D surface air temperature field is built using only one-third the amount of expensive data required by the normal single-level technique. The constructed emulator is shown to capture 93.2 % of the variance across the validation ensemble, with an average RMSE of 1.33 °C.
Using the proposed method, further quantities from PLASIM can be emulated and used to study the effects introduced by PLASIM's atmosphere.
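The "dimensionally reduced 2-D field" step can be sketched as follows: reduce the ensemble of output fields with a principal-component decomposition, then fit one cheap response model per retained component. This is only a structural illustration under assumptions: the paper fits Gaussian process emulators and links two climate models, whereas this sketch uses a single forcing parameter, a synthetic "temperature" field, and quadratic fits in place of GPs to stay short.

```python
import numpy as np

def build_field_emulator(params, fields, n_modes=3):
    """Emulate a flattened 2-D output field from a scalar forcing parameter.

    params : (n_runs,) training values of the forcing parameter
    fields : (n_runs, n_grid) flattened output fields, one row per run
    Returns a callable mapping a parameter value to a predicted field.
    """
    mean = fields.mean(axis=0)
    # principal spatial modes of the ensemble (EOFs)
    _, _, vt = np.linalg.svd(fields - mean, full_matrices=False)
    modes = vt[:n_modes]                          # (n_modes, n_grid)
    scores = (fields - mean) @ modes.T            # per-run mode amplitudes
    # fit a quadratic in the forcing parameter to each mode amplitude
    fits = [np.polyfit(params, scores[:, k], 2) for k in range(n_modes)]
    def emulate(p):
        amps = np.array([np.polyval(c, p) for c in fits])
        return mean + amps @ modes
    return emulate

# toy ensemble: 12 runs of a synthetic 256-point "surface temperature" field
p_train = np.linspace(0.0, 1.0, 12)
grid = np.linspace(0.0, 1.0, 256)
fields = np.array([np.sin(4 * grid) * p + p**2 for p in p_train])
emu = build_field_emulator(p_train, fields)
```

Emulating a handful of mode amplitudes instead of every grid point is what makes field emulation tractable; a multi-level scheme would additionally inform each amplitude model with the cheap model's output.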
Likelihood maximization underpins model construction in many different areas of computational modelling. However, building such models via likelihood maximization requires the solution of a difficult multimodal optimization problem involving an expensive O(n³) factorization. The optimization techniques used to solve this problem may require many such factorizations and can result in a significant bottleneck. This article derives, via reverse algorithmic differentiation, an adjoint formulation of the likelihood employed in the construction of a kriging model. This adjoint is found to calculate the likelihood and all of its derivatives more efficiently than the standard analytical method and can therefore be used within a simple local search, or within a hybrid global optimization, to accelerate convergence and thereby reduce the cost of the likelihood optimization.
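The quantity being optimized is the concentrated log-likelihood, in which the mean and variance are profiled out so that only the correlation hyperparameters remain. The sketch below shows the standard primal evaluation for a Gaussian correlation function; each call requires an O(n³) Cholesky factorization, which is the cost the adjoint formulation targets by returning all derivatives for a small multiple of one evaluation. The test data and hyperparameter value are illustrative assumptions.

```python
import numpy as np

def concentrated_log_likelihood(theta, X, y):
    """Concentrated ln-likelihood of a constant-mean kriging model
    with a 1-D Gaussian correlation function."""
    n = len(y)
    d2 = (X[:, None] - X[None, :]) ** 2
    R = np.exp(-theta * d2) + 1e-10 * np.eye(n)   # correlation matrix
    L = np.linalg.cholesky(R)                     # the O(n^3) factorisation
    ones = np.ones(n)
    # solve R v = b via the Cholesky factor: L (L^T v) = b
    Rinv_y = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Rinv_1 = np.linalg.solve(L.T, np.linalg.solve(L, ones))
    mu = (ones @ Rinv_y) / (ones @ Rinv_1)        # profiled mean
    res = y - mu
    sigma2 = res @ np.linalg.solve(L.T, np.linalg.solve(L, res)) / n
    log_det_R = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (n * np.log(sigma2) + log_det_R)

X = np.linspace(0.0, 1.0, 8)
y = np.sin(6 * X)
phi = concentrated_log_likelihood(theta=10.0, X=X, y=y)
```

A finite-difference gradient of this function would cost one extra factorization per hyperparameter, which is why an adjoint that delivers the full gradient from a single reverse sweep pays off as the number of hyperparameters grows.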