Many parameter estimation problems arising in applications are best cast in the framework of Bayesian inversion. This allows not only for an estimate of the parameters, but also for the quantification of uncertainties in the estimates. Often in such problems the parameter-to-data map is very expensive to evaluate, and computing derivatives of the map, or derivative-adjoints, may not be feasible. Additionally, in many applications only noisy evaluations of the map may be available. We propose an approach to Bayesian inversion in such settings that builds on the derivative-free optimization capabilities of ensemble Kalman inversion methods. The overarching approach is to first use ensemble Kalman sampling (EKS) to calibrate the unknown parameters to fit the data; second, to use the output of the EKS to emulate the parameter-to-data map; third, to sample from an approximate Bayesian posterior distribution in which the parameter-to-data map is replaced by its emulator. This results in a principled approach to approximate Bayesian inference that requires only a small number of evaluations of the (possibly noisy approximation of the) parameter-to-data map. It does not require derivatives of this map, but instead leverages the documented power of ensemble Kalman methods. Furthermore, the EKS has the desirable property that it evolves the parameter ensembles towards the regions in which the bulk of the parameter posterior mass is located, thereby locating them well for the emulation phase of the methodology. In essence, the EKS methodology provides a cheap solution to the design problem of where to place points in parameter space to efficiently train an emulator of the parameter-to-data map for the purposes of Bayesian inversion.
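The derivative-free update underlying the calibration step can be illustrated with a minimal ensemble Kalman inversion (EKI) iteration. The toy forward map `G`, the true parameters, the noise level, and the ensemble size below are all illustrative assumptions, not values from the paper; the sketch shows only the generic Kalman-style update that uses ensemble covariances in place of derivatives.

```python
import numpy as np

# Minimal sketch of an ensemble Kalman inversion iteration (assumed toy
# setup): no derivatives of G are needed; the gain is built entirely from
# ensemble statistics, which is what makes the approach derivative-free.

rng = np.random.default_rng(0)

def G(theta):
    """Toy parameter-to-data map R^2 -> R^2 (illustrative, mildly nonlinear)."""
    return np.array([theta[0] + theta[1], np.exp(theta[0])])

theta_true = np.array([1.0, 2.0])            # assumed "true" parameters
gamma = 0.01 * np.eye(2)                     # observation noise covariance
y = G(theta_true) + rng.multivariate_normal(np.zeros(2), gamma)

J = 50                                       # ensemble size
ensemble = rng.normal(0.0, 1.0, size=(J, 2)) # initial parameter ensemble

for _ in range(30):
    g = np.array([G(th) for th in ensemble])      # forward evaluations only
    dth = ensemble - ensemble.mean(axis=0)
    dg = g - g.mean(axis=0)
    C_tg = dth.T @ dg / J                         # parameter-data cross-covariance
    C_gg = dg.T @ dg / J                          # data covariance
    K = C_tg @ np.linalg.inv(C_gg + gamma)        # Kalman-style gain
    # perturbed observations give each member its own data realization
    y_pert = y + rng.multivariate_normal(np.zeros(2), gamma, size=J)
    ensemble = ensemble + (y_pert - g) @ K.T

print(ensemble.mean(axis=0))  # ensemble mean should lie near theta_true
```

The ensemble contracts toward the region of high posterior mass, which is precisely what makes its final states useful design points for training an emulator of the parameter-to-data map.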
Moderate or Intense Low oxygen Dilution (MILD) combustion is a promising technology that offers high thermal efficiency and low pollutant emissions. This study investigates the MILD combustion characteristics of pulverized coal in a laboratory-scale self-recuperative furnace. High-volatile Kingston brown coal and low-volatile Bowen basin black coal with particle sizes in the range of 38–180 μm were injected into the furnace using either CO2 or N2 as a carrier gas. A water-cooled sampling probe was used to conduct in-furnace gas sampling. Measurements of in-furnace gas concentrations of O2, CO, and NO, as well as exhaust gas emissions and in-furnace temperatures, are presented. The results suggest major differences between the two coals and minor differences associated with the carrier gas. It was found that the measured CO level of brown coal cases was 10 times higher than that of black coal cases. However, NO emission for brown coal was only 37% of that measured for black coal at an equivalence ratio of Φ = 0.88. Ash content analysis showed that black coal was not burnt effectively, which is thought to be due to the particle residence times being insufficient for complete combustion in the furnace. To augment the experimental measurements, computational fluid dynamics modeling was used to investigate the effects of coal particle size and inlet air momentum on furnace dynamics and global CO emissions. It is found that coal particle size affects the coal penetration depth within the furnace and the location of the particle's stagnation point. The effects of air inlet momentum are tested in two ways: first, by raising the inlet temperature at a constant mass flow rate, and, second, by increasing the mass flow rate at a constant temperature. In both cases, increasing the air jet momentum broadens the reaction zone and facilitates MILD combustion, but also lowers reaction rates and increases CO emissions.
Next-generation exascale machines with extreme levels of parallelism will provide massive computing resources for large-scale numerical simulations of complex physical systems at unprecedented parameter ranges. However, novel numerical methods, scalable algorithms, and a re-design of current state-of-the-art numerical solvers are required for scaling to these machines with minimal overheads. One such approach for solvers based on partial differential equations involves computing spatial derivatives with possibly delayed or asynchronous data using high-order asynchrony-tolerant (AT) schemes, which mitigate communication and synchronization bottlenecks without affecting the numerical accuracy. In the present study, an effective methodology for implementing temporal discretization using a multi-stage Runge-Kutta method with AT schemes is presented. Together these schemes are used to perform asynchronous simulations of canonical reacting flow problems, demonstrated in one dimension, including auto-ignition of a premixture, premixed flame propagation, and non-premixed auto-ignition. Simulation results show that the AT schemes incur very small numerical errors in all key quantities of interest, including stiff intermediate species, despite delayed data at processing element (PE) boundaries. For simulations of supersonic flows, the degraded numerical accuracy of well-known shock-resolving WENO (weighted essentially non-oscillatory) schemes when used with relaxed synchronization is also discussed. To overcome this loss of accuracy, high-order AT-WENO schemes are derived and tested on linear and non-linear equations. Finally, the novel AT-WENO schemes are demonstrated in the propagation of a detonation wave with delays at PE boundaries.
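The core idea of asynchrony tolerance can be illustrated on a simple model problem. The sketch below is an assumed toy setup (1D heat equation, explicit Euler, a single "PE interface" whose halo value lags by one time step), not the authors' schemes: it contrasts a naive use of stale data with a simple AT-style fix, second-order extrapolation in time of the delayed halo value, which largely removes the delay-induced error.

```python
import numpy as np

# 1D heat equation u_t = nu * u_xx on a periodic domain, explicit Euler in
# time, second-order central differences in space. We mimic an asynchronous
# parallel solve by letting the halo value at one interface lag one step
# behind, and recover accuracy with an AT-style temporal extrapolation.
# All parameters here are illustrative assumptions.

nu, L, N, steps = 0.02, 2 * np.pi, 128, 400
dx = L / N
dt = 0.2 * dx**2 / nu            # well inside the explicit stability limit
x = dx * np.arange(N)

def exact(t):
    # Decaying sine mode: an exact solution of the heat equation.
    return np.exp(-nu * t) * np.sin(x)

def step(u, left_halo):
    """One Euler step; left_halo stands in for u[-1] as neighbor of u[0]."""
    lap = np.empty(N)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
    lap[0] = u[1] - 2 * u[0] + left_halo   # the (possibly stale) interface
    lap[-1] = u[0] - 2 * u[-1] + u[-2]     # periodic wrap, synchronous
    return u + nu * dt / dx**2 * lap

def run(mode):
    u = exact(0.0)
    h1 = h2 = u[-1]                        # halo history: u^{n-1}, u^{n-2}
    for _ in range(steps):
        if mode == "sync":
            halo = u[-1]                   # up-to-date neighbor value
        elif mode == "delayed":
            halo = h1                      # one time step stale
        else:                              # "at": extrapolate the stale data
            halo = 2 * h1 - h2             # second-order in time
        h2, h1 = h1, u[-1]
        u = step(u, halo)
    return np.abs(u - exact(steps * dt)).max()

for mode in ("sync", "delayed", "at"):
    print(mode, run(mode))
```

The delayed run accumulates a localized error at the interface, while the extrapolated (AT-style) run stays close to the fully synchronous result; the high-order AT schemes in the study generalize this idea to arbitrary delays and orders of accuracy.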