Precursor nanoparticles that form spontaneously on hydrolysis of tetraethylorthosilicate in aqueous solutions of tetrapropylammonium (TPA) hydroxide evolve to TPA-silicalite-1, a molecular-sieve crystal that serves as a model for the self-assembly of porous inorganic materials in the presence of organic structure-directing agents. The structure and role of these nanoparticles are of practical significance for the fabrication of hierarchically ordered porous materials and molecular-sieve films, but remain elusive. Here we present experimental findings on nanoparticle and crystal evolution during room-temperature ageing of the aqueous suspensions that suggest growth by aggregation of nanoparticles. A kinetic model suggests that the precursor nanoparticle population is distributed in size, and that the 5-nm building units contributing most to aggregation exist only transiently as a small fraction of that population. The proposed oriented-aggregation mechanism should lead to strategies for isolating or enhancing the concentration of crystal-like nanoparticles.
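The aggregation-driven growth picture can be illustrated with a minimal Smoluchowski coagulation model. This is a generic sketch, not the paper's fitted kinetic mechanism: the size-independent kernel `K`, the number of size classes, and the monomer-only initial condition are all assumptions for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal Smoluchowski coagulation model: n[k] is the concentration of
# aggregates made of (k+1) primary nanoparticle units. The constant,
# size-independent kernel K is an illustrative assumption.
K = 1.0           # aggregation rate constant (arbitrary units)
N = 30            # largest aggregate size tracked

def rhs(t, n):
    dn = np.zeros_like(n)
    for i in range(N):                 # i indexes (i+1)-mers
        for j in range(N):
            rate = K * n[i] * n[j]
            dn[i] -= rate              # every encounter consumes an i-mer...
            if i + j + 1 < N:
                dn[i + j + 1] += 0.5 * rate   # ...and creates an (i+j+2)-mer
    return dn

n0 = np.zeros(N); n0[0] = 1.0          # start from primary particles only
sol = solve_ivp(rhs, (0.0, 20.0), n0, t_eval=np.linspace(0, 20, 5))
# Intermediate sizes rise and then fall: building units of a given size
# exist only transiently as a small fraction of the total population.
for t, n in zip(sol.t, sol.y.T):
    print(f"t={t:5.1f}  monomers={n[0]:.3f}  tetramers={n[3]:.4f}")
```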
Recently, Gillespie introduced the tau-leap method, an approximate, accelerated stochastic Monte Carlo method for well-mixed reacting systems [J. Chem. Phys. 115, 1716 (2001)]. In each time increment of that method, one executes a number of reaction events, selected randomly from a Poisson distribution, to enable simulation of long times. Here we introduce a binomial-distribution tau-leap algorithm (abbreviated the BD-tau method). This method combines the bounded nature of the binomial random variable with the concepts of a limiting reactant and constrained firing to avoid the negative populations encountered in Gillespie's original tau-leap method at large time increments, and thus conserves mass. Simulations of prototype reaction networks show that the BD-tau method is more accurate than the original method for comparable coarse-graining in time.
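The core idea, capping each channel's firings with a binomial draw bounded by its limiting reactant, can be sketched as follows. The two-reaction network, rate constants, and the sequential per-channel bookkeeping are illustrative assumptions, a simplified stand-in for the paper's constrained-firing procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Prototype network (illustrative, not the paper's test cases):
#   R1: A + B -> C   with propensity c1 * A * B
#   R2: C -> A + B   with propensity c2 * C
c1, c2 = 1e-4, 5e-3
stoich = np.array([[-1, -1, +1],    # R1 state change for (A, B, C)
                   [+1, +1, -1]])   # R2

def propensities(x):
    A, B, C = x
    return np.array([c1 * A * B, c2 * C])

def bd_tau_step(x, tau):
    """One binomial tau-leap step: each channel fires a Binomial(k_max, p)
    number of times, where k_max is set by the limiting reactant, so the
    update can never drive a population negative and mass is conserved.
    Channels are processed sequentially against the running populations,
    a simplified stand-in for the paper's constrained-firing bookkeeping."""
    x = x.copy()
    a = propensities(x)                          # beginning-of-step propensities
    for j in np.argsort(-a):                     # larger channels first
        if a[j] <= 0:
            continue
        consumed = np.where(stoich[j] < 0)[0]
        k_max = int(min(x[i] // -stoich[j, i] for i in consumed))
        if k_max == 0:
            continue
        p = min(a[j] * tau / k_max, 1.0)         # mean firings a_j * tau, capped
        k = rng.binomial(k_max, p)
        x += k * stoich[j]
    return x

x = np.array([1000, 800, 0])
for _ in range(200):
    x = bd_tau_step(x, tau=0.5)
print(x)   # populations remain non-negative by construction
```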
Kinetic models based on first principles are becoming commonplace in heterogeneous catalysis because of their ability to interpret experimental data, identify the rate-controlling step, guide experiments and predict novel materials. To overcome the tremendous computational cost of estimating parameters of complex networks on metal catalysts, approximate quantum mechanical calculations are employed that render models potentially inaccurate. Here, by introducing correlative global sensitivity analysis and uncertainty quantification, we show that neglecting correlations in the energies of species and reactions can lead to an incorrect identification of influential parameters and key reaction intermediates and reactions. We rationalize why models often underpredict reaction rates and show that, despite the uncertainty being large, the method can, in conjunction with experimental data, identify influential missing reaction pathways and provide insights into the catalyst active site and the kinetic reliability of a model. The method is demonstrated on ethanol steam reforming for hydrogen production for fuel cells.
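A toy illustration of why correlations matter: sampling two activation energies independently versus jointly changes both the output variance and the apparent importance of each parameter. The two-step network, the correlation value `rho`, and the crude covariance-based indices are assumptions for illustration, not the paper's correlative GSA machinery.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-step surface reaction: the rate is controlled by two activation
# energies E1, E2 (eV). DFT-style errors in E1 and E2 are often correlated
# (shared exchange-correlation error, scaling relations) -- illustrative setup.
kB_T = 0.0257                      # eV at ~300 K
E_mean = np.array([0.8, 0.9])
sigma = 0.15                       # assumed 1-sigma energy uncertainty (eV)

def log_rate(E):
    # Effective rate of two steps in series: the slower step dominates.
    k = np.exp(-E / kB_T)
    return np.log(1.0 / (1.0 / k[..., 0] + 1.0 / k[..., 1]))

for rho, label in [(0.0, "independent"), (0.9, "correlated")]:
    cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
    E = rng.multivariate_normal(E_mean, cov, size=20_000)
    y = log_rate(E)
    # Crude correlative sensitivity measure: covariance of output with each energy.
    s = [np.cov(y, E[:, i])[0, 1] / sigma**2 for i in range(2)]
    print(f"{label:11s}  var(log rate) = {y.var():7.2f}  cov-based indices = {np.round(s, 2)}")
```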
In this paper we present a new class of coarse-grained stochastic processes and Monte Carlo simulations, derived directly from microscopic lattice systems and describing mesoscopic length scales. As our primary example, we focus on a microscopic spin-flip model for the adsorption and desorption of molecules between a surface and an adjacent gas phase, although a similar analysis carries over to other processes. The new model can capture large-scale structures while retaining microscopic information on intermolecular forces and particle fluctuations. The requirement of detailed balance is utilized as a systematic design principle to guarantee correct noise fluctuations for the coarse-grained model. We carry out a rigorous asymptotic analysis of the new system using techniques from large deviations and present detailed numerical comparisons of coarse-grained and microscopic Monte Carlo simulations. The coarse-grained stochastic algorithms provide large computational savings without increasing programming complexity or the CPU time per executed event compared to microscopic Monte Carlo simulations.
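A minimal sketch of such coarse-grained dynamics: each coarse cell lumps q microscopic sites into an occupancy variable evolved by birth-death (adsorption/desorption) kinetic Monte Carlo. The specific rate expressions and constants below are illustrative assumptions, not the paper's derived coarse-grained rates.

```python
import numpy as np

rng = np.random.default_rng(3)

# Coarse-grained adsorption/desorption on a 1D lattice: each coarse cell
# lumps q microscopic sites and carries an occupancy eta in {0,...,q}.
q, M = 10, 50                 # sites per coarse cell, number of cells
beta, J0 = 2.0, 1.0           # inverse temperature, attraction strength
c_a, c_d = 1.0, 1.0           # bare adsorption / desorption constants
eta = np.zeros(M, dtype=int)

def rates(eta):
    # Mean interaction felt in cell k: within-cell plus nearest coarse cells.
    nbr = np.roll(eta, 1) + np.roll(eta, -1)
    U = J0 * (eta + nbr) / q
    ads = c_a * (q - eta)                    # birth: fill an empty site
    des = c_d * eta * np.exp(-beta * U)      # death: Arrhenius, detailed-balance-style
    return ads, des

t, t_end = 0.0, 5.0
while t < t_end:                             # standard kinetic Monte Carlo loop
    ads, des = rates(eta)
    total = ads.sum() + des.sum()
    t += rng.exponential(1.0 / total)
    r = rng.uniform(0, total)
    if r < ads.sum():
        k = np.searchsorted(np.cumsum(ads), r); eta[k] += 1
    else:
        k = np.searchsorted(np.cumsum(des), r - ads.sum()); eta[k] -= 1
print("mean coverage:", eta.mean() / q)
```

Because each event updates a whole coarse cell, one executed event covers q microscopic sites at essentially the same per-event cost, which is the source of the computational savings the abstract describes.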
Uncertainty quantification is a primary challenge for reliable modeling and simulation of complex stochastic dynamics. Such problems are typically plagued with incomplete information that may enter as uncertainty in the model parameters, or even in the model itself. Furthermore, due to their dynamic nature, we need to assess the impact of these uncertainties on the transient and long-time behavior of the stochastic models and derive corresponding uncertainty bounds for observables of interest. A special class of such challenges is parametric uncertainties in the model and in particular sensitivity analysis along with the corresponding sensitivity bounds for stochastic dynamics. Moreover, sensitivity analysis can be further complicated in models with a high number of parameters that render straightforward approaches, such as gradient methods, impractical. In this paper, we derive uncertainty and sensitivity bounds for path-space observables of stochastic dynamics in terms of new goal-oriented divergences; the latter incorporate both observables and information theory objects such as the relative entropy rate. These bounds are tight, depend on the variance of the particular observable and are computable through Monte Carlo simulation. In the case of sensitivity analysis, the derived sensitivity bounds rely on the path Fisher Information Matrix, hence they depend only on local dynamics and are gradient-free. These features allow for computationally efficient implementation in systems with a high number of parameters, e.g., complex reaction networks and molecular simulations.
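A bound of this type can be sketched concretely. Assuming the standard goal-oriented form, the bias of an observable under any alternative model within a relative-entropy budget eta is controlled by the log-moment-generating function of the centered observable under the baseline, estimated purely by Monte Carlo. The gamma-distributed samples and the value of eta below are synthetic stand-ins.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

rng = np.random.default_rng(4)

# Goal-oriented UQ bound (sketch): for any alternative model Q with
# relative entropy R(Q||P) <= eta from the baseline P, the bias of an
# observable f satisfies
#   E_Q[f] - E_P[f] <= inf_{c>0} [ (log E_P[exp(c (f - E_P f))] + eta) / c ].
f = rng.gamma(shape=2.0, scale=1.0, size=100_000)   # samples of f under P
eta = 0.05                                          # relative-entropy budget

fc = f - f.mean()                                   # centered observable
def bound(c):
    log_mgf = logsumexp(c * fc) - np.log(fc.size)   # stable log E_P[exp(c fc)]
    return (log_mgf + eta) / c

res = minimize_scalar(bound, bounds=(1e-3, 0.9), method="bounded")
print(f"upper bound on the bias E_Q[f] - E_P[f]: {res.fun:.4f}")
# For small eta the bound behaves like sqrt(2 Var_P(f) eta): it is tight
# and scales with the variance of the observable, as the abstract states.
print(f"small-eta approximation:                 {np.sqrt(2 * fc.var() * eta):.4f}")
```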
Prototype coarse-grained stochastic parametrizations for the interaction with unresolved features of tropical convection are developed here. These coarse-grained stochastic parametrizations involve systematically derived birth/death processes with low computational overhead that allow for direct interaction of the coarse-grained dynamical variables with the smaller-scale unresolved fluctuations. It is established here for an idealized prototype climate scenario that, in suitable regimes, these coarse-grained stochastic parametrizations can significantly impact the climatology as well as strongly increase the wave fluctuations about an idealized climatology.

The current practical models for prediction of both weather and climate involve general circulation models (GCMs) where the physical equations for these extremely complex flows are discretized in space and time and the effects of unresolved processes are parametrized according to various recipes. With the current generation of supercomputers, the smallest possible mesh spacings are ≈50–100 km for short-term weather simulations and of order 200–300 km for short-term climate simulations. There are many important physical processes that are unresolved in such simulations, such as the mesoscale sea-ice cover, the cloud cover in subtropical boundary layers, and deep convective clouds in the tropics. An appealing way to represent these unresolved features is through a suitable coarse-grained stochastic model that simultaneously retains crucial physical features of the interaction between the unresolved and resolved scales in a GCM. In recent work in two different contexts, the authors have developed both a systematic stochastic strategy (1) to parametrize key features of deep convection in the tropics involving suitable stochastic spin-flip models and also a systematic mathematical strategy to coarse-grain such microscopic stochastic models (2) to practical mesoscopic meshes in a computationally efficient manner while retaining crucial physical properties of the interaction. This last work (2) is general, with potential applications in materials science, sea-ice modeling, etc. Crucial new scientific issues involve the fashion in which a stochastic model affects the climate mean state and the strength and nature of fluctuations about the climate mean. The main topic of this article is the development of a family of coarse-grained stochastic models for tropical deep convection, obtained by combining the systematic strategies from refs. 1 and 2, and an exploration of their effect on both the climate mean and fluctuations for an idealized prototype model parametrization in the simplest scenario for tropical climate involving the Walker circulation, the east-west climatological state that arises from a local region of enhanced surface heat flux, mimicking the Indonesian marine continent.
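A stripped-down sketch of such a birth-death parametrization for a single coarse grid column: convective elements are born at a rate set by a resolved-scale forcing and decay at a constant rate, so both the mean convective activity and its fluctuations respond to the large-scale state. All rates and forcing values are illustrative assumptions, not the paper's derived parametrization.

```python
import numpy as np

rng = np.random.default_rng(5)

# Birth-death sketch of stochastic convective activity in one coarse grid
# column: n convective elements, birth rate tied to a CAPE-like resolved
# forcing, death rate linear in n. Constants are illustrative assumptions.
N_max = 30                    # elements per column at saturation

def simulate_column(forcing, t_end=50.0):
    n, t = 0, 0.0
    while t < t_end:          # Gillespie-style simulation of the column
        birth = forcing * (N_max - n)   # triggering of new convection
        death = 1.0 * n                 # decay of convective elements
        total = birth + death
        t += rng.exponential(1.0 / total)
        n += 1 if rng.uniform(0, total) < birth else -1
    return n

# Stronger large-scale forcing -> more convective activity on average,
# with intrinsic fluctuations that can feed back on the resolved flow.
for F in (0.05, 0.2, 0.8):
    samples = [simulate_column(F) for _ in range(200)]
    print(f"forcing={F:4.2f}  mean elements={np.mean(samples):5.1f}  std={np.std(samples):4.1f}")
```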
We propose a new sensitivity analysis methodology for complex stochastic dynamics based on the Relative Entropy Rate. The method becomes computationally feasible in the stationary regime of the process and involves the calculation of suitable observables in path space for the Relative Entropy Rate and the corresponding Fisher Information Matrix. The stationary regime is crucial for stochastic dynamics, and here it allows us to address the sensitivity analysis of complex systems, including examples of processes with complex landscapes that exhibit metastability, non-reversible systems from a statistical mechanics perspective, and high-dimensional, spatially distributed models. All these systems typically exhibit non-Gaussian stationary probability distributions, while in the high-dimensional case histograms cannot be constructed directly. Our proposed methods bypass these challenges by relying on the direct Monte Carlo simulation of rigorously derived observables for the Relative Entropy Rate and Fisher Information in path space, rather than on the stationary probability distribution itself. We demonstrate the capabilities of the proposed methodology by focusing here on two classes of problems: (a) Langevin particle systems with either reversible (gradient) or non-reversible (non-gradient) forcing, highlighting the ability of the method to carry out sensitivity analysis in non-equilibrium systems; and (b) spatially extended kinetic Monte Carlo models, showing that the method can handle high-dimensional problems.
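A sketch of the path-space Fisher Information computation for a simple jump process: for continuous-time Markov chains the pathwise FIM reduces to a stationary average of rate-weighted outer products of log-rate gradients, so it can be accumulated along a single trajectory without differentiating any observable. The birth-death network and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Pathwise Fisher Information Matrix for a continuous-time Markov chain,
# estimated along one stationary trajectory. Illustrative network:
#   birth: rate k1        death: rate k2 * x
# For jump processes, FIM = E_pi[ sum_j lam_j (grad log lam_j)(grad log lam_j)^T ].
k1, k2 = 10.0, 0.5

def rates(x):
    return np.array([k1, k2 * x])

def dlog_rates(x):
    # Gradients of log-rates w.r.t. (k1, k2): d log(k1)/dk1 = 1/k1, and
    # d log(k2*x)/dk2 = 1/k2 (independent of the state x).
    return np.array([[1.0 / k1, 0.0], [0.0, 1.0 / k2]])

x, t, T, burn = 20, 0.0, 2000.0, 100.0
fim = np.zeros((2, 2))
while t < T:
    lam = rates(x)
    total = lam.sum()
    dt = rng.exponential(1.0 / total)
    if t > burn:   # time-average the FIM integrand in the stationary regime
        g = dlog_rates(x)
        fim += dt * sum(lam[j] * np.outer(g[j], g[j]) for j in range(2))
    t += dt
    x += 1 if rng.uniform(0, total) < lam[0] else -1
fim /= (T - burn)
print("path FIM:\n", np.round(fim, 3))
# Large diagonal entries flag parameters to which the path distribution is
# most sensitive -- gradient-free, and no histogram of pi is ever needed.
```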