Abstract. Global fire-vegetation models are widely used to assess the impacts of environmental change on fire regimes and the carbon cycle and to infer relationships between climate, land use and fire. However, differences in model structure and parameterization, in both the vegetation and fire components of these models, could influence overall model performance, and to date there has been limited evaluation of how well different models represent various aspects of fire regimes. The Fire Model Intercomparison Project (FireMIP) is coordinating the evaluation of state-of-the-art global fire models in order to improve projections of fire characteristics and of fire impacts on ecosystems and human societies in the context of global environmental change. Here we perform a systematic evaluation of historical simulations made by nine FireMIP models to quantify their ability to reproduce a range of fire and vegetation benchmarks. The FireMIP models simulate a wide range of global annual total burnt area (39–536 Mha) and global annual fire carbon emissions (0.91–4.75 Pg C yr−1) for modern conditions (2002–2012), but most of the range in burnt area lies within observational uncertainty (345–468 Mha). Benchmarking scores indicate that seven of the nine FireMIP models are able to represent the spatial pattern in burnt area. The models also reproduce the seasonality in burnt area reasonably well but struggle to simulate fire season length and are largely unable to represent interannual variations in burnt area. However, models that represent cropland fires show improved simulation of fire seasonality in the Northern Hemisphere. The three FireMIP models that explicitly simulate individual fires are able to reproduce the spatial pattern in the number of fires, but fire sizes are too small in key regions, and this results in an underestimation of burnt area.
The correct representation of spatial and seasonal patterns in vegetation appears to correlate with a better representation of burnt area. The two older fire models included in the FireMIP ensemble (LPJ–GUESS–GlobFIRM, MC2) clearly perform less well globally than the other models, but it is difficult to distinguish between the remaining ensemble members: some are better at representing certain aspects of the fire regime, but none clearly outperforms all the others across the full range of variables assessed.
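The benchmarking scores mentioned above are not defined in this abstract. As an illustration only, the following sketch computes a normalized mean error (NME), a metric commonly used for gridded burnt-area benchmarking; whether this exact form matches the scores quoted above, and all numerical values used, are assumptions.

```python
import numpy as np

def normalized_mean_error(sim, obs):
    """Normalized mean error (NME): the mean absolute difference between
    simulated and observed fields, divided by the mean absolute deviation
    of the observations from their own mean.  NME = 0 is a perfect match;
    NME = 1 corresponds to a model no better than predicting the
    observational mean everywhere."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return np.abs(sim - obs).mean() / np.abs(obs - obs.mean()).mean()

# Toy example: annual burnt-area fraction on six grid cells
# (hypothetical values, not FireMIP data).
obs = np.array([0.02, 0.10, 0.30, 0.05, 0.00, 0.15])
sim = np.array([0.03, 0.08, 0.25, 0.10, 0.01, 0.12])
score = normalized_mean_error(sim, obs)
```

A score below 1 indicates the simulation carries more information about the observed spatial pattern than the observational mean alone.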
Direct numerical simulation data show that the variance of the coupling term in passive scalar advection by a random velocity field is smaller than it would be if the velocity and scalar fields were statistically independent. This effect is analogous to the "depression of nonlinearity" in hydrodynamic turbulence. We show that the trends observed in the numerical data are qualitatively consistent with the predictions of closure theories related to Kraichnan's direct interaction approximation. The phenomenon is demonstrated over a range of Prandtl numbers. In the inertial-convective range, the depletion is approximately constant with respect to wavenumber. The effect is weaker in the Batchelor range.
The velocity increment (VI) model, introduced by Brun et al., is improved in this paper by employing the Kolmogorov equation for the filtered velocity. The model has two formulations: a dynamic formulation and a simplified constant-coefficient form for high-Reynolds-number turbulence. A priori tests in isotropic turbulence and wall-bounded turbulence are performed, and a posteriori tests of decaying turbulence and channel (Poiseuille) flow are carried out to assess the model's performance, especially with regard to energy backscatter. The simple constant-coefficient formulation performs well and avoids the ensemble-averaging operation required by other subgrid models. This constant-coefficient improved VI model is therefore particularly suited to complex large-eddy simulation projects.
Among existing subgrid-scale models for large-eddy simulation (LES), some are time-reversible in the sense that the dynamics evolve backwards in time after the transformation u → −u at every point in space. In practice, reversible subgrid models reduce the numerical stability of simulations, since the effect of the subgrid scales is no longer strictly dissipative. This lack of stability is often taken as a criterion for rejecting such models. The aim of this paper is to examine whether time-reversibility is a criterion that a subgrid model should, or should not, be required to fulfil. To this end, we investigate by direct numerical simulation the time dependence of the kinetic energy of the resolved scales when the velocity is reversed in all or part of the length scales of the turbulent flow. These results are compared with results from existing LES subgrid models. It is argued that using time-reversibility as a criterion to assess subgrid models is incompatible with the main underlying assumption of LES.
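The u → −u test can be illustrated with a minimal one-dimensional sketch (a hypothetical Smagorinsky-type closure, not the models studied in the paper): the eddy viscosity depends on |∂u/∂x|, which is invariant under sign reversal of the velocity, so the modelled energy transfer stays dissipative in both time directions and the closure is not time-reversible.

```python
import numpy as np

def smagorinsky_dissipation(u, dx, cs=0.17):
    """Rate of change of resolved kinetic energy due to a 1D
    Smagorinsky-type eddy viscosity nu_t = (cs*dx)**2 * |du/dx|.
    The energy drain is -sum(nu_t * (du/dx)**2) * dx, which is <= 0
    for any velocity field."""
    dudx = np.gradient(u, dx)
    nu_t = (cs * dx) ** 2 * np.abs(dudx)
    return -np.sum(nu_t * dudx ** 2) * dx

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x) + 0.3 * np.sin(5.0 * x)

eps_forward = smagorinsky_dissipation(u, dx)
eps_reversed = smagorinsky_dissipation(-u, dx)
# Both values are negative (dissipative) and identical: under u -> -u
# this eddy-viscosity closure keeps draining resolved energy, so the
# modelled dynamics cannot retrace themselves backwards in time.
```

A time-reversible model would instead change the sign of the energy transfer under u → −u, returning energy to the resolved scales.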
Following the procedure proposed by Quinlan et al. (Int. J. Numer. Meth. Engng. 2006; 66:2064–2085) for a generic 1D derivative, a 3D formulation of the Smoothed Particle Hydrodynamics (SPH) truncation error (T) is derived and validated. We then highlight the differences between traditional SPH simulations, which are not consistent, and estimations using renormalization, a first-order consistency technique. The consistency order is defined here as the highest degree of a generic polynomial function that can be exactly reproduced by an SPH approximation. Under the homogeneous conditions assumed in our analyses, renormalization generally reduces the relative truncation error by one to two orders of magnitude, both at inner points and at boundary locations. With renormalization the error tends to a lowest constant value as the kernel support size (h) goes to zero, whereas in general, with no consistency, the error behaves like 1/h. In contrast to formulations without consistency, estimations using renormalization show only a weak dependence of the error on the absolute value of the displacement of the particles from their volume barycentre. In addition, for simulations with renormalization, the best choice of kernel function appears to be the one closest to Dirac's delta, while with no consistency different kernels are preferred. Furthermore, we observe that renormalization reduces the number of neighbors needed to make the discretization error negligible with respect to the integral error.
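The first-order consistency that renormalization provides can be illustrated with a minimal 1D sketch (a Gaussian kernel and uniform particle spacing are assumptions of this example, not details from the paper): the renormalized gradient reproduces a linear field exactly, including near the domain boundaries where the uncorrected kernel sum is truncated.

```python
import numpy as np

def sph_gradient(x, f, h, renormalize=False):
    """1D SPH gradient estimate with a Gaussian kernel.
    Standard form:  f'_i ~ sum_j V_j (f_j - f_i) dW/dx_i(x_i - x_j).
    Renormalized:   divide by B_i = sum_j V_j (x_j - x_i) dW/dx_i,
    which makes the estimate exact for linear fields
    (first-order consistency)."""
    V = np.gradient(x)                       # particle "volumes" (local spacings)
    r = x[:, None] - x[None, :]              # r_ij = x_i - x_j
    dW = -2.0 * r / h**2 * np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))
    raw = ((f[None, :] - f[:, None]) * dW * V[None, :]).sum(axis=1)
    if not renormalize:
        return raw
    B = ((-r) * dW * V[None, :]).sum(axis=1)  # sum_j V_j (x_j - x_i) dW
    return raw / B

x = np.linspace(0.0, 1.0, 40)
f = 2.0 * x + 1.0                            # linear field, exact slope = 2
grad_std = sph_gradient(x, f, h=0.08)
grad_ren = sph_gradient(x, f, h=0.08, renormalize=True)
# grad_ren recovers the slope 2 (up to round-off) at every particle;
# grad_std is visibly off near the boundaries, where the kernel
# support is truncated and B_i deviates from 1.
```

In the interior, B_i ≈ 1 and the two estimates nearly coincide; the correction matters precisely where consistency is otherwise lost.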