Most atmospheric motions, across a wide range of spatial scales, as well as precipitation, are closely related to phase transitions in clouds. The continuously increasing resolution of large-scale and mesoscale atmospheric models makes it feasible to treat the evolution of individual clouds. The explicit treatment of clouds requires the simulation of cloud microphysics. Two main approaches to describing cloud microphysical properties and processes have been developed over the past four and a half decades: bulk microphysics parameterization and spectral (bin) microphysics (SBM). The development and utilization of both represent an important step forward in cloud modeling. This study presents a detailed survey of the physical basis and the applications of both bulk microphysics parameterization and SBM. The results obtained from simulations of a wide range of atmospheric phenomena, from tropical cyclones to Arctic clouds, using these two approaches are compared. Advantages and disadvantages of these methods, as well as lines of future development, are discussed.
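The contrast between the two approaches can be sketched in a few lines (all parameter values below are illustrative assumptions, not taken from the study): a bulk scheme carries only moments of an assumed gamma size distribution, while a bin scheme resolves the same distribution explicitly on a discrete size grid.

```python
import numpy as np
from math import gamma as G

def gamma_psd(D, n0, mu, lam):
    """Gamma particle size distribution N(D) = n0 * D**mu * exp(-lam*D)."""
    return n0 * D**mu * np.exp(-lam * D)

n0, mu, lam = 1.0e8, 2.0, 4.0e3          # assumed bulk-scheme parameters

# Bulk view: the number concentration is the 0th moment, known analytically.
N_bulk = n0 * G(mu + 1.0) / lam**(mu + 1.0)

# Bin view: midpoint Riemann sum of the same distribution over 2000 size bins.
dD = 5.0e-3 / 2000.0                      # bin width [m]
D = (np.arange(2000) + 0.5) * dD          # bin-center diameters [m]
N_bin = np.sum(gamma_psd(D, n0, mu, lam)) * dD

print(f"bulk N = {N_bulk:.4e}, binned N = {N_bin:.4e}")
```

The two estimates agree closely here because the bin grid resolves the assumed distribution well; the practical trade-off is that the bulk scheme is cheap but tied to its assumed functional form, while the bin scheme is general but costly.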
The last decade has seen the success of stochastic parameterizations in short-term, medium-range, and seasonal forecasts: operational weather centers now routinely use stochastic parameterization schemes to better represent model inadequacy and to improve the quantification of forecast uncertainty. Although initially developed for numerical weather prediction, stochastic parameterizations not only provide better estimates of uncertainty but are also extremely promising for reducing long-standing climate biases and are relevant for determining the climate response to external forcing. This article highlights recent developments from different research groups that show that the stochastic representation of unresolved processes in the atmosphere, oceans, land surface, and cryosphere of comprehensive weather and climate models 1) gives rise to more reliable probabilistic forecasts of weather and climate and 2) reduces systematic model bias. We make a case that the use of mathematically rigorous methods for the derivation of stochastic dynamic equations will lead to substantial improvements in our ability to accurately simulate weather and climate at all scales. Recent work in mathematics, statistical mechanics, and turbulence is reviewed; its relevance for the climate problem is demonstrated; and future research directions are outlined.
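A common operational approach of this kind multiplies a parameterized tendency by a temporally correlated random factor. The sketch below illustrates the idea with a first-order autoregressive (red-noise) perturbation; the decorrelation time, amplitude, and tendency value are hypothetical choices for illustration, not any center's operational settings.

```python
import numpy as np

rng = np.random.default_rng(0)

tau, dt, sigma = 6 * 3600.0, 900.0, 0.3   # decorrelation time [s], step [s], std dev
phi = np.exp(-dt / tau)                   # AR(1) autocorrelation coefficient
nsteps = 10000

# AR(1) red noise: r_n = phi * r_{n-1} + sqrt(1 - phi^2) * sigma * eta_n
r = np.zeros(nsteps)
for n in range(1, nsteps):
    r[n] = phi * r[n - 1] + np.sqrt(1.0 - phi**2) * sigma * rng.standard_normal()

T_det = 1.0e-5                            # deterministic tendency (arbitrary units)
T_pert = T_det * (1.0 + r)                # stochastically perturbed tendency

print(f"sample std of r = {r.std():.3f} (target {sigma})")
```

Because the noise multiplies the tendency rather than replacing it, the ensemble spreads around the deterministic evolution while the time correlation keeps perturbations physically plausible from one step to the next.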
Entrainment and detrainment processes have long been recognised as key processes for cumulus convection and have recently witnessed renewed interest, mainly due to the capability of large-eddy simulations (LES) to diagnose these processes in more detail. This article has a twofold purpose. Firstly, it provides a historical overview of past research on these mixing processes, and secondly, it highlights more recent important developments. These include fundamental process studies using LES that aim to improve our understanding of the mixing process, as well as more practical studies targeted toward an improved parametrised representation of entrainment and detrainment in large-scale models. A highlight of the fundamental studies resolves a long-standing controversy by showing that lateral entrainment is the dominant mixing mechanism in comparison with cloud-top entrainment in shallow cumulus convection. The more practical studies provide a wide variety of new parametrisations, with sometimes conflicting approaches to the way in which the effect of free-tropospheric humidity on the lateral mixing is taken into account. An important new insight that will be highlighted is that, despite the focus in the literature on entrainment, it appears to be the detrainment process that determines the vertical structure of the convection in general and the mass flux in particular. Finally, in order to speed up progress and stimulate convergence in future parametrisations, stronger and more systematic use of LES is advocated.
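The role of the two rates can be seen in a minimal bulk mass-flux sketch (the rates and profiles below are hypothetical, chosen only to make the point): entrainment dilutes in-cloud properties via dφ/dz = −ε(φ − φ_env), while the mass flux obeys dM/dz = (ε − δ)M, so it is the entrainment–detrainment difference that shapes the mass-flux profile.

```python
import numpy as np

eps, delta = 1.5e-3, 2.5e-3      # assumed entrainment/detrainment rates [1/m]
dz, ztop = 50.0, 2000.0
z = np.arange(0.0, ztop + dz, dz)

phi_env = 0.0                    # environmental value (e.g. scaled humidity)
phi = np.empty_like(z)           # in-cloud property
M = np.empty_like(z)             # updraft mass flux (normalized)
phi[0], M[0] = 1.0, 1.0

for k in range(1, len(z)):
    phi[k] = phi[k - 1] - eps * (phi[k - 1] - phi_env) * dz   # dilution by entrainment
    M[k] = M[k - 1] + (eps - delta) * M[k - 1] * dz           # net mass-flux change

print(f"cloud-top phi = {phi[-1]:.3f}, cloud-top M = {M[-1]:.3f}")
```

With δ > ε, as here, the mass flux decays with height even while entrainment keeps diluting the updraft, which is consistent with the abstract's point that detrainment governs the vertical mass-flux structure.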
For decades, enhancement of ice concentrations above those expected from active ice-nucleating aerosols has been observed in deep clouds with tops too warm for homogeneous freezing, indicating fragmentation of ice (ice multiplication). Several possible mechanisms of fragmentation have been suggested by laboratory studies, and one of these involves fragmentation in ice–ice collisions. In this two-part paper, the role of breakup in ice–ice collisions in a convective storm consisting of many cloud types is assessed with a modeling approach. The colliding ice particles can belong to any microphysical species, such as crystals, snow, graupel, hail, or freezing drops. In the present study (Part I), a full physical formulation of initiation of cloud ice by mechanical breakup in collisions involving snow, graupel, and/or hail is developed based on an energy conservation principle. Theoretically uncertain parameters are estimated by simulating laboratory and field experiments already published in the literature. Here, collision kinetic energy (CKE) is the fundamental governing variable of fragmentation in any collision, because it measures the energy available for breakage through the work done to create the new surfaces of fragments. The developed formulation is general in the sense that it includes all the types of fragmentation observed in previously published studies and encompasses collisions of either snow or crystals with graupel/hail, collisions among only graupel/hail, and collisions among only snow/crystals. It explains the observed dependencies on CKE, size, temperature, and degree of prior riming.
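The central role of CKE can be illustrated with a toy calculation. The CKE expression (half the reduced mass times the relative speed squared) is standard collision physics; the saturating fragment-number law and all constants below are hypothetical stand-ins, not the paper's actual fitted formulation.

```python
import numpy as np

def collision_kinetic_energy(m1, m2, dv):
    """CKE = 0.5 * reduced mass * relative speed squared [J]."""
    mu = m1 * m2 / (m1 + m2)     # reduced mass [kg]
    return 0.5 * mu * dv**2

def n_fragments(cke, n_max=100.0, e0=2.0e-6):
    """Hypothetical saturating breakup law (assumed form and constants)."""
    return n_max * (1.0 - np.exp(-cke / e0))

# Example: ~1 mg graupel colliding with ~0.1 mg snow at 3 m/s relative speed
m_graupel, m_snow, dv = 1.0e-6, 1.0e-7, 3.0
cke = collision_kinetic_energy(m_graupel, m_snow, dv)
print(f"CKE = {cke:.3e} J, fragments ~ {n_fragments(cke):.1f}")
```

The saturating form captures the qualitative behavior described above: fragment numbers rise with the energy available for creating new fragment surfaces, but cannot grow without bound for a particle of finite size.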
Ice in atmospheric clouds undergoes complex physical processes, interacting especially with radiation, which leads to serious impacts on global climate. After their primary production, atmospheric ice crystals multiply extensively by secondary processes. Here, it is shown that a mostly overlooked process of mechanical breakup of ice particles by ice–ice collisions contributes to such observed multiplication. A regime for explosive multiplication is identified in its phase space of ice multiplication efficiency and number concentration of ice particles. Many natural mixed-phase clouds, if they have copious millimeter-sized graupel, fall into this explosive regime. The usual Hallett–Mossop (H–M) process of ice multiplication is shown to dominate the overall ice multiplication when active, as it starts sooner, compared to the breakup ice multiplication process. However, for deep clouds with a cold base temperature where the usual H–M process is inactive, the ice breakup mechanism should play a critical role. Supercooled rain, which may freeze to form graupel directly in only a few minutes, is shown to hasten such ice multiplication by mechanical breakup, with an ice enhancement ratio exceeding 10⁴ approximately 20 min after small graupel first appear. The ascent-dependent onset of subsaturation with respect to liquid water during explosive ice multiplication is predicted to determine the eventual ice concentrations.
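The "explosive" character of the regime follows from the positive feedback: fragments grow into new collision partners, so concentration growth is proportional to the concentration itself. In the schematic below the effective rate constant is an assumption, deliberately tuned so the enhancement reaches roughly 10⁴ in about 20 minutes, matching the order of magnitude quoted above rather than any modeled cloud.

```python
import numpy as np

c = np.log(1.0e4) / 1200.0        # assumed multiplication rate [1/s]
dt, tmax = 1.0, 1200.0            # 1 s steps over 20 minutes

n = 1.0                           # initial (normalized) ice concentration
t = 0.0
while t < tmax:
    n += c * n * dt               # forward-Euler step of dN/dt = c * N
    t += dt

enhancement = n                   # ratio to the initial concentration
print(f"ice enhancement after 20 min ~ {enhancement:.3e}")
```

Exponential growth of this kind is why the phase-space boundary between modest and explosive multiplication matters: above the threshold, small changes in efficiency or graupel loading translate into orders-of-magnitude differences in ice concentration.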
After extensive efforts over the course of a decade, convective-scale weather forecasts with horizontal grid spacings of 1–5 km are now operational at national weather services around the world, accompanied by ensemble prediction systems (EPSs). However, though already operational, the capacity of forecasts at this scale has yet to be fully exploited, owing to the fundamental difficulty of prediction: the fully three-dimensional and turbulent nature of the atmosphere. Prediction at this scale is totally different from that at the synoptic scale (10³ km), with its slowly evolving semigeostrophic dynamics and relatively long predictability on the order of a few days. Even theoretically, very little is understood about the convective scale compared to our extensive knowledge of the synoptic-scale weather regime as a partial differential equation system, as well as in terms of fluid mechanics, predictability, uncertainties, and stochasticity. Furthermore, there is a requirement for a drastic modification of data assimilation methodologies, physics (e.g., microphysics), and parameterizations, as well as the numerics for use at the convective scale. We need to focus on more fundamental theoretical issues—the Liouville principle and Bayesian probability for probabilistic forecasts—and more fundamental turbulence research to provide robust numerics for the full variety of turbulent flows. The present essay reviews those basic theoretical challenges as comprehensively as possible. The breadth of the problems that we face is a challenge in itself: an attempt to reduce these into a single critical agenda should be avoided.
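The Bayesian view of probabilistic forecasting mentioned above can be reduced to a toy update (every number here is hypothetical): an ensemble supplies a prior probability of convection, which an observation then revises through Bayes' rule.

```python
# Prior: fraction of ensemble members producing convection at a location.
p_conv = 0.3
# Assumed likelihoods of observing a given radar echo under each hypothesis.
p_obs_given_conv = 0.9
p_obs_given_clear = 0.1

# Bayes' rule: posterior = likelihood * prior / evidence
evidence = p_obs_given_conv * p_conv + p_obs_given_clear * (1.0 - p_conv)
posterior = p_obs_given_conv * p_conv / evidence

print(f"posterior probability of convection = {posterior:.3f}")
```

A single informative observation moves the probability from 0.3 to about 0.79, which is the essence of why convective-scale data assimilation and probabilistic forecasting are framed in Bayesian terms.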