We describe the method of history matching, a method currently used to help quantify parametric uncertainty in climate models, and argue for its use in identifying and removing structural biases in climate models at the model development stage. We illustrate the method using an investigation of the potential to improve upon known ocean circulation biases in a coupled non-flux-adjusted climate model (the third Hadley Centre Climate Model; HadCM3). In particular, we use history matching to investigate whether or not the behaviour of the Antarctic Circumpolar Current (ACC), which is known to be too strong in HadCM3, represents a structural bias that could be corrected using the model parameters. We find that it is possible to improve the ACC strength using the parameters and observe that doing this leads to more realistic representations of the sub-polar and sub-tropical gyres, sea surface salinities (both globally and in the North Atlantic), sea surface temperatures (SSTs) in the sinking regions in the North Atlantic and in the Southern Ocean, North Atlantic Deep Water flows, global precipitation, wind fields and sea level pressure. We then use history matching to locate a region of parameter space predicted not to contain structural biases for the ACC and SSTs that is around 1% of the original parameter space. We explore qualitative features of this space and show that certain key ocean and atmosphere parameters must be tuned carefully together in order to locate climates that satisfy our chosen metrics. Our study shows that tuning strategies that vary only a handful of parameters relevant to a given process at a time will not be as successful or as efficient as history matching.
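The core operation in history matching is to rule out parameter settings whose emulated output is implausibly far from the observed metric, given observation error, structural (model-discrepancy) variance, and emulator uncertainty. The following is a minimal numpy sketch of that cutoff; all numbers and the stand-in emulator are invented for illustration and are not HadCM3 values.

```python
import numpy as np

# Toy illustration of the history-matching implausibility measure.
# A parameter setting x is ruled out when
#   I(x) = |z - E[f(x)]| / sqrt(Var_obs + Var_disc + Var_em)  >  3
rng = np.random.default_rng(0)

z = 134.0            # hypothetical observed metric (e.g. a transport, in Sv)
var_obs = 5.0**2     # observation-error variance (assumed)
var_disc = 8.0**2    # structural (model-discrepancy) variance (assumed)

# Stand-in emulator: mean prediction and variance at 1000 candidate inputs
x = rng.uniform(0.0, 1.0, size=(1000, 4))           # 4 free parameters
em_mean = 100.0 + 80.0 * x[:, 0] - 30.0 * x[:, 1]   # stand-in for a GP mean
em_var = np.full(1000, 4.0**2)                      # stand-in for GP variance

impl = np.abs(z - em_mean) / np.sqrt(var_obs + var_disc + em_var)
not_ruled_out = impl < 3.0                          # the 3-sigma cutoff
print(f"NROY fraction: {not_ruled_out.mean():.2%}")
```

The surviving "not ruled out yet" (NROY) region is then resampled and the procedure repeated in waves, which is how a space can be cut down to around 1% of its original volume.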
The calibration of complex computer codes using uncertainty quantification (UQ) methods is a rich area of statistical methodological development. When applying these techniques to simulators with spatial output, it is now standard to use principal component decomposition to reduce the dimensions of the outputs in order to allow Gaussian process emulators to predict the output for calibration. We introduce the 'terminal case', in which the model cannot reproduce observations to within model discrepancy, and for which standard calibration methods in UQ fail to give sensible results. We show that even when there is no such issue with the model, the standard decomposition on the outputs can and usually does lead to a terminal case analysis. We present a simple test to allow a practitioner to establish whether their experiment will result in a terminal case analysis, and a methodology for defining calibration-optimal bases that avoid this whenever it is not inevitable. We present the optimal rotation algorithm for doing this, and demonstrate its efficacy for an idealised example for which the usual principal component methods fail. We apply these ideas to the CanAM4 model to demonstrate the terminal case issue arising for climate models. We discuss climate model tuning and the estimation of model discrepancy within this context, and show how the optimal rotation algorithm can be used in developing practical climate model tuning tools. * The authors gratefully acknowledge support from EPSRC fellowship No. EP/K019112/1 and support from the NSERC funded Canadian Network for Regional Climate and Weather Processes (CNRCWP). We would also like to thank Yanjun Jiao for managing our ensembles of CanAM4.
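The test for a terminal case amounts to asking whether the observation field can be reconstructed in the retained principal-component basis to within model discrepancy; if not, calibration in that basis cannot succeed regardless of the parameters. A toy numpy sketch of that reconstruction check follows; the ensemble, observation field, and discrepancy bound are all invented, not CanAM4 output.

```python
import numpy as np

# Sketch of the reconstruction check behind the "terminal case" test.
rng = np.random.default_rng(1)

n_runs, n_grid = 30, 200
ensemble = rng.normal(size=(n_runs, n_grid))   # simulator output fields
mean = ensemble.mean(axis=0)
anoms = ensemble - mean

# Principal components via SVD of the centred ensemble
_, s, vt = np.linalg.svd(anoms, full_matrices=False)
k = 5
basis = vt[:k]                                 # retained orthonormal basis

z = rng.normal(size=n_grid)                    # stand-in "observations"
coeffs = basis @ (z - mean)                    # project onto the basis
z_hat = mean + coeffs @ basis                  # best reconstruction in basis
recon_err = np.sum((z - z_hat) ** 2)

tol = 0.1 * np.sum((z - mean) ** 2)            # stand-in discrepancy bound
terminal = recon_err > tol
print("terminal case:", terminal, f"(error {recon_err:.1f} vs bound {tol:.1f})")
```

When this check fails, the optimal-rotation idea is to rotate the basis so that directions needed to represent the observations are retained, rather than only directions of maximal ensemble variance.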
Atmospheric global or regional circulation models used either for numerical weather prediction (NWP) or climate studies encompass a dynamical core and a physical component. The dynamical core computes the spatio-temporal evolution of atmospheric state variables by solving a discrete version of the fluid dynamic equations. The physical component quantifies the impact on the resolved variables of radiative, thermodynamical, and chemical processes, as well as dynamical processes that occur at scales smaller than the computational grid. These processes are handled by a suite of sub-models, most often referred to as parameterizations, which provide source terms in the resolved-scale equations. Parameterizations (e.g., turbulence, convection, radiation, microphysics) are often based on a mixture of physical principles and heuristic descriptions of the involved processes, of their interactions, and of their impact on the larger resolved scales. Although it is difficult to trace back the origin of the term "parameterization" in climate modeling, it semantically points to the fact that the sub-models summarize the processes as functions of the model state vector x (typically the value of zonal and meridional wind, temperature, and water phases at each point of the three-dimensional [3D] model grid) that depends on some free parameters. These free parameters arise from the simplification of the complex nature of the subgrid processes (e.g., assuming a bulk thermal plume instead of a population of plumes, stationarity). The atmospheric model can be summarized as ∂x/∂t = D(x) + P(x, λ), where D denotes the resolved dynamics, P the suite of parameterizations, and λ the vector of free parameters.
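The dynamics-plus-parameterization split can be caricatured in a few lines of code: a resolved tendency D, a subgrid source term P carrying a free parameter, and a time-stepping loop. The specific forms of D and P below are invented for illustration only.

```python
import numpy as np

# Minimal caricature of dx/dt = D(x) + P(x, lam), advanced with a
# forward-Euler step.  D and P are toy functions, not real physics.
def dynamics(x):
    # toy resolved dynamics: a linear, advection-like coupling of grid points
    return np.roll(x, 1) - x

def parameterization(x, lam):
    # toy subgrid source term with one free parameter lam
    return -lam * x**3

def step(x, lam, dt=0.01):
    return x + dt * (dynamics(x) + parameterization(x, lam))

x = np.ones(8)            # toy state vector on an 8-point grid
for _ in range(100):
    x = step(x, lam=0.5)
print(x[:3])
```

In a real model, λ collects quantities such as entrainment rates or autoconversion thresholds, and it is exactly these free parameters that the tuning and calibration methods discussed above explore.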
Expensive computer codes, particularly those used for simulating environmental or geological processes, such as climate models, require calibration (sometimes called tuning). When calibrating expensive simulators using uncertainty quantification methods, it is usually necessary to use a statistical model called an emulator in place of the computer code when running the calibration algorithm. Though emulators based on Gaussian processes are typically many orders of magnitude faster to evaluate than the simulator they mimic, many applications have sought to speed up the computations by using regression‐only emulators within the calculations instead, arguing that the extra sophistication brought by the Gaussian process is not worth the extra computational cost. This was the case for the analysis that produced the UK climate projections in 2009. In this paper, we compare the effectiveness of both emulation approaches within a multi‐wave calibration framework that is becoming popular in the climate modeling community called “history matching.” We find that Gaussian processes offer significant benefits to the reduction of parametric uncertainty over regression‐only approaches. We find that in a multi‐wave experiment, a combination of regression‐only emulators initially, followed by Gaussian process emulators for refocussing experiments, can be nearly as effective as using Gaussian processes throughout for a fraction of the computational cost. We also discover a number of design and emulator‐dependent features of the multi‐wave history matching approach that can cause apparent, yet premature, convergence of our estimates of parametric uncertainty. We compare these approaches to calibration in idealized examples and apply them to a well‐known geological reservoir model.
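The contrast between the two emulator classes can be seen on a toy one-dimensional "simulator": a polynomial regression imposes a fixed functional form, while a Gaussian-process emulator interpolates the training runs. The simulator, kernel length-scale, and jitter below are all invented for illustration.

```python
import numpy as np

# Toy comparison of a regression-only emulator with a GP emulator.
def simulator(x):
    return np.sin(3.0 * x) + 0.3 * x**2   # stand-in for an expensive code

x_train = np.linspace(0.0, 3.0, 10)       # 10 "simulator runs"
y_train = simulator(x_train)
x_test = np.linspace(0.0, 3.0, 200)
y_true = simulator(x_test)

# Regression-only emulator: cubic polynomial least squares
coef = np.polyfit(x_train, y_train, deg=3)
y_reg = np.polyval(coef, x_test)

# GP emulator: zero prior mean, squared-exponential kernel
def kern(a, b, ell=0.5, var=1.0):
    return var * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

K = kern(x_train, x_train) + 1e-6 * np.eye(len(x_train))  # jitter
y_gp = kern(x_test, x_train) @ np.linalg.solve(K, y_train)

rmse_reg = np.sqrt(np.mean((y_reg - y_true)**2))
rmse_gp = np.sqrt(np.mean((y_gp - y_true)**2))
print(f"regression RMSE {rmse_reg:.3f}, GP RMSE {rmse_gp:.3f}")
```

The GP's extra cost comes from the linear solve against the training covariance matrix, which is exactly the overhead the regression-only approach avoids and which the hybrid multi-wave strategy defers to later, refocussed waves.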
Global assessments of air quality and health require comprehensive estimates of the exposures to air pollution that are experienced by populations in every country. However, there are many countries in which measurements from ground-based monitoring are sparse or non-existent, with quality-control and representativeness providing additional challenges. While ground-based monitoring provides a far from complete picture of global air quality, there are other sources of information that provide comprehensive coverage across the globe. The World Health Organization developed the Data Integration Model for Air Quality (DIMAQ) to combine information from ground measurements with that from other sources, such as atmospheric chemical transport models and estimates from remote sensing satellites in order to produce the information that is required for health burden assessment and the calculation of air pollution-related Sustainable Development Goals indicators. Here, we show an example of the use of DIMAQ with the Copernicus Atmosphere Monitoring Service Re-Analysis (CAMSRA) of atmospheric composition, which represents the best practices in meteorology and climate monitoring that were developed under the World Meteorological Organization’s Global Atmosphere Watch programme. Estimates of PM2.5 from CAMSRA are integrated within the DIMAQ framework in order to produce high-resolution estimates of air pollution exposure that can be aggregated in a coherent fashion to produce country-level assessments of exposures.
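The data-integration step can be sketched in miniature: calibrate a gridded model or satellite estimate of PM2.5 against the sparse ground monitors, predict everywhere, and aggregate with population weights. DIMAQ itself is a Bayesian hierarchical model; the simple linear calibration and all numbers below are illustrative assumptions only.

```python
import numpy as np

# Toy sketch of calibrating a gridded PM2.5 field against ground monitors.
rng = np.random.default_rng(3)

n_cells = 500
ctm = rng.gamma(shape=4.0, scale=8.0, size=n_cells)   # gridded estimates

# Ground monitors exist in only a few cells, with measurement noise;
# pretend the "true" field is a biased, rescaled version of the grid.
monitored = rng.choice(n_cells, size=40, replace=False)
truth = 5.0 + 0.7 * ctm
obs = truth[monitored] + rng.normal(scale=2.0, size=40)

# Calibration regression: obs ~ a + b * ctm at the monitored cells
b, a = np.polyfit(ctm[monitored], obs, deg=1)
pm25_hat = a + b * ctm                                # predict all cells

# Country-level exposure: population-weighted mean of the calibrated field
pop = rng.uniform(0.0, 1.0, size=n_cells)
exposure = np.sum(pop * pm25_hat) / np.sum(pop)
print(f"calibrated population-weighted PM2.5: {exposure:.1f} µg/m³")
```

The value of the approach is precisely in the cells without monitors: the calibration learned where data exist is transferred to the full grid before aggregating to country level.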