…properties controlling the twenty-first century response to sustained anthropogenic greenhouse-gas forcing were not fully sampled, partially owing to a correlation between climate sensitivity and aerosol forcing [7,8], a tendency to overestimate ocean heat uptake [11,12], and compensation between short-wave and long-wave feedbacks [9]. This complicates the interpretation of the ensemble spread (Fig. S1).
Perturbed physics experiments are among the most comprehensive ways to address uncertainty in climate change forecasts. In these experiments, parameters and parametrizations in atmosphere–ocean general circulation models are perturbed across their ranges of uncertainty, and the results are compared with observations. In this paper, we describe the largest perturbed physics climate experiment conducted to date, the British Broadcasting Corporation (BBC) climate change experiment, in which the physics of the atmosphere and ocean are perturbed, and the perturbed models are run in conjunction with a forcing ensemble designed to represent uncertainty in past and future forcings, under the A1B Special Report on Emissions Scenarios (SRES) climate change scenario.
A number of studies have set out to obtain a range of atmosphere and ocean model behavior by perturbing parameters in a single climate model (a perturbed physics ensemble: PPE). Early studies used shallow-layer slab-ocean or flux-adjusted coupled ocean–atmosphere models to obtain a broad range of behavior as characterized by climate sensitivity. A recent study reports a relatively narrow range of sensitivities (2.2–3.2 °C) in a PPE of 35 coupled models without flux adjustment, raising the question of whether previous broad ranges were an artifact of using models that were not in top-of-atmosphere (TOA) energy balance. Moreover, no PPE experiment has reported a spread of ocean behavior as large as that exhibited in a multi-model ensemble (MME) such as the Coupled Model Intercomparison Project phase 3 (CMIP3). In this work, we randomly perturb model parameters of a coupled ocean–atmosphere general circulation model using a space-filling design containing 10,000 combinations. The ensemble is run over the distributed computing platform of climateprediction.net under fixed pre-industrial forcing without flux adjustment. We resample a second, 20,000-member ensemble with perturbations conditioned on the TOA fluxes from the first ensemble, so that models do not drift significantly from a realistic base state while targeting a range of behavior. Models within the targeted ensemble show realistic regional control climates when compared to the CMIP3 ensemble, although there is a bias in global mean surface temperature. The range of predicted equilibrium climate sensitivities in the targeted ensemble is substantially smaller than that obtained with flux adjustment, but larger than the range in the CMIP3 ensemble or in the 35-model un-flux-adjusted PPE mentioned above. The Atlantic meridional overturning circulation in the targeted ensemble exhibits a spread in strength as wide as that found in the CMIP3 ensemble.
We conclude that flux adjustment is not a prerequisite for obtaining a broad spread of behavior in a perturbed physics ensemble.

Citation: Yamazaki, K., et al. (2013), Obtaining diverse behaviors in a climate model without the use of flux adjustments,
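The space-filling design described above can be illustrated with a small Latin-hypercube-style sampler. This is a generic sketch, not the climateprediction.net implementation, and the parameter names and ranges below are hypothetical.

```python
import random

def latin_hypercube(param_ranges, n_samples, seed=0):
    """Generate a space-filling set of parameter combinations.

    Each parameter's range is split into n_samples equal strata;
    one value is drawn from each stratum and the stratum order is
    shuffled independently per parameter, so every one-dimensional
    projection of the design covers the full range evenly.
    """
    rng = random.Random(seed)
    names = list(param_ranges)
    columns = {}
    for name in names:
        lo, hi = param_ranges[name]
        width = (hi - lo) / n_samples
        # one draw per stratum, then shuffle the stratum order
        vals = [lo + (i + rng.random()) * width for i in range(n_samples)]
        rng.shuffle(vals)
        columns[name] = vals
    # one dict of parameter values per ensemble member
    return [{name: columns[name][i] for name in names} for i in range(n_samples)]

# Hypothetical parameter ranges, for illustration only
ranges = {"entrainment_coef": (0.6, 9.0), "ice_fall_speed": (0.5, 2.0)}
design = latin_hypercube(ranges, n_samples=10)
```

Each member of `design` would then define one perturbed model run; a real experiment would use thousands of such combinations over the full perturbed-parameter set.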
Abstract. Optimisation methods were successfully used to calibrate parameters in an atmospheric component of a climate model using two variants of the Gauss–Newton line-search algorithm: (1) a standard Gauss–Newton algorithm in which, in each iteration, all parameters were perturbed, and (2) a randomised block-coordinate variant in which, in each iteration, a random subset of parameters was perturbed. The cost function to be minimised used multiple large-scale observations and was constrained to produce net radiative fluxes close to those observed. These algorithms were used to calibrate the HadAM3 (third Hadley Centre Atmospheric Model) model at N48 resolution and the HadAM3P model at N96 resolution. For the HadAM3 model, cases with 7 and 14 parameters were tried. All ten 7-parameter cases using HadAM3 converged to cost function values similar to that of the standard configuration. For the 14-parameter cases several failed to converge, with the random variant in which 6 parameters were perturbed being most successful. Multiple sets of parameter values were found that produced models very similar to the standard configuration. HadAM3 cases that converged were coupled to an ocean model and run for 20 years starting from a pre-industrial HadCM3 (third Hadley Centre Coupled Model) state, resulting in several models whose global-average temperatures were consistent with pre-industrial estimates. For the 7-parameter cases the Gauss–Newton algorithm converged in about 70 evaluations. For the 14-parameter algorithm with 6 parameters being randomly perturbed, about 80 evaluations were needed for convergence. However, when 8 parameters were randomly perturbed, algorithm performance was poor. Our results suggest the computational cost of the Gauss–Newton algorithm scales between P and P², where P is the number of parameters being calibrated. For the HadAM3P model three algorithms were tested.
Algorithms in which all seven parameters were perturbed, or in which three out of seven parameters were randomly perturbed, produced final configurations comparable to the standard hand-tuned configuration. An algorithm in which six out of thirteen parameters were randomly perturbed failed to converge. These results suggest that automatic parameter calibration of atmospheric models is feasible and that the resulting coupled models are stable. Thus, automatic calibration could replace human-driven trial and error. However, convergence and costs are likely sensitive to details of the algorithm.
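The randomised block-coordinate Gauss–Newton idea can be sketched on a toy least-squares problem. This is not the cost function, parameter set, or tooling used with HadAM3; the model, data, and settings below are illustrative assumptions.

```python
import math
import random

def solve(A, b):
    """Solve the small linear system A x = b by Gaussian elimination with
    partial pivoting (adequate for the handful of perturbed parameters)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for c in range(n - 1, -1, -1):
        x[c] = (M[c][n] - sum(M[c][j] * x[j] for j in range(c + 1, n))) / M[c][c]
    return x

def block_gauss_newton(residuals, p, block_size, iters=30, h=1e-6, seed=0):
    """Randomised block-coordinate Gauss-Newton with backtracking line search."""
    rng = random.Random(seed)
    p = list(p)
    for _ in range(iters):
        r0 = residuals(p)
        cost0 = sum(v * v for v in r0)
        block = rng.sample(range(len(p)), block_size)  # random subset to perturb
        # finite-difference Jacobian columns for the chosen block only
        J = []
        for k in block:
            q = p[:]
            q[k] += h
            J.append([(a - b) / h for a, b in zip(residuals(q), r0)])
        # Gauss-Newton step over the block: solve (J^T J) d = -J^T r
        JtJ = [[sum(J[i][m] * J[j][m] for m in range(len(r0)))
                for j in range(len(block))] for i in range(len(block))]
        Jtr = [-sum(J[i][m] * r0[m] for m in range(len(r0))) for i in range(len(block))]
        d = solve(JtJ, Jtr)
        # backtracking line search on the sum-of-squares cost
        step = 1.0
        while step > 1e-8:
            trial = p[:]
            for i, k in enumerate(block):
                trial[k] = p[k] + step * d[i]
            if sum(v * v for v in residuals(trial)) < cost0:
                p = trial
                break
            step *= 0.5
    return p

# Toy calibration target (illustrative, not a climate cost function):
# fit y = a*x + b*x^2 + c*sin(x) + d to synthetic observations.
xs = [0.1 * i for i in range(20)]
truth = [1.5, -0.3, 2.0, 0.7]
obs = [truth[0] * x + truth[1] * x * x + truth[2] * math.sin(x) + truth[3] for x in xs]

def residuals(p):
    return [p[0] * x + p[1] * x * x + p[2] * math.sin(x) + p[3] - y
            for x, y in zip(xs, obs)]

fit = block_gauss_newton(residuals, [0.0, 0.0, 0.0, 0.0], block_size=2, iters=40)
```

With `block_size` equal to the full parameter count, each iteration is a standard Gauss–Newton step; smaller blocks trade per-iteration cost (fewer perturbed-model evaluations for the Jacobian) against slower convergence, mirroring the behaviour reported above.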