In this paper, we present a comprehensive review of the data sources and estimation methods of 30 currently available global precipitation data sets, including gauge‐based, satellite‐related, and reanalysis data sets. We analyzed the discrepancies among the data sets from daily to annual timescales and found large differences in both the magnitude and the variability of precipitation estimates. The magnitude of annual precipitation estimates over global land deviated by as much as 300 mm/yr among the products. Reanalysis data sets had a larger degree of variability than the other types of data sets. The degree of variability in precipitation estimates also varied by region. Large differences in annual and seasonal estimates were found in tropical oceans, complex mountain areas, northern Africa, and some high‐latitude regions. Overall, the variability associated with extreme precipitation estimates was slightly greater at lower latitudes than at higher latitudes. The reliability of precipitation data sets is mainly limited by the number and spatial coverage of surface stations, the satellite algorithms, and the data assimilation models. The inconsistencies described limit the capability of the products for climate monitoring, attribution, and model validation.
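The headline figure above, a spread of up to 300 mm/yr in annual land precipitation across products, is simply the range of the products' annual means. A minimal sketch, using invented product names and values chosen only to reproduce a 300 mm/yr range (the real study compares 30 data sets over 1901–2014 era records):

```python
import numpy as np

# Hypothetical annual mean land precipitation (mm/yr) from three
# illustrative products; the values are invented for demonstration.
estimates = {
    "gauge_a": 790.0,
    "satellite_b": 850.0,
    "reanalysis_c": 1090.0,
}

values = np.array(list(estimates.values()))
spread = values.max() - values.min()  # range across products, mm/yr
print(f"spread across products: {spread:.0f} mm/yr")
```

The same range statistic, computed per grid cell rather than globally, is what reveals the regional hot spots of disagreement (tropical oceans, mountain areas, northern Africa).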
The process of parameter estimation targeting a chosen set of observations is an essential aspect of numerical modeling. This process is usually named tuning in the climate modeling community. In climate models, the variety and complexity of physical processes involved, and their interplay through a wide range of spatial and temporal scales, must be summarized in a series of approximate submodels. Most submodels depend on uncertain parameters. Tuning consists of adjusting the values of these parameters to bring the solution as a whole into line with aspects of the observed climate. Tuning is an essential aspect of climate modeling, with its own scientific issues, that is probably not advertised enough outside the community of model developers. Optimization of climate models raises important questions about whether tuning methods a priori constrain the model results in unintended ways that would affect our confidence in climate projections. Here, we present the definition and rationale behind model tuning, review specific methodological aspects, and survey the diversity of tuning approaches used in current climate models. We also discuss the challenges and opportunities in applying so-called objective methods in climate model tuning. We discuss how tuning methodologies may affect fundamental results of climate models, such as climate sensitivity. The article concludes with a series of recommendations to make the process of climate model tuning more transparent.
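At its core, tuning as described above is parameter optimization against observed targets. A deliberately toy sketch, with an invented linear "submodel" and an invented cloud parameter `alpha` standing in for the uncertain parameters of a real parameterization scheme (objective tuning in practice involves many parameters, multiple targets, and far costlier model runs):

```python
import numpy as np

# Toy "submodel": global mean surface temperature (deg C) as a function
# of an uncertain parameter alpha. Purely illustrative, not a real scheme.
def toy_model(alpha):
    return 13.0 + 2.0 * alpha

target_obs = 14.0  # observed target, e.g. global mean surface temperature

# Simple "objective" tuning: grid search for the alpha value that
# minimizes the squared misfit to the observed target.
alphas = np.linspace(0.0, 1.0, 101)
costs = (toy_model(alphas) - target_obs) ** 2
best_alpha = alphas[np.argmin(costs)]
print(f"tuned alpha = {best_alpha:.2f}")
```

Even this caricature shows the concern the abstract raises: if `target_obs` is the same quantity later used to evaluate the model, agreement there no longer constitutes independent validation.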
Surface air temperature outputs from 16 global climate models participating in the sixth phase of the Coupled Model Intercomparison Project (CMIP6) were used to evaluate agreement with observations over the global land surface for the period 1901–2014. Projections of the multi-model mean under four different Shared Socioeconomic Pathways (SSPs) were also examined. The results reveal that the majority of models reasonably capture the dominant features of the spatial variations in observed temperature, with a pattern correlation typically greater than 0.98, but with large variability across models and regions. In addition, the CMIP6 mean can capture the trends of global surface temperatures shown by the observational data during 1901–1940 (warming), 1941–1970 (cooling) and 1971–2014 (rapid warming). By the end of the 21st century, the global temperature under different scenarios is projected to increase by 1.18 °C/100 yr (SSP1-2.6), 3.22 °C/100 yr (SSP2-4.5), 5.50 °C/100 yr (SSP3-7.0) and 7.20 °C/100 yr (SSP5-8.5), with greater warming projected over the high latitudes of the Northern Hemisphere and weaker warming over the tropics and the Southern Hemisphere. Results of probability density distributions further indicate that large increases in the frequency and magnitude of warm extremes over the global land may occur in the future.
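Warming rates quoted per century, like the 3.22 °C/100 yr figure for SSP2-4.5, are conventionally obtained as the slope of a linear least-squares fit to the annual-mean series, scaled by 100. A minimal sketch on synthetic, noise-free data with that rate imposed (real CMIP6 series are noisy multi-model means, but the fitting step is the same):

```python
import numpy as np

# Synthetic annual-mean temperature anomalies with a known imposed trend,
# illustrating how a per-century warming rate is derived from a series.
years = np.arange(2015, 2101)
true_rate = 3.22 / 100.0                # deg C per year (SSP2-4.5-like)
temps = true_rate * (years - years[0])  # noise-free for clarity

# Linear least-squares fit; the leading coefficient is deg C per year.
slope_per_year = np.polyfit(years, temps, 1)[0]
rate_per_century = slope_per_year * 100.0
print(f"trend: {rate_per_century:.2f} deg C / 100 yr")
```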
Computer simulation models have been widely used to generate hydrometeorological forecasts. As the raw forecasts contain uncertainties arising from various sources, including model inputs and outputs, model initial and boundary conditions, model structure, and model parameters, it is necessary to apply statistical postprocessing methods to quantify and reduce those uncertainties. Different postprocessing methods have been developed for meteorological forecasts (e.g., precipitation) and for hydrological forecasts (e.g., streamflow) due to their different statistical properties. In this paper, we conduct a comprehensive review of the commonly used statistical postprocessing methods for both meteorological and hydrological forecasts. Moreover, methods to generate ensemble members that maintain the observed spatiotemporal and intervariable dependency are reviewed. Finally, some perspectives on the further development of statistical postprocessing methods for hydrometeorological ensemble forecasting are provided.
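One of the most widely used families of postprocessing methods for precipitation forecasts surveyed in reviews like this is distribution-based bias correction, e.g. empirical quantile mapping. A minimal sketch on synthetic gamma-distributed data (the distributions and sample sizes are invented for illustration; operational methods handle drizzle thresholds, seasonality, and forecast-observation pairing far more carefully):

```python
import numpy as np

# Empirical quantile mapping: map each raw forecast through the forecast
# climatology's empirical CDF, then invert the observed climatology's CDF.
# All data here are synthetic and for illustration only.
rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=5.0, size=5000)   # "observed" precip (mm)
fcst = rng.gamma(shape=2.0, scale=7.0, size=5000)  # wet-biased raw forecasts

sorted_fcst = np.sort(fcst)
p = np.searchsorted(sorted_fcst, fcst) / len(fcst)   # empirical forecast CDF
corrected = np.quantile(obs, np.clip(p, 0.0, 1.0))   # invert observed CDF

print(f"raw mean {fcst.mean():.1f} mm -> corrected mean {corrected.mean():.1f} mm")
```

Note that quantile mapping corrects the marginal distribution only; the ensemble-reordering methods the abstract mentions (for restoring spatiotemporal and intervariable dependence) are applied as a separate step.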