This study is an extensive revision of the Climatic Research Unit (CRU) land station temperature database that has been used to produce a grid‐box data set of 5° latitude × 5° longitude temperature anomalies. The new database (CRUTEM4) comprises 5583 station records of which 4842 have enough data for the 1961–1990 period to calculate or estimate the average temperatures for this period. Many station records have had their data replaced by newly homogenized series that have been produced by a number of studies, particularly from National Meteorological Services (NMSs). Hemispheric temperature averages for land areas developed with the new CRUTEM4 data set differ slightly from their CRUTEM3 equivalents. The inclusion of much additional data from the Arctic (particularly the Russian Arctic) has led to estimates for the Northern Hemisphere (NH) being warmer by about 0.1°C for the years since 2001. Over the period 1901–2010, the NH warms by 1.12°C and the Southern Hemisphere (SH) by 0.84°C. The robustness of the hemispheric averages is assessed by producing five different analyses, each including a different subset of 20% of the station time series and by omitting some large countries. CRUTEM4 is also compared with hemispheric averages produced by reanalyses undertaken by the European Centre for Medium‐Range Weather Forecasts (ECMWF): the ERA‐40 (1958–2001) and ERA‐Interim (1979–2010) data sets. For the NH, agreement is good back to 1958 and excellent from 1979 at monthly, annual, and decadal time scales. For the SH, agreement is poorer, but if the area is restricted to the SH north of 60°S, the agreement is dramatically improved from the mid‐1970s.
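The core computation behind such hemispheric averages can be sketched as follows: each 5° × 5° grid‐box anomaly is weighted by the cosine of its central latitude (a proxy for box area), and boxes with no data are simply left out of the sum. This is a minimal illustration, not the CRUTEM4 code; the function name and data layout are assumptions for the example.

```python
import math

def hemispheric_mean(anomalies):
    """Area-weighted mean of 5°x5° grid-box temperature anomalies.

    `anomalies` maps (lat_centre, lon_centre) -> anomaly in °C.
    Missing grid boxes are simply absent from the mapping; each
    present box is weighted by cos(latitude), so low-latitude
    boxes (larger area) count more than polar ones.
    """
    num = den = 0.0
    for (lat, _lon), value in anomalies.items():
        w = math.cos(math.radians(lat))
        num += w * value
        den += w
    if den == 0.0:
        raise ValueError("no grid boxes with data")
    return num / den

# Example: two NH boxes; the warmer one near the pole gets far less weight.
nh = {(2.5, 2.5): 0.5, (82.5, 2.5): 1.5}
print(round(hemispheric_mean(nh), 3))  # → 0.616
```

Because missing boxes drop out of both numerator and denominator, the average automatically adapts to changing station coverage, which is why adding Russian Arctic data can shift the NH average.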
Six statistical and two dynamical downscaling models were compared with regard to their ability to downscale seven seasonal indices of heavy precipitation for two station networks in northwest and southeast England. The skill among the eight downscaling models was high for those indices and seasons that had greater spatial coherence. Generally, winter showed the highest downscaling skill and summer the lowest. The rainfall indices that were indicative of rainfall occurrence were better modelled than those indicative of intensity. Models based on non-linear artificial neural networks were found to be the best at modelling the inter-annual variability of the indices; however, their strong negative biases implied a tendency to underestimate extremes. A novel approach used in one of the neural network models, which outputs the rainfall probability and the gamma distribution scale and shape parameters for each day, meant that resampling methods could be used to circumvent the underestimation of extremes. Six of the models were applied to the Hadley Centre general circulation model HadAM3P forced by emissions according to two SRES scenarios. This revealed that the inter-model differences between the future changes in the downscaled precipitation indices were at least as large as the differences between the emission scenarios for a single model. This argues for caution when interpreting the output from a single model or a single type of model (e.g. regional climate models), and for including as many different types of downscaling models, global models and emission scenarios as possible when developing climate-change projections at the local scale.
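The resampling idea mentioned above can be illustrated with a short sketch: given a model that outputs, for each day, a wet-day probability and gamma distribution shape/scale parameters, daily rainfall series can be simulated by drawing occurrence and amount stochastically, and heavy-precipitation indices computed from the simulated series rather than from deterministic (and typically too-smooth) predictions. The function name, parameter values, and the 95th-percentile index below are illustrative assumptions, not the paper's exact method.

```python
import random

def simulate_daily_rain(p_wet, shape, scale, n_days=10000, seed=42):
    """Stochastically simulate daily rainfall from a downscaling model
    that outputs a wet-day probability plus gamma shape/scale parameters
    (held constant here for simplicity; a real model varies them daily).
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_days):
        if rng.random() < p_wet:
            # gammavariate(alpha=shape, beta=scale) draws a wet-day amount (mm)
            totals.append(rng.gammavariate(shape, scale))
        else:
            totals.append(0.0)
    return totals

rain = simulate_daily_rain(p_wet=0.4, shape=0.8, scale=6.0)
wet = sorted(r for r in rain if r > 0)
# An illustrative heavy-precipitation index: 95th percentile of wet-day amounts.
p95 = wet[int(0.95 * len(wet))]
```

Because the gamma distribution is right-skewed, repeated sampling produces occasional large daily totals, which is why this approach can avoid the systematic underestimation of extremes seen in the deterministic neural network output.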
Weather typing, based on surface pressure charts, has been one of the principal means of analysis in synoptic climatology. Here, we summarize the history of manual and automated schemes, in the northwest European context, illustrating how the approaches can take advantage of the extended reanalysis products that have recently become available. The British Isles and the associated Lamb weather types (LWTs) are the focus of this study, but the approach can be applied to any mid-to-high latitude region to provide series back to 1871. However, caution is advised in the use of the approach where the quality and quantity of input data into the extended reanalysis are known to be temporally and spatially variable and/or poor in early years. This study intercompares the use of automated schemes with operational analyses, reanalyses and earlier manually derived LWTs.
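An automated scheme of the kind discussed above derives flow and vorticity indices from gridded mean-sea-level pressure and maps them to weather types. The sketch below is a heavily simplified illustration in the spirit of the Jenkinson–Collison objective version of the LWTs: the real scheme uses a 16-point MSLP grid with latitude-dependent constants and more types, whereas here a 3 × 3 grid, crude finite differences, and an arbitrary cyclonicity threshold are assumed purely for illustration.

```python
import math

def classify_lwt(p):
    """Toy objective weather typing from a 3x3 MSLP grid (hPa),
    p[row][col], with row 0 = north and col 0 = west.

    Returns "C" (cyclonic), "A" (anticyclonic), or one of eight
    directional types ("W", "SW", ...) naming where the flow is from.
    """
    # Geostrophic flow components (NH): westerly flow when pressure
    # increases southward; southerly flow when it increases eastward.
    w = (sum(p[2]) - sum(p[0])) / 3.0
    s = (sum(row[2] for row in p) - sum(row[0] for row in p)) / 3.0
    flow = math.hypot(w, s)
    # Relative vorticity ~ discrete Laplacian at the central point.
    z = p[0][1] + p[2][1] + p[1][0] + p[1][2] - 4 * p[1][1]
    if abs(z) > 2 * flow:  # rotation dominates the straight flow
        return "C" if z > 0 else "A"
    # Compass sector the flow comes from, in 45-degree bins.
    angle = math.degrees(math.atan2(s, w)) % 360
    sectors = ["W", "SW", "S", "SE", "E", "NE", "N", "NW"]
    return sectors[int((angle + 22.5) // 45) % 8]

# A closed low over the central point should classify as cyclonic.
low = [[1012, 1008, 1012], [1008, 1000, 1008], [1012, 1008, 1012]]
print(classify_lwt(low))  # → C
```

Applied day by day to reanalysis pressure fields, such a classifier yields the long catalogues back to 1871 mentioned above, which is also why data quality in the early reanalysis years propagates directly into the type series.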