Climate change impact assessment involves downscaling coarse-resolution climate variables simulated by general circulation models (GCMs) using dynamic (physics-based) or statistical (data-driven) approaches. Here we use a statistical downscaling technique to project all-India monsoon rainfall at a resolution of 0.5° in latitude/longitude. The statistical downscaling model uses classification and regression trees together with kernel regression: it develops a statistical relationship between large-scale climate variables from reanalysis data and fine-resolution observed rainfall, and then applies that relationship to coarse-resolution GCM outputs. A GCM developed by the Canadian Centre for Climate Modeling and Analysis is employed for this study, with its five ensemble runs used to capture intramodel uncertainty. The model appears to capture effectively the individual station means, the spatial patterns of the standard deviations, and the cross correlation between station rainfalls. Computationally expensive dynamic downscaling models have been applied for India; however, our study is the first to attempt statistical downscaling for the entire country at a resolution of 0.5°. The downscaling model seems to capture the orographic effect on rainfall in the mountainous areas of the Western Ghats and northeast India. The model also reveals spatially nonuniform changes in rainfall: a possible increase along the western coastline and in northeastern India (rainfall surplus areas), and a decrease in northern India, western India (rainfall deficit areas), and along the southeastern coastline. These results highlight the need for a detailed hydrologic study of projected future water availability, which may be useful for water resource policy decisions.
Citation: Salvi, K., S. Kannan, and S. Ghosh (2013), High-resolution multisite daily rainfall projections in India with statistical downscaling for climate change impacts assessment,
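The two-stage structure described in the abstract (a classification and regression tree for rainfall occurrence, kernel regression for rainfall amount) can be sketched roughly as follows. This is a minimal illustration only: the predictors, data, bandwidth, and tree depth here are synthetic stand-ins, not the paper's actual predictor set or calibration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic large-scale predictors (e.g. pressure, humidity, winds) for 500 days
X = rng.normal(size=(500, 4))
# Synthetic station rainfall: wet days occur when the first predictor is positive
wet_state = (X[:, 0] > 0).astype(int)
amount = np.where(wet_state == 1, 5.0 * np.exp(X[:, 1]), 0.0)

# Stage 1: a CART classifier labels each day wet or dry from the predictors
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, wet_state)

# Stage 2: Nadaraya-Watson kernel regression estimates amounts from wet days
def kernel_regress(X_train, y_train, X_query, bandwidth=1.0):
    """Gaussian-kernel weighted average of training rainfall amounts."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return (w @ y_train) / w.sum(axis=1)

wet = wet_state == 1

# Apply the fitted relationship to new (GCM-like) predictor fields;
# days classified dry receive zero rainfall
X_new = rng.normal(size=(10, 4))
is_wet = tree.predict(X_new)
downscaled = np.where(is_wet == 1,
                      kernel_regress(X[wet], amount[wet], X_new), 0.0)
print(downscaled.round(2))
```

In the actual model the relationship is calibrated on reanalysis predictors against gridded observed rainfall and then applied to GCM output; the sketch keeps only that calibrate-then-apply logic.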
To understand the improvements in simulations of Indian summer monsoon rainfall (ISMR) by the Coupled Model Intercomparison Project phase 5 (CMIP5) over CMIP3, a comparative study is performed with the original and statistically downscaled outputs of five general circulation models (GCMs). We observe that the multi-model average of the original CMIP5 simulations does not show visible improvement in bias over CMIP3. We also observe that the original CMIP5 simulations have greater multi-model uncertainty than those of CMIP3. The statistically downscaled simulations show similar results in terms of bias; however, the uncertainty in the CMIP5 downscaled rainfall projections is lower than that of CMIP3.
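The two diagnostics being compared here, multi-model average bias and multi-model uncertainty (inter-model spread), reduce to simple statistics across the ensemble of GCMs. A small illustration with made-up rainfall values (not the study's data) shows the pattern the abstract reports: similar bias, lower spread for CMIP5.

```python
import numpy as np

observed_mean = 850.0  # hypothetical observed seasonal ISMR (mm)

# Hypothetical ISMR means simulated by five GCMs from each ensemble (mm)
cmip3 = np.array([780.0, 905.0, 700.0, 960.0, 820.0])
cmip5 = np.array([795.0, 890.0, 720.0, 940.0, 830.0])

for name, sims in (("CMIP3", cmip3), ("CMIP5", cmip5)):
    bias = sims.mean() - observed_mean   # multi-model average bias
    spread = sims.std(ddof=1)            # multi-model uncertainty
    print(f"{name}: bias = {bias:+.1f} mm, spread = {spread:.1f} mm")
```

With these numbers the biases are comparable (-17.0 mm vs -15.0 mm) while the CMIP5 spread is smaller, mirroring the qualitative finding above.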
Abstract. Extreme events such as heat waves, cold spells, floods, droughts, tropical cyclones, and tornadoes have potentially devastating impacts on natural and engineered systems and human communities worldwide. Stakeholder decisions about critical infrastructures, natural resources, emergency preparedness, and humanitarian aid typically need to be made at local to regional scales over seasonal to decadal planning horizons. However, credible climate change attribution and reliable projections at more localized and shorter time scales remain grand challenges. Long-standing gaps include inadequate understanding of processes such as cloud physics and ocean-land-atmosphere interactions, limitations of physics-based computer models, and the importance of intrinsic climate system variability at decadal horizons. Meanwhile, the growing size and complexity of climate data from model simulations and remote sensors increase opportunities to address these scientific gaps. This perspectives article explores the possibility that physically cognizant mining of massive climate data may lead to significant advances in generating credible predictive insights about climate extremes and in turn translating them to actionable metrics and information for adaptation and policy. Specifically, we propose that data mining techniques geared towards extremes can help tackle the grand challenges in the development of interpretable climate projections, predictability, and uncertainty assessments. To be successful, scalable methods will need to handle what has been called "big data" to tease out elusive but robust statistics of extremes and change from what is ultimately small data. Physically based relationships (where available) and conceptual understanding (where appropriate) are needed to guide methods development and interpretation of results.
Such approaches may be especially relevant in situations where computer models may not be able to fully encapsulate current process understanding, yet the wealth of data may offer additional insights. Large-scale interdisciplinary team efforts, involving domain experts and individual researchers who span disciplines, will be necessary to address the challenge.