Aquifer systems have intrinsic properties such as vulnerability, defined as the potential risk of groundwater pollution by contaminants generated by human activity. Where surface sources of pollution exist, high vulnerability is usually directly related to decreased water quality. Nevertheless, this relationship is not observed in all aquifers, so the circumstances that cause inconsistencies between aquifer vulnerability and water quality have been investigated. This work addresses the vulnerability assessment of the Chapala Marsh area, Mexico, using SINTACS analysis. The Chapala Marsh aquifer is characterized by a granular structure and a fractured recharge zone, and it is subject to both natural and anthropogenic sources of pollution. The results show discrepancies between the vulnerability indices and groundwater quality, indicated by the existence of vulnerable areas with good water quality and vice versa. This is because the SINTACS method works well only when contaminant movement is vertical. For scenarios with lateral contaminant movement, geographically weighted regression (GWR) is used to model the influence of potential contaminant sources on water quality.
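The core idea of GWR is that regression coefficients are re-estimated at each location, with nearby observations weighted more heavily than distant ones. The sketch below is not the study's actual model; it is a minimal, stdlib-only illustration of one GWR building block, a locally weighted least-squares fit with a Gaussian distance kernel, using a synthetic dataset in which the predictor-response slope differs between a western and an eastern cluster (all names, coordinates, and the bandwidth are hypothetical).

```python
import math

def gwr_fit(points, x, y, target, bandwidth):
    """Locally weighted least-squares fit of y = b0 + b1*x at one target
    location. Each observation j is weighted by a Gaussian kernel of its
    distance to the target, so distant samples barely influence the fit."""
    w = [math.exp(-((px - target[0]) ** 2 + (py - target[1]) ** 2)
                  / (2.0 * bandwidth ** 2)) for px, py in points]
    sw = sum(w)
    # Weighted means of predictor and response.
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    # Weighted covariance / variance give the local slope and intercept.
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    return b0, b1

# Synthetic data: western cluster has slope 1, eastern cluster slope 2.
points = [(0, 0), (1, 0), (2, 0), (8, 0), (9, 0), (10, 0)]
x = [1, 2, 3, 1, 2, 3]
y = [1, 2, 3, 2, 4, 6]

west = gwr_fit(points, x, y, target=(1, 0), bandwidth=1.0)
east = gwr_fit(points, x, y, target=(9, 0), bandwidth=1.0)
```

Fitting at a western target recovers a local slope near 1, while the same data fitted at an eastern target yields a slope near 2, which is exactly the spatial non-stationarity that a single global regression would miss.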
This review focuses on the use of Interpretable Artificial Intelligence (IAI) and eXplainable Artificial Intelligence (XAI) models for data imputation and for numerical or categorical hydroclimatic predictions from nonlinearly combined multidimensional predictors. The AI models considered in this paper are Extreme Gradient Boosting, Light Gradient Boosting, Categorical Boosting, Extremely Randomized Trees, and Random Forest. These AI models become XAI models when coupled with explanatory methods such as Shapley additive explanations and local interpretable model-agnostic explanations. The review highlights that IAI models can unveil the rationale behind predictions, while XAI models can discover new knowledge and justify AI-based results, both of which are critical for the accountability of AI-driven predictions. The review also elaborates on the importance of domain knowledge and interventional IAI modeling, the potential advantages and disadvantages of hybrid IAI and non-IAI predictive modeling, the unequivocal importance of balanced data in categorical decisions, and the choice and performance of IAI versus physics-based modeling. The review concludes with a proposed XAI framework to enhance the interpretability and explainability of AI models for hydroclimatic applications.
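Shapley additive explanations attribute a prediction to individual features by averaging each feature's marginal contribution over all orderings in which features are "revealed". The snippet below is not the SHAP library's optimized algorithm; it is a brute-force sketch of the underlying Shapley principle for a tiny toy model (the model function, instance, and baseline are all hypothetical), feasible only because the feature count is small.

```python
from itertools import permutations

def shapley_values(model, instance, baseline):
    """Exact Shapley values for one prediction: average each feature's
    marginal contribution over all feature orderings. A feature that is
    'absent' keeps its baseline value; adding it switches it to the
    instance value."""
    n = len(instance)
    phi = [0.0] * n
    orderings = list(permutations(range(n)))
    for order in orderings:
        current = list(baseline)
        prev = model(current)
        for f in order:
            current[f] = instance[f]
            val = model(current)
            phi[f] += val - prev  # marginal contribution of feature f
            prev = val
    return [p / len(orderings) for p in phi]

# Toy model with an interaction term: f(z) = 3*z0 + 2*z1 + z0*z1.
model = lambda z: 3 * z[0] + 2 * z[1] + z[0] * z[1]
phi = shapley_values(model, instance=[1, 1], baseline=[0, 0])
```

For this toy model the interaction term is split evenly between the two features (attributions 3.5 and 2.5), and the attributions sum exactly to the gap between the prediction and the baseline output, which is the additivity property that makes these explanations "additive".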