2022
DOI: 10.1007/978-3-031-04083-2_16
Explainable Artificial Intelligence in Meteorology and Climate Science: Model Fine-Tuning, Calibrating Trust and Learning New Science

Abstract: In recent years, artificial intelligence and specifically artificial neural networks (NNs) have shown great success in solving complex, nonlinear problems in earth sciences. Despite their success, the strategies upon which NNs make decisions are hard to decipher, which prevents scientists from interpreting and building trust in the NN predictions, a highly desired and necessary condition for the further use and exploitation of NNs’ potential. Thus, a variety of methods have been recently introduced with the ai…

Cited by 24 publications (19 citation statements)
References 68 publications
“…We implement the LRP z rule for backpropagation, which was found by Mamalakis et al. (2021) to be a well-performing XAI method using a benchmark climate data set similar to ours. To improve interpretation and reduce the amount of noise in the LRP heatmaps, we only focus on positive areas of relevance, which are features that contribute positively to the ANN's prediction output.…”
Section: Methods (mentioning)
confidence: 92%
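For readers unfamiliar with the LRP z rule referenced in this statement, the following is a minimal numpy sketch of a single backward (relevance-redistribution) step through one dense layer; the function name, array shapes, and the eps stabiliser are illustrative assumptions, not the cited authors' implementation.

```python
import numpy as np

def lrp_z_dense(a, W, b, R_out, eps=1e-9):
    """One LRP-z backward step through a single dense layer.

    a     : (n_in,) activations entering the layer
    W     : (n_in, n_out) weight matrix
    b     : (n_out,) bias vector
    R_out : (n_out,) relevance arriving from the layer above
    Returns the (n_in,) relevance redistributed onto the inputs.
    """
    z = a @ W + b                              # pre-activations z_k
    s = R_out / np.where(z >= 0, z + eps, z - eps)  # stabilised R_k / z_k
    return a * (W @ s)                         # R_j = a_j * sum_k w_jk * s_k

# After propagating layer by layer down to the input heatmap, keep only
# positive relevance, as the quoted study does:
# heatmap = np.maximum(input_relevance, 0.0)
```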
“…Appropriate use and/or experimentation with multiple baselines will be advantageous for many XAI-pursued goals in geoscientific applications. These include better deciphering the decision-making process of the network (e.g., McGovern et al., 2019; Toms et al., 2020), accelerating learning new science (e.g., Sonnewald and Lguensat, 2021; Clare et al., 2022; Mamalakis et al., 2022a) and potentially helping identify problems in the training dataset (Sun et al., 2022).…”
Section: Discussion (mentioning)
confidence: 99%
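The role the baseline plays can be made concrete with integrated gradients, a local attribution method whose output depends explicitly on the chosen baseline. The sketch below, in which grad_fn is a hypothetical stand-in for the model's input gradient, averages attributions over several candidate baselines; it illustrates the idea discussed in the quote, not the cited papers' code.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Integrated gradients of a scalar model output w.r.t. the input.

    grad_fn  : callable x -> d(output)/d(input), same shape as x
    x        : input sample (e.g., a flattened climate field)
    baseline : reference input the attribution is measured against
    """
    alphas = np.linspace(0.0, 1.0, steps)
    # Average the gradient along the straight path from baseline to x.
    avg_grad = np.mean(
        [grad_fn(baseline + a * (x - baseline)) for a in alphas], axis=0
    )
    return (x - baseline) * avg_grad

# Experimenting with multiple physically motivated baselines, e.g. an
# all-zero field, the climatological mean, or a random training sample:
# attributions = {name: integrated_gradients(grad_fn, x, b)
#                 for name, b in baselines.items()}
```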
“…eXplainable Artificial Intelligence (XAI) aims to provide insights about the decision-making process of AI models and has been increasingly applied to the geosciences (e.g., Toms et al., 2021; Ebert-Uphoff and Hilburn, 2020; Hilburn et al., 2021; Barnes et al., 2019, 2020; Mayer and Barnes, 2021; Keys et al., 2021; Sonnewald and Lguensat, 2021). XAI methods show promising results in calibrating model trust and assisting in learning new science (see for example, McGovern et al., 2019; Toms et al., 2020; Sonnewald and Lguensat, 2021; Clare et al., 2022; Mamalakis et al., 2022a). A popular subcategory of XAI is the so-called local attribution methods, which compute the attribution of a model's prediction to the input variables (also referred to as "input features").…”
Section: Introduction (mentioning)
confidence: 99%
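As a concrete instance of the local attribution methods this statement defines, here is gradient × input, one of the simplest such methods; as above, grad_fn is a hypothetical stand-in for the model's input gradient, not an API from the cited works.

```python
import numpy as np

def gradient_times_input(grad_fn, x):
    """Attribute the prediction f(x) to each input feature x_i via
    x_i * df/dx_i, a first-order estimate of each feature's contribution."""
    return x * grad_fn(x)
```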
“…Efforts like XAI have demonstrated the potential to uncover new statistical relationships between model inputs and outputs. In addition to improving existing climate models and complementing physics-based models in the hierarchy through better parameterizations, calibration, and UQ, some claim that ML-based models may also provide a novel approach to modeling physical systems and unraveling new climate patterns, teleconnections, and mechanisms through a potentially more nuanced process isolation and learning (Mamalakis et al., 2022), which is one of the central goals of having a model hierarchy. Incorporation of ML algorithms has already led to novel discoveries in medicine (Stokes et al., 2020), astronomy (Valizadegan et al., 2022), chemistry (Hueffel et al., 2021) and mathematics (Davies et al., 2021).…”
Section: Data-driven Methods: The Emergence of Machine Learning (mentioning)
confidence: 99%
“…In practice, the first approach to interpret an ML model is to manually verify that the correlations between inputs and outputs are reasonable (e.g., Rasp & Thuerey, 2021). Following this initial step, there exists a wide variety of more sophisticated methods that can be used to look inside the “black box” of ML models and explain the information hotspots that establish the relationships learned (Mamalakis et al., 2022; McGovern et al., 2019). These methods are often labeled as “explainable AI” (XAI), which can be defined as:…”
Section: Data-driven Methods: The Emergence of Machine Learning (mentioning)
confidence: 99%
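The manual verification step this statement describes can be as simple as correlating each input feature with the model's predictions and checking the signs against physical expectations. A minimal sketch follows; the function name and array shapes are illustrative assumptions.

```python
import numpy as np

def input_output_correlations(X, y_hat):
    """Pearson correlation of each input feature with the model output.

    X     : (n_samples, n_features) inputs, e.g. gridded fields flattened
    y_hat : (n_samples,) model predictions
    """
    Xc = X - X.mean(axis=0)
    yc = y_hat - y_hat.mean()
    cov = (Xc * yc[:, None]).mean(axis=0)   # per-feature covariance with y_hat
    return cov / (X.std(axis=0) * y_hat.std() + 1e-12)
```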