2022
DOI: 10.1029/2021SW002928
New Findings From Explainable SYM‐H Forecasting Using Gradient Boosting Machines

Abstract: In this work, we develop gradient boosting machines (GBMs) for forecasting the SYM‐H index multiple hours ahead using different combinations of solar wind and interplanetary magnetic field (IMF) parameters, derived parameters, and past SYM‐H values. Using Shapley Additive Explanation values to quantify the contributions from each input to predictions of the SYM‐H index from GBMs, we show that our predictions are consistent with physical understanding while also providing insight into the complex relationship b…
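As a rough illustration of the approach summarized in the abstract, the sketch below trains a gradient boosting regressor on synthetic solar wind / IMF style inputs and attributes its SYM‐H predictions with SHAP. This is not the authors' pipeline: the feature names, lags, synthetic data, and the choice of scikit-learn's GradientBoostingRegressor with shap.TreeExplainer are illustrative assumptions only.

```python
# Minimal sketch (not the authors' code): a gradient boosting model mapping
# recent solar wind / IMF observations to SYM-H hours ahead, explained with SHAP.
# Feature names, lags, and data below are hypothetical placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical inputs: IMF Bz, solar wind speed/density, a derived electric
# field term, and the current SYM-H value (real work would use OMNI data).
X = pd.DataFrame({
    "IMF_Bz_nT": rng.normal(0, 5, n),
    "Vsw_km_s": rng.normal(450, 80, n),
    "Nsw_cm3": rng.lognormal(1.5, 0.4, n),
    "Ey_mV_m": rng.normal(0, 2, n),
    "SYM_H_now_nT": rng.normal(-15, 25, n),
})
# Synthetic stand-in target: SYM-H a few hours ahead.
y = (0.8 * X["SYM_H_now_nT"]
     - 3.0 * np.clip(-X["IMF_Bz_nT"], 0, None)
     - 0.02 * (X["Vsw_km_s"] - 400)
     + rng.normal(0, 5, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
gbm = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
gbm.fit(X_tr, y_tr)

# TreeExplainer gives SHAP values for tree ensembles; each row of shap_values
# decomposes one prediction into per-feature contributions.
explainer = shap.TreeExplainer(gbm)
shap_values = explainer.shap_values(X_te)
print("Mean |SHAP| per feature (global importance):")
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
      .sort_values(ascending=False))
```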

Cited by 16 publications (70 citation statements); references 84 publications (115 reference statements).
“…Feature importance can be global or local. Global feature importance provides a general picture of the influence of a feature on the model over the entire training set, whereas local feature importance determines the feature's contribution to a single prediction (Iong et al., 2022). SHAP is a model-agnostic method that determines local feature importance using the game theory approach of Shapley values.…”
Section: Methods
confidence: 99%
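To make the global/local distinction in the statement above concrete, here is a minimal sketch, with made-up data and feature names, that computes both a single-prediction SHAP explanation (local) and a dataset-wide mean-|SHAP| ranking (global) using the shap library; the model and target are arbitrary stand-ins.

```python
# Sketch of global vs. local feature importance with SHAP on a generic tree
# model; the data, feature names, and model choice are illustrative only.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(500, 3)), columns=["bz", "vsw", "nsw"])
y = 2.0 * X["bz"] - 0.5 * X["vsw"] + rng.normal(0, 0.1, 500)

model = RandomForestRegressor(n_estimators=100, random_state=1).fit(X, y)
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)

# Local importance: each feature's contribution to one single prediction.
print("Local SHAP values for sample 0:", dict(zip(X.columns, sv[0].round(3))))

# Global importance: average magnitude of those contributions over the set.
print("Global mean |SHAP|:", dict(zip(X.columns, np.abs(sv).mean(axis=0).round(3))))
```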
“…This could lead to ineffective or misleading models and could inhibit our ability to make data-driven discoveries. In response to the increased use of machine learning in space weather, there has been a movement toward using methods more attuned to explainability (such as the many tree-based models), or to utilize more in-depth methods of model interpretation (Iong et al., 2022; Reddy et al., 2022) such as SHapley Additive exPlanation (SHAP).…”
Section: Model Explainability
confidence: 99%
“…There has been a growing demand for explainable models in the machine learning (ML) community and, as a result, explainable artificial intelligence has been developed as a subfield of ML with the goal of providing results with human-interpretable explanations (e.g., Lipton, 2018). Indeed, several interpretable models have been developed recently for forecasting geomagnetic indices (e.g., Ayala Solares et al., 2016; Iong et al., 2022). In this paper, we adapt a state-of-the-art feature attribution method called DeepSHAP (Lundberg & Lee, 2017) to explain the behavior of the ORIENT model at a representative electron energy of ∼1 MeV, during a storm time event and a non-storm time event.…”
Section: Introduction
confidence: 99%
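The DeepSHAP usage mentioned in that statement can be sketched as follows on a small placeholder feed-forward network; the actual ORIENT model, its inputs, and its training data are not reproduced here, and shap.DeepExplainer is shown only in its generic PyTorch form under the assumption that the installed shap version supports it.

```python
# Illustrative use of DeepSHAP (shap.DeepExplainer) on a toy network standing
# in for a flux model such as ORIENT; architecture and inputs are placeholders.
import numpy as np
import torch
import torch.nn as nn
import shap

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

# Hypothetical inputs (e.g., geomagnetic indices and position), as tensors.
background = torch.randn(100, 4)   # reference samples defining the baseline
samples = torch.randn(5, 4)        # events to be explained

# DeepSHAP propagates Shapley-value approximations through the network layers,
# yielding per-feature attributions for each explained sample.
explainer = shap.DeepExplainer(net, background)
shap_values = explainer.shap_values(samples)
print(np.array(shap_values).shape)
```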