2023
DOI: 10.1007/s00477-023-02392-6
Snow avalanche susceptibility mapping using novel tree-based machine learning algorithms (XGBoost, NGBoost, and LightGBM) with eXplainable Artificial Intelligence (XAI) approach

Cited by 23 publications (13 citation statements)
References 100 publications
“…Leaf-wise tree growth can sometimes cause overfitting, especially with smaller datasets. Limiting tree depth can help prevent overfitting (Iban & Bilgilioglu, 2023; Zeng et al., 2024).…”
Section: LightGBM (mentioning)
confidence: 99%
“…Another approach used to evaluate the importance of variables (i.e., conditioning factors) is based on eXplainable Artificial Intelligence (XAI), which allows the importance of variables to be analyzed with a broader approach [33]. Among the XAI methods is SHAP (SHapley Additive exPlanations), which has been applied in several landslide susceptibility studies [33][34][35][36]; it is based on cooperative game theory to explain prediction results and thereby improve the explainability of ML models [35]. SHAP is easy to operate [36] and its results can be presented graphically [34].…”
Section: Introduction (mentioning)
confidence: 99%
“…Pradhan et al. [27] successfully elucidated the weightings of the internal features and the prediction outcomes of a CNN model using SHAP. Iban et al. [28] used SHAP to explain three ensemble learning models, namely XGBoost, NGBoost, and LightGBM. The work of numerous researchers has confirmed that SHAP offers superior interpretability and visualization capabilities.…”
Section: Introduction (mentioning)
confidence: 99%