Using SHAP Values and Machine Learning to Understand Trends in the Transient Stability Limit
2024
DOI: 10.1109/tpwrs.2023.3248941

Cited by 13 publications (9 citation statements). References 38 publications.
“…It is crucial to emphasize that while SHAP values can reveal relationships between input features and output outcomes learned from the data, they do not inherently signify or mirror causality. Consequently, operators or domain experts should undertake additional verification using domain knowledge or alternative causal reasoning methods to ascertain the causal effects of the interpretable approach (Hamilton and Papadopoulos, 2023).…”
Section: The Deep-SHAP Methods Validation
Mentioning; confidence: 99%
“…Reference Mitrentsis and Lens (2022) uses the SHAP feature-importance method to explain the decision results of photovoltaic power prediction models. Reference Hamilton and Papadopoulos (2023) adopts the feature-importance method to interpret machine learning models for location-specific transient stability assessment.…”
Section: Introduction
Mentioning; confidence: 99%
“…SHAP was utilized because of its three advantages. First, SHAP is a local Interpretable Machine Learning (IML) technique that can be adjusted to become consistent global explanations [34]. Therefore, explanations of both single operating points and general trend identification can be obtained.…”
Section: E. Shapley Additive Explanations (SHAP)
Mentioning; confidence: 99%
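The local-to-global aggregation mentioned in the statement above can be illustrated with a short sketch. This is not code from the cited paper: the model, toy dataset, and feature names (gen_MW, load_MW, voltage_pu, line_flow_MW) are hypothetical, and the sketch simply averages absolute per-sample SHAP values to obtain a global feature ranking.

# Sketch: aggregate local SHAP explanations into a global feature ranking.
# Hypothetical model and feature names, for illustration only.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# Toy stand-in for a set of operating points.
X, y = make_regression(n_samples=500, n_features=4, random_state=0)
feature_names = ["gen_MW", "load_MW", "voltage_pu", "line_flow_MW"]

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Local explanations: one SHAP value per feature for every individual sample.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global explanation: mean absolute SHAP value per feature across all samples.
global_importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, global_importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")

Averaging absolute values, rather than signed values, keeps positive and negative local contributions from cancelling out in the global summary, which is why this aggregation yields both single-operating-point explanations and general trend identification.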
“…An interpretable model can serve different purposes: for example, to enhance the trustworthiness of an algorithm's output, to better understand the interactions between input variables and the model's output, and to improve our understanding of the phenomenon under study [28,43].…”
Section: Interpretability in Electricity Price Forecasting
Mentioning; confidence: 99%