2021
DOI: 10.3390/s21155200
Explainable Anomaly Detection Framework for Maritime Main Engine Sensor Data

Abstract: In this study, we propose a data-driven approach to condition monitoring of the marine engine. Although several unsupervised methods exist in the maritime industry, a common limitation is the interpretation of the anomaly: they do not explain why the model classifies specific data instances as anomalies. This study combines explainable AI techniques with an anomaly detection algorithm to overcome this limitation. As an explainable AI method, this study adopts Shapley Additive exPlanations (SHAP)…

Cited by 25 publications (15 citation statements)
References 48 publications
“…SHAP has a solid theoretical basis for achieving both local and global interpretability. The advantage of SHAP is that it not only provides SHAP values to evaluate feature importance, but also shows whether each feature's impact is positive or negative (37, 38).…”
Section: Methods
confidence: 99%
“…The SHAP-XGBoost machine learning model he proposed was used to explain the CL of industrial CF circuits, providing an accurate multivariate correlation evaluation of the CF datasets. By combining interpretable techniques with anomaly detection algorithms, Kim et al. [51] overcame the model's inability to explain why it classifies specific data instances as anomalies, and found that the model could provide more useful explanations of anomalies when SHAP values are used.…”
Section: Shapley Additive Explanation
confidence: 99%
“…In particular, we here used the treeSHAP algorithm, 61 which is specifically optimized for ensemble-based decision tree methods and thus compatible with IF anomaly detection. 62 …”
Section: Nonbinding Pockets As Anomalies
confidence: 99%
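The SHAP attribution that these citation statements describe rests on Shapley values: each feature's average marginal contribution to the model output, taken over all orderings of the feature set, so that per-feature contributions sum exactly to the difference between the model's output and its baseline. A minimal, self-contained sketch of that computation follows; the sensor names and the additive toy anomaly score are illustrative assumptions, not values from the paper (the paper itself uses SHAP on top of a trained anomaly detector):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values: for each feature i, average its marginal
    contribution value_fn(S ∪ {i}) - value_fn(S) over all subsets S of
    the remaining features, weighted by |S|! (n-|S|-1)! / n!."""
    n = len(features)
    phi = {}
    for i in features:
        rest = [f for f in features if f != i]
        total = 0.0
        for k in range(n):
            for s in combinations(rest, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value_fn(set(s) | {i}) - value_fn(set(s)))
        phi[i] = total
    return phi

# Hypothetical sensor features for one engine reading: the toy score says
# the instance looks anomalous mainly because of exhaust temperature,
# while rpm slightly lowers the anomaly score (a "negative effect").
baseline = 0.1
contrib = {"exhaust_temp": 0.6, "rpm": -0.05, "fuel_pressure": 0.15}

def anomaly_score(subset):
    # Additive toy model: baseline plus the contributions of present features.
    return baseline + sum(contrib[f] for f in subset)

phi = shapley_values(list(contrib), anomaly_score)
print(phi)  # for an additive model, Shapley values recover the contributions
```

Because the toy model is additive, each Shapley value equals the feature's contribution exactly, and their sum equals `anomaly_score(all features) - baseline` (the efficiency property). TreeSHAP, mentioned in the last statement, computes the same quantities in polynomial time for tree ensembles instead of enumerating all subsets.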