Published: 2020
DOI: 10.1109/jsen.2019.2958787

Interpretable Convolutional Neural Network Through Layer-wise Relevance Propagation for Machine Fault Diagnosis

Cited by 85 publications (32 citation statements)
References 23 publications
“…More recently, researchers have begun to focus on understanding DL mechanisms with the aim of facilitating the broad acceptance of the technique. An early work has been reported for DCNN-based motor diagnosis in which the layer-wise relevance propagation (LRP) has been investigated to visualize the frequency band that the DCNN is focused on when distinguishing different motor structural faults [113].…”
Section: Learning Features From
Citation type: mentioning
confidence: 99%
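The statement above refers to layer-wise relevance propagation (LRP), which redistributes a network's output score backwards through its layers to show which input features (here, frequency bands) drove a fault decision. As a rough illustration only, the sketch below applies the common LRP-epsilon rule to a single dense layer in NumPy; the layer shapes, epsilon value, and function name are assumptions for illustration and are not the propagation rules used in the cited work.

```python
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    """LRP-epsilon rule for one dense layer (illustrative sketch, not the cited implementation).

    a     : (n_in,)  activations entering the layer
    W, b  : (n_in, n_out) weights, (n_out,) biases
    R_out : (n_out,) relevance assigned to the layer's outputs
    Returns (n_in,) relevance redistributed to the layer's inputs."""
    z = a @ W + b                          # pre-activations z_k = sum_j a_j * w_jk + b_k
    s = R_out / (z + eps * np.sign(z))     # stabilized quotients R_k / z_k
    return a * (W @ s)                     # R_j = a_j * sum_k w_jk * s_k

# Toy usage: propagate relevance for the predicted class back to 5 input features.
rng = np.random.default_rng(0)
a = rng.random(5)
W, b = rng.standard_normal((5, 3)), np.zeros(3)
R_in = lrp_epsilon_dense(a, W, b, R_out=np.array([1.0, 0.0, 0.0]))
print(R_in)                                # per-input relevance scores
```

Applying such a rule layer by layer, from the classifier output back to the input image, yields a relevance map over the input, which is the kind of frequency-band visualization the statement describes.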
“…In , CNN is employed with LRP to explain the diagnosis of gearbox failure. Again, in (Grezmak et al, 2020), CNN and LRP are utilized for fault classification and explanation of induction motor faults. In this work, the vibration time series data used as input is transformed into a time-frequency image using the Continuous Wavelet Transform (CWT) with a Morlet wavelet.…”
Section: Related Literature
Citation type: mentioning
confidence: 99%
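The preceding statement describes the preprocessing step of converting raw vibration signals into time-frequency images with a continuous wavelet transform (CWT) and a Morlet wavelet before feeding them to the CNN. A minimal sketch of that step using PyWavelets is shown below; the sampling rate, scale range, and synthetic signal are assumptions for illustration and are not taken from the cited paper.

```python
import numpy as np
import pywt
import matplotlib.pyplot as plt

fs = 12_000                                    # assumed vibration sampling rate (Hz), not from the paper
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 60 * t) + 0.3 * np.random.randn(t.size)   # synthetic stand-in signal

scales = np.arange(1, 128)                     # assumed scale range; each scale maps to a frequency band
coeffs, freqs = pywt.cwt(x, scales, 'morl', sampling_period=1 / fs)

# The magnitude of the CWT coefficients is the time-frequency image fed to the CNN.
plt.imshow(np.abs(coeffs), aspect='auto', cmap='viridis',
           extent=[t[0], t[-1], freqs[-1], freqs[0]])
plt.xlabel('Time (s)')
plt.ylabel('Frequency (Hz)')
plt.savefig('cwt_scalogram.png')
```

In this kind of pipeline, each vibration segment becomes one scalogram image, and LRP relevance maps computed on the trained CNN can then be read directly in time-frequency coordinates.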
“…While ML algorithms, such as neural network (NN) and support vector machine (SVM), have demonstrated high accuracy on several datasets, there is a significant issue called the “black-box” nature of their decision-making and learning processes. Because the learning process of black-box algorithms is neither transparent nor understandable to human operators, high accuracy on a given dataset may be misleading without a deeper understanding of causes from machine-related sensor inputs [ 13 ]. Therefore, interpretable ML-based models that can identify and analyze the root causes of fault detection in manufacturing have drawn attention from researchers.…”
Section: Backgrounds
Citation type: mentioning
confidence: 99%