2022
DOI: 10.1016/j.ress.2022.108618

Global contextual residual convolutional neural networks for motor fault diagnosis under variable-speed conditions

Cited by 58 publications (17 citation statements)
References 30 publications
“…To prove the effectiveness of the proposed ASG-HOMGAT-based fault diagnosis method, comparative experiments were conducted against the standard graph convolutional neural network (GCN) [33], the multiple-receptive-field graph convolutional neural network (MRF-GCN) [24], the graph attention network (GATv2) [41], GraphSAGE [42], three Chebyshev graph convolutional networks with receptive fields k of 1, 2 and 3 [43], the GC-ResCNN [44], and the ASG-HOGCN (a variant without the multi-head attention mechanism). These models can all capture the structural relationships between samples and achieve effective fault diagnosis. GCN aggregates the information of neighboring nodes in the graph structure through spectral convolution, improving fault feature extraction; MRF-GCN converts the data samples into a weighted graph and learns feature representations from multiple neighborhoods, helping the model analyze fault features more comprehensively; GATv2 uses an attention mechanism to assign different weights to neighboring nodes in the graph-structured data; GraphSAGE generates a new feature representation for each node by learning an aggregation function over the features of its neighbors; and ChebyNet, by using Chebyshev polynomial expansions of different orders, can flexibly handle graphs with different receptive field sizes and performs efficient graph convolution through approximate computation.…”
Section: Domain Adaptability of Variable Loads
confidence: 99%
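The statement above surveys graph-based baselines that all rely on aggregating information from neighboring nodes. As a minimal sketch of that shared idea (not any of the cited implementations), the snippet below shows one GCN-style propagation step with a symmetrically normalized adjacency matrix; the node count, feature sizes, and toy graph are illustrative assumptions.

```python
import numpy as np

def gcn_layer(X, A, W):
    """One GCN-style propagation step: H = ReLU(D^-1/2 (A + I) D^-1/2 X W).

    X : (n_nodes, in_dim) node features (e.g. per-sample fault features)
    A : (n_nodes, n_nodes) adjacency matrix of the sample graph
    W : (in_dim, out_dim) learned weight matrix
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    H = A_norm @ X @ W                        # aggregate neighbors, then project
    return np.maximum(H, 0.0)                 # ReLU

# Toy usage: 4 samples, 8-dim features, 16-dim output (all values synthetic)
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = rng.standard_normal((8, 16))
print(gcn_layer(X, A, W).shape)  # (4, 16)
```

The attention-based variants (GATv2) and sampling-based variants (GraphSAGE) mentioned in the statement replace the fixed normalized adjacency with learned or sampled aggregation weights, but the propagate-then-project pattern is the same.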
“…The study conducted by Ribeiro et al [6] suggests using the Short-Time Fourier Transform (STFT) in a CNN to extract information from vibration signals. A global context residual convolutional neural network was developed by Xu et al [7]. The aim was to find a suitable global context module that would enhance focus on global discriminative features within the model.…”
Section: Stage 2: Improved CNN for Motor Fault Diagnosis
confidence: 99%
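As a rough illustration of the STFT preprocessing that Ribeiro et al describe, the sketch below converts a 1-D vibration signal into a log-magnitude time-frequency map suitable as single-channel CNN input; the sampling rate, synthetic signal, and window parameters are assumptions for the example, not values from the cited study.

```python
import numpy as np
from scipy.signal import stft

fs = 12_000                              # assumed sampling rate of the vibration signal (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 1_500 * t) + 0.1 * np.random.randn(t.size)  # synthetic signal

# Short-Time Fourier Transform: window length and overlap are illustrative choices
f, frames, Zxx = stft(x, fs=fs, nperseg=256, noverlap=192)
spectrogram = np.log1p(np.abs(Zxx))      # log-magnitude time-frequency map

# The 2-D array (frequency bins x time frames) can be fed to a CNN as a one-channel image
print(spectrogram.shape)
```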
“…Then, backpropagation is used to continuously adjust the model weight parameters according to the objective function to find the weight parameters that are most suitable for the target task. As shown in figure 1, the CNN includes a convolution layer, pooling layer and fully connected (FC) layer [19,20], and the corresponding functions of these layers are convolution, downsampling and full connection, respectively.…”
Section: CNN
confidence: 99%
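To make the convolution/pooling/fully-connected pipeline and the backpropagation step concrete, here is a minimal PyTorch sketch of such a network and one weight update; the layer sizes, input length, and number of fault classes are illustrative assumptions rather than the architecture of the cited paper.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Convolution -> pooling (downsampling) -> fully connected classifier."""
    def __init__(self, n_classes=4, signal_len=1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4),   # convolution layer
            nn.ReLU(),
            nn.MaxPool1d(4),                              # pooling (downsampling)
            nn.Conv1d(16, 32, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.fc = nn.Linear(32 * (signal_len // 16), n_classes)  # fully connected layer

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

# One training step: the objective function (cross-entropy) drives the weight update
model = SimpleCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 1, 1024)            # batch of raw 1-D signals (synthetic)
y = torch.randint(0, 4, (8,))          # fault class labels (synthetic)
loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad()
loss.backward()                        # backpropagation computes the gradients
opt.step()                             # adjust the model weight parameters
```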