2023
DOI: 10.1109/access.2023.3264636

Fault Diagnosis Method for Imbalanced Data of Rotating Machinery Based on Time Domain Signal Prediction and SC-ResNeSt

Abstract: In an actual engineering environment, rotating machines usually operate in a normal state and spend only a very short time in a fault state. This leads to serious imbalance in rotating-machinery fault-diagnosis datasets and leaves traditional network models with poor stability and low accuracy in practical engineering applications. To solve this problem, we propose a fault diagnosis method based on the combination of a new Dual-stage Attention-based Recurrent Neural Network (DA-RNN…

Cited by 5 publications (5 citation statements)
References 48 publications
“…In the equation, s_dw and s_db represent the weight update and bias update for the first-order moment estimate, respectively; r_dw and r_db represent the weight update and bias update for the second-order moment estimate, respectively; β_1 and β_2 are hyperparameters controlling the decay rates of the moving averages; and s_1 and r_1 are bias-correction formulas that prevent very small gradients at the beginning of the optimization. Based on Equation (11), the gradient update algorithm for Adam can be summarized as follows:…”
Section: Optimize Network Weights
confidence: 99%
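To make the cited notation concrete, here is a hedged LaTeX sketch of the standard Adam weight update written with the statement's symbols (s for the first-moment estimate, r for the second-moment estimate, hats for the bias-corrected values); the exact form of Equation (11) in the cited paper may differ, and the learning rate α, the small constant ε, and the step counter t are assumptions:

```latex
\begin{aligned}
s_{dw} &\leftarrow \beta_1 s_{dw} + (1-\beta_1)\,dw, &
r_{dw} &\leftarrow \beta_2 r_{dw} + (1-\beta_2)\,dw^{2},\\
\hat{s}_{dw} &= \frac{s_{dw}}{1-\beta_1^{t}}, &
\hat{r}_{dw} &= \frac{r_{dw}}{1-\beta_2^{t}},\\
W &\leftarrow W - \alpha\,\frac{\hat{s}_{dw}}{\sqrt{\hat{r}_{dw}}+\epsilon},
\end{aligned}
```

with the bias update for b following the same pattern using s_db and r_db.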
“…For example, Wang et al. proposed a fault diagnosis method based on the combination of a new dual-stage attention-based recurrent neural network (DA-RNN) and a deep residual split-attention self-calibrated convolution network (SC-ResNeSt). To address the traditional convolution layers' lack of a dynamic receptive field, self-calibrated convolution modules were introduced on top of the split-attention network (ResNeSt), and a new network model, SC-ResNeSt, was established [11]. Chen et al. proposed an adaptive multi-channel residual shrinkage network (AMC-RSN), which extracts as many features as possible by constructing an adaptive multi-channel network.…”
Section: Introduction
confidence: 99%
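The self-calibrated convolution module mentioned here can be illustrated with a short, hedged PyTorch sketch: one half of the channels passes through a plain convolution while the other half is gated by a downsample–convolve–upsample calibration branch, which is what gives the block a larger, input-dependent receptive field. Layer sizes, the pooling rate, and the module name are illustrative assumptions, not the authors' exact SC-ResNeSt configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfCalibratedConv(nn.Module):
    """Simplified self-calibrated convolution block (assumed configuration)."""

    def __init__(self, channels: int, pooling_rate: int = 4):
        super().__init__()
        half = channels // 2
        # Plain branch: an ordinary 3x3 convolution on one half of the channels.
        self.conv_plain = nn.Conv2d(half, half, 3, padding=1, bias=False)
        # Self-calibration branch on the other half.
        self.pool = nn.AvgPool2d(kernel_size=pooling_rate, stride=pooling_rate)
        self.conv_down = nn.Conv2d(half, half, 3, padding=1, bias=False)  # low-resolution space
        self.conv_feat = nn.Conv2d(half, half, 3, padding=1, bias=False)  # full-resolution space
        self.conv_out = nn.Conv2d(half, half, 3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.chunk(x, 2, dim=1)
        # Calibration gate: downsample, convolve, upsample back, add identity, squash.
        low = self.conv_down(self.pool(x1))
        gate = torch.sigmoid(x1 + F.interpolate(low, size=x1.shape[-2:]))
        # Modulate the full-resolution features with the gate, then refine them.
        y1 = self.conv_out(self.conv_feat(x1) * gate)
        # The other half takes the ordinary convolution path.
        y2 = self.conv_plain(x2)
        return torch.cat([y1, y2], dim=1)

# Example: a 64-channel feature map, e.g. from an image-like encoding of a vibration signal.
feat = torch.randn(8, 64, 32, 32)
print(SelfCalibratedConv(64)(feat).shape)  # torch.Size([8, 64, 32, 32])
```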
“…The vibration signals (VS) acquired from centrifugal pumps (CP) are complex and non-stationary; extracting the health-sensitive information that helps AI algorithms identify the health state of CPs therefore requires preprocessing in the time domain (TD), time-frequency domain (TFD), and frequency domain (FD) [13,14]. Wang et al. [15] introduced a dual-phase approach for SF classification in pump rotating parts. First, to process the VS, the dot products of each time-series pair of VS points are used to create a Gramian matrix.…”
Section: Related Research Studies
confidence: 99%
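As a rough illustration of the Gramian-matrix construction this statement describes (pairwise products of the time-series samples arranged as a 2-D matrix), here is a minimal NumPy sketch; the rescaling step and the function name are assumptions, since the cited work may use a Gramian angular field variant:

```python
import numpy as np

def gramian_matrix(signal: np.ndarray) -> np.ndarray:
    """Return the matrix whose (i, j) entry is the product of the i-th and
    j-th samples of a rescaled 1-D vibration signal."""
    # Rescale to [-1, 1] so the pairwise products stay bounded (an assumption;
    # Gramian angular field variants rescale similarly before encoding).
    x = 2.0 * (signal - signal.min()) / (signal.max() - signal.min()) - 1.0
    return np.outer(x, x)  # all pairwise dot products of the scalar samples

# Example: a short synthetic vibration segment (50 Hz tone plus noise).
t = np.linspace(0.0, 1.0, 256)
vs = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)
image = gramian_matrix(vs)  # 256 x 256 matrix, usable as a 2-D image input
print(image.shape)
```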
“…The number of fault samples is expanded according to different principles to rebalance the dataset. Examples of these approaches are random over-sampling, under-sampling, and data synthesis [34], [35]. The second approach is to optimize the models from an algorithmic perspective.…”
Section: Introduction
confidence: 99%
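The first, data-level approach mentioned here can be illustrated with a hedged NumPy sketch of random over-sampling, which duplicates minority (fault) samples until every class matches the majority class; the class labels, sample sizes, and function name are illustrative assumptions:

```python
import numpy as np

def random_oversample(X: np.ndarray, y: np.ndarray, seed: int = 0):
    """Duplicate minority-class samples at random until every class is as
    large as the majority class (simple data-level rebalancing)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    X_parts, y_parts = [], []
    for cls, count in zip(classes, counts):
        idx = np.flatnonzero(y == cls)
        # Sample with replacement so scarce fault classes reach the target size.
        extra = rng.choice(idx, size=target - count, replace=True) if count < target else idx[:0]
        keep = np.concatenate([idx, extra])
        X_parts.append(X[keep])
        y_parts.append(y[keep])
    return np.concatenate(X_parts), np.concatenate(y_parts)

# Example: 1000 healthy segments vs. only 50 fault segments of length 1024.
X = np.random.randn(1050, 1024)
y = np.array([0] * 1000 + [1] * 50)
X_bal, y_bal = random_oversample(X, y)
print(np.bincount(y_bal))  # [1000 1000]
```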