2022
DOI: 10.1007/s40430-022-03638-0
Remaining useful life prediction for rolling bearings based on similarity feature fusion and convolutional neural network

Abstract: As a critical part of condition-based maintenance (CBM) for mechanical systems, remaining useful life (RUL) prediction of rolling bearings continues to attract extensive attention. Deep learning methods are often used to perform RUL prediction by mining the bearing degradation pattern from operating data. However, due to the complexity of operating data, it is usually difficult to establish a satisfactory deep learning model for accurate RUL prediction. Thus, a novel convolutional neural network (CNN) p…

Cited by 15 publications (10 citation statements)
References 42 publications (38 reference statements)
“…To further demonstrate the superiority of the two-path convolutional scaling and attention mechanism, this paper uses a CNN [7] (abbreviated net-1), a long short-term memory network [8] (LSTM, abbreviated net-2), a bi-directional long short-term memory network [9] (BiLSTM, abbreviated net-3), and a convolutional long short-term memory network [13] (CNN-LSTM, abbreviated net-4) as comparison models for bearing RUL prediction against the two-path convolution-Attention-BiLSTM network proposed in this paper, abbreviated as the DABN model below.…”
Section: Comparison Experiments
confidence: 99%
“…Deep learning has achieved significant results in handling large-scale data and complex pattern recognition in recent years, offering fresh perspectives for solving bearing lifetime prediction problems [6]. Nie et al selected similar features based on the correlation between bearings and time series, feeding them into a convolutional neural network (CNN) to predict bearing life [7]. Yang et al proposed a method based on long short-term memory (LSTM) networks for bearing lifetime prediction, enhancing prediction accuracy by improving the Dropout module during training [8].…”
Section: Introduction
confidence: 99%
“…Feature ranking: Feature ranking is crucial in predictive analysis, as it makes it possible to identify the most relevant and informative features. The evaluation metrics employed typically encompass monotonicity and trendability analysis (Carino, Zurita, Delgado, Ortega, & Romero-Troncoso, 2015; Nie, Zhang, Xu, Cai, & Yang, 2022). Moreover, these metrics can be combined with a metric that considers robustness (Chen, Xu, Wang, & Li, 2019; Zhang, Zhang, & Xu, 2016).…”
Section: Feature Selection
confidence: 99%
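The monotonicity and trendability criteria mentioned in the statement above are commonly defined as simple statistics over a feature's time series. A minimal sketch of one widely used formulation follows; the exact definitions in the cited works may differ (e.g. Spearman rather than Pearson correlation for trendability), so treat this as illustrative only:

```python
import numpy as np

def monotonicity(x):
    """Fraction-based monotonicity score in [0, 1].

    |#positive differences - #negative differences| / (N - 1);
    a strictly increasing or decreasing series scores 1.0.
    """
    d = np.diff(np.asarray(x, dtype=float))
    return abs(np.sum(d > 0) - np.sum(d < 0)) / len(d)

def trendability(x):
    """Absolute Pearson correlation between the feature and the time index."""
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x))
    return abs(np.corrcoef(t, x)[0, 1])

# A steadily degrading health indicator scores high on both criteria;
# an oscillating one scores low on monotonicity.
degrading = np.array([0.0, 0.5, 1.1, 1.4, 2.2, 2.9, 3.5])
oscillating = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
```

Features can then be ranked by a weighted sum of the two scores before being fed to the prognostic model.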
“…Feature extraction is employed to unveil meaningful and significant underlying information from condition monitoring data. Commonly considered domains for extracting features from data of technical systems are time, frequency and time-frequency domain features (Kimotho & Sextro, 2014; Nie, Zhang, Xu, Cai, & Yang, 2022) (Pearson, 1900). The Chi-Square test statistic can be computed as…”
Section: Feature Extraction
confidence: 99%
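The quoted statement breaks off before giving the formula, but the reference to Pearson (1900) points to the standard Chi-Square goodness-of-fit statistic, χ² = Σ (Oᵢ − Eᵢ)² / Eᵢ over observed counts Oᵢ and expected counts Eᵢ. A minimal sketch of that standard statistic (not necessarily the exact variant used in the cited work):

```python
import numpy as np

def chi_square(observed, expected):
    """Pearson Chi-Square statistic: sum of (O - E)^2 / E over all bins."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return np.sum((observed - expected) ** 2 / expected)

# Example: 30 samples split 10/20 across two bins vs. a uniform expectation.
stat = chi_square([10, 20], [15, 15])  # (25 + 25) / 15 = 10/3
```

In feature selection, a larger statistic indicates a stronger dependence between a feature's binned values and the class or degradation-stage label.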