2023
DOI: 10.3390/app132111893

Remaining Useful Life Estimation of Turbofan Engines with Deep Learning Using Change-Point Detection Based Labeling and Feature Engineering

Kıymet Ensarioğlu,
Tülin İnkaya,
Erdal Emel

Abstract: Accurate remaining useful life (RUL) prediction is one of the most challenging problems in the prognostics of turbofan engines. Recent RUL prediction methods for turbofan engines mainly rely on data-driven models, and preprocessing the sensor data is essential for the performance of these models. Most studies on turbofan engines use piecewise linear (PwL) labeling, which assigns a constant initial RUL value during the normal/healthy operating period. In this study, we designed a prognostic procedure that inc…
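The capped, piecewise linear labeling mentioned in the abstract can be illustrated with a short sketch. This is a minimal, generic example of PwL RUL target construction, not the paper's code; the function name and the 125-cycle cap are assumptions chosen for illustration (caps around 125–130 cycles are common in C-MAPSS studies).

```python
import numpy as np

def piecewise_linear_rul(cycle_count: int, initial_rul: int = 125) -> np.ndarray:
    """Build PwL RUL labels for one run-to-failure trajectory.

    The label stays constant at `initial_rul` during the early, healthy
    cycles and then decreases linearly to 0 at the failure cycle.
    The 125-cycle cap is an illustrative default, not this paper's value.
    """
    linear_rul = np.arange(cycle_count - 1, -1, -1)  # true RUL: N-1, ..., 1, 0
    return np.minimum(linear_rul, initial_rul)       # cap early-life values

labels = piecewise_linear_rul(cycle_count=200)
print(labels[:3], labels[-3:])  # [125 125 125] [2 1 0]
```

As the title suggests, the paper's labeling derives the point where the constant segment ends from the data via change-point detection rather than using one fixed global value; a hedged sketch of that step appears with the citation statements below.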

Cited by 5 publications (7 citation statements)
References: 64 publications
“…Table 5 shows the comparison results. [56], GCT [36], DCNN [57], ELSTMNN [58], DATCN [59], AGCNN [60], BiLSTM attention model [61], DAST [46], DLformer [37], 1D-CNN-LSTM [62], CNN-LSTM-SAM [63], and BiLSTM-DAE-Transformer [42]. As presented in Table 5, the proposed STAR framework consistently outperforms existing RUL prediction models across all the datasets, showcasing its superior predictive capabilities.…”
Section: RUL Prediction
confidence: 80%
“…To benchmark its effectiveness, we compare the proposed model against a suite of existing methods widely recognized in the field. These methods include BiLSTM [57], the gated convolutional Transformer (GCT) [36], DCNN [58], ELSTMNN [59], DATCN [60], AGCNN [61], BiLSTM attention model [62], DAST [46], DLformer [37], 1D-CNN-LSTM [63], CNN-LSTM-SAM [64], and BiLSTM-DAE-Transformer [42]. Table 5 shows the comparison results.…”
Section: RUL Prediction
confidence: 99%
“…Ensarioglu et al [20] offered a novel strategy for the prediction of RUL involving the creation of features based on differences, labelling through change-point detection and piecewise linear techniques and the utilisation of a hybrid 1D-CNN-LSTM neural network. Li et al [21] presented a data-driven approach employing a CNN with a time window strategy for sample preparation, aiming to improve feature extraction.…”
Section: Introduction
confidence: 99%
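The change-point-detection-based labeling referred to in these statements can be approximated with an off-the-shelf detector. The sketch below uses PELT from the third-party `ruptures` package on a one-dimensional health indicator; the cost model, penalty, and helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import ruptures as rpt  # third-party change-point detection library

def detect_degradation_onset(health_indicator: np.ndarray, penalty: float = 10.0) -> int:
    """Return the cycle index where degradation is first detected.

    PELT with an RBF cost is used here only as a stand-in for the paper's
    change-point detection step; model choice and penalty are assumptions.
    """
    algo = rpt.Pelt(model="rbf").fit(health_indicator.reshape(-1, 1))
    breakpoints = algo.predict(pen=penalty)  # list ending with len(health_indicator)
    return breakpoints[0] if len(breakpoints) > 1 else 0

def changepoint_pwl_labels(health_indicator: np.ndarray) -> np.ndarray:
    """Combine the detected onset with PwL labeling: the initial (capped)
    RUL equals the cycles remaining after the detected onset."""
    n = len(health_indicator)
    onset = detect_degradation_onset(health_indicator)
    initial_rul = n - onset
    linear_rul = np.arange(n - 1, -1, -1)
    return np.minimum(linear_rul, initial_rul)
```

In this sketch each engine gets its own cap, set by where its health indicator is judged to depart from healthy behavior, instead of a single constant shared across the fleet.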
“…Ensarioglu et al [20] offered a novel strategy for the prediction of RUL involving the creation of features based on differences, labelling through change-point detection and piecewise linear techniques and the utilisation of a hybrid 1D-CNN-LSTM neural network.…”
Section: Introduction
confidence: 99%
“…In [23], a fusion model named B-LSTM combines a comprehensive learning system for feature extraction with Long Short-Term Memory (LSTM) to handle temporal information in time-series data. Ensarioglu et al [24] introduced an innovative approach that incorporated difference-based feature construction, change-point-detection-based piecewise linear labeling, and a hybrid 1D-CNN-LSTM neural network to predict RUL. In [25][26][27], LSTM networks are employed and trained in a supervised manner to directly estimate RUL.…”
Section: Introduction
confidence: 99%
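Several of the statements above refer to the hybrid 1D-CNN-LSTM network. The Keras sketch below shows one generic way such a hybrid can be wired (convolutional feature extraction over a sliding window, followed by an LSTM and a regression head); the layer counts, filter sizes, 30-cycle window, and 14-feature input are assumptions for illustration, not the architecture reported in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW_LEN = 30    # assumed sliding-window length (cycles)
N_FEATURES = 14    # assumed number of selected sensor/engineered features

def build_cnn_lstm(window_len: int = WINDOW_LEN, n_features: int = N_FEATURES) -> tf.keras.Model:
    """Generic 1D-CNN-LSTM regressor for windowed multivariate sensor data."""
    model = models.Sequential([
        layers.Input(shape=(window_len, n_features)),
        # 1D convolutions extract local temporal patterns within each window.
        layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
        layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        # The LSTM summarizes the convolved sequence into a single state.
        layers.LSTM(64),
        layers.Dropout(0.2),
        layers.Dense(32, activation="relu"),
        layers.Dense(1),  # RUL regression output
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

model = build_cnn_lstm()
model.summary()
```

Such a network would be trained on sliding windows of the preprocessed sensor features with the PwL (or change-point-based) RUL values as regression targets.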