2022
DOI: 10.1109/tim.2022.3160561
Dual-Aspect Self-Attention Based on Transformer for Remaining Useful Life Prediction

Cited by 68 publications (34 citation statements); References 32 publications
“…For RUL prediction, Ref. [129] proposes a Transformer-based encoder–decoder structure with a dual-aspect encoder design that extracts features from the sensor and time-step dimensions simultaneously, while adaptively learning to focus on the more important parts of the input and to process long data sequences.…”
Section: Part II: Supervised DL Methods for Intelligent Industrial FDP
confidence: 99%
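The dual-aspect design summarized above can be pictured as two parallel self-attention encoders, one treating sensors as tokens and one treating time steps as tokens. The sketch below is a minimal illustration of that idea in PyTorch; the module name DualAspectEncoder, the default sizes (14 sensors, 30-step windows, d_model = 64), and the linear projections are assumptions for illustration, not the authors' published implementation.

```python
# Minimal sketch of a dual-aspect encoder (illustrative, not the paper's code):
# one self-attention encoder attends over sensors, the other over time steps.
import torch
import torch.nn as nn

class DualAspectEncoder(nn.Module):
    def __init__(self, n_sensors=14, n_steps=30, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Project raw readings into d_model along each aspect.
        self.sensor_proj = nn.Linear(n_steps, d_model)    # tokens = sensors
        self.step_proj = nn.Linear(n_sensors, d_model)    # tokens = time steps
        make_layer = lambda: nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.sensor_encoder = nn.TransformerEncoder(make_layer(), n_layers)
        self.step_encoder = nn.TransformerEncoder(make_layer(), n_layers)

    def forward(self, x):                                  # x: (batch, n_steps, n_sensors)
        sensor_tokens = self.sensor_proj(x.transpose(1, 2))   # (batch, n_sensors, d_model)
        step_tokens = self.step_proj(x)                        # (batch, n_steps, d_model)
        # Each encoder learns, via its attention weights, which tokens matter most.
        return self.sensor_encoder(sensor_tokens), self.step_encoder(step_tokens)
```

Because both encoders are attention-based, neither aspect depends on the fixed receptive fields of convolutions or the step-by-step state propagation of recurrent units when the input windows grow long.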
“…The Transformer architecture has recently become a new option for predictive maintenance [93,148,180]. For example, Dual-Aspect Self-attention based on Transformer (DAST) uses the Transformer architecture as a pure self-attention model, without relying on the common model choices (e.g., CNN or RNN) for prediction, which leaves a wide range of options for customization [54,158]. Ma et al. [53] created a variant of the G-Transformer architecture that uses the encoder of the traditional Transformer, as applied in natural language processing, to sample and extract features for PM.…”
Section: Transformer
confidence: 99%
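To turn the two encoded aspects into a RUL estimate without any convolution or recurrence, one simple option is to pool each token sequence, concatenate the results, and regress a scalar. The fragment below continues the DualAspectEncoder sketch above; the mean-pool-and-concatenate fusion and the two-layer head are assumptions chosen for brevity, not the decoder described in the cited papers.

```python
# Hedged continuation of the DualAspectEncoder sketch: pool, fuse, and regress
# a scalar RUL using only attention and feed-forward layers.
import torch
import torch.nn as nn

class DualAspectRULRegressor(nn.Module):
    def __init__(self, encoder, d_model=64):
        super().__init__()
        self.encoder = encoder                        # the DualAspectEncoder sketched above
        self.head = nn.Sequential(
            nn.Linear(2 * d_model, d_model),
            nn.ReLU(),
            nn.Linear(d_model, 1),
        )

    def forward(self, x):                             # x: (batch, n_steps, n_sensors)
        sensor_feats, step_feats = self.encoder(x)
        # Mean-pool each aspect's tokens, then concatenate (an assumed fusion step).
        fused = torch.cat([sensor_feats.mean(dim=1), step_feats.mean(dim=1)], dim=-1)
        return self.head(fused).squeeze(-1)           # one RUL value per window

# Dummy usage: 8 windows of 30 time steps x 14 sensors.
model = DualAspectRULRegressor(DualAspectEncoder())
print(model(torch.randn(8, 30, 14)).shape)            # torch.Size([8])
```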
“…In 2017, Google Brain proposed a new sequence-modeling architecture, the Transformer, to process variable-length inputs as an alternative to CNNs or RNNs [53]. Transformers use a multi-head self-attention mechanism to extract long-term dependencies in a sequence regardless of distance [54]. This makes the model more resilient to increases in sequence length while avoiding recurrence and convolution [54,55].…”
Section: Deep Neural Network
confidence: 99%
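As a self-contained illustration of the multi-head self-attention mechanism referenced above, the sketch below computes scaled dot-product attention split across several heads. For brevity it omits the learned query/key/value and output projections that a real Transformer layer applies, so it is a didactic fragment rather than a faithful layer; the dimensions are arbitrary.

```python
# Didactic multi-head self-attention: every position attends to every other
# position via scaled dot products, so dependencies are captured regardless of
# how far apart two positions are in the sequence.
import math
import torch

def multi_head_self_attention(x, n_heads=4):
    """x: (batch, seq_len, d_model), with d_model divisible by n_heads."""
    batch, seq_len, d_model = x.shape
    d_head = d_model // n_heads
    # Simplification: the same tensor serves as query, key, and value
    # (a real layer would apply separate learned linear projections first).
    q = k = v = x.view(batch, seq_len, n_heads, d_head).transpose(1, 2)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_head)   # (batch, heads, seq, seq)
    weights = scores.softmax(dim=-1)                        # attention over all positions
    out = weights @ v                                       # weighted sum of values
    return out.transpose(1, 2).reshape(batch, seq_len, d_model)

attended = multi_head_self_attention(torch.randn(2, 50, 64))  # any seq_len works
print(attended.shape)                                          # torch.Size([2, 50, 64])
```

Because the attention scores are computed for every pair of positions at once, increasing the window length only grows the score matrix; no recurrence or convolution is needed to relate distant time steps.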