2021
DOI: 10.1007/s10845-021-01750-x
Remaining useful life estimation via transformer encoder enhanced by a gated convolutional unit

Cited by 153 publications (56 citation statements) · References 35 publications
“…The network has 95k trainable parameters (). This final architecture is the result of conducting a grid search wherein the search space over the hyperparameters includes: number of hidden layers [1, 2, 3, 4], number of neurons at each hidden layer [50, 100, 200], and activation function type [tanh, relu].…”
Section: Deep Learning Prognostics Model
confidence: 99%
“…The network has 24k trainable parameters (). As with the FNN, the final architecture is the result of conducting a grid search over the following hyperparameters: number of hidden layers [1, 2, 3, 4], number of channels at each convolutional layer [10, 20, 30], filter size [10, 20], number of neurons at the fully connected layer [50, 100], activation function type [tanh, relu], and window size of the sliding window [20, 50, 200].…”
Section: Deep Learning Prognostics Model
confidence: 99%
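The two statements above describe exhaustive grid searches over small hyperparameter spaces. A minimal sketch of that procedure is shown below; the search-space values are taken from the quoted text, while the scoring function is a hypothetical placeholder standing in for training a model and measuring its validation error.

```python
from itertools import product

# Search space mirroring the grid search described in the citation statements.
search_space = {
    "hidden_layers": [1, 2, 3, 4],
    "neurons": [50, 100, 200],
    "activation": ["tanh", "relu"],
}

def grid_search(space, score_fn):
    """Evaluate every combination in the space and return the lowest-scoring config."""
    keys = list(space)
    best_cfg, best_score = None, float("inf")
    for values in product(*(space[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = score_fn(cfg)  # in practice: train the model, return validation RMSE
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy scoring function (placeholder for actual model training + validation).
toy_score = lambda cfg: abs(cfg["hidden_layers"] - 2) + abs(cfg["neurons"] - 100) / 100
best, _ = grid_search(search_space, toy_score)
```

In practice each configuration would be trained and scored on a held-out validation set, which is why grid searches are usually restricted to a handful of values per hyperparameter, as in the papers quoted here.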
“…where I(condition) is an indicator function, which returns 1 if the condition is satisfied and 0 otherwise, and α ≥ 0 is a Laplace smoothing parameter that prevents zero probabilities. Since the candidate models are selected with probabilities in (8) and (9), respectively, these probabilities should be normalized by dividing each by their sum, as presented in Equations (10) and (11):…”
Section: Generating Alphabetical Sequences
confidence: 99%
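The smoothing-and-normalization step quoted above can be sketched as follows. This is an illustrative example, not the cited paper's implementation: the function name and the two-model count-vector are assumptions.

```python
def laplace_smoothed_probs(counts, alpha=1.0):
    """Laplace-smoothed selection probabilities, normalized to sum to 1.

    With alpha > 0, an item with a raw count of zero still receives
    a strictly positive probability, which is the point of smoothing.
    """
    smoothed = [c + alpha for c in counts]
    total = sum(smoothed)
    return [s / total for s in smoothed]

# Raw selection counts for two candidate models; the second was never selected.
probs = laplace_smoothed_probs([3, 0], alpha=1.0)
```

Dividing each smoothed count by the sum is exactly the normalization described in the statement: the resulting values form a valid probability distribution.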
“…These models learn the degradation patterns, and the relationship between those patterns and the RUL, from data collected up to the end of a component's life, and then predict the RUL of target components. In many recent studies, ML and DL models have shown superior performance on RUL problems [8–15].…”
Section: Introduction
confidence: 99%