2018
DOI: 10.1016/j.asoc.2018.09.013

Optimizing long short-term memory recurrent neural networks using ant colony optimization to predict turbine engine vibration

Abstract: This article expands on research that has been done to develop a recurrent neural network (RNN) capable of predicting aircraft engine vibrations using long short-term memory (LSTM) neurons. LSTM RNNs can provide a more generalizable and robust method for prediction over analytical calculations of engine vibration, as analytical calculations must be solved iteratively based on specific empirical engine parameters, making this approach ungeneralizable across multiple engines. In initial work, multiple LSTM RNN a…

Cited by 125 publications (53 citation statements)
References 43 publications (59 reference statements)
“…SGD was run with a learning rate η = 0.001, utilizing Nesterov momentum with μ = 0.9. No dropout regularization was used since it has been shown in other work to reduce performance when training RNNs for time series prediction [24]. To prevent exploding gradients, gradient clipping (as described by Pascanu et al. [35]) was used when the norm of the gradient was above a threshold of 1.0.…”
Section: B. Experimental Design
confidence: 99%
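The quoted setup maps directly onto a standard training step. Below is a minimal sketch assuming a PyTorch-style API; the VibrationLSTM model, its dimensions, and the random batch are illustrative placeholders, not the authors' actual architecture or data.

```python
import torch
import torch.nn as nn

class VibrationLSTM(nn.Module):
    """Toy LSTM regressor standing in for the paper's vibration model."""
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)            # out: (batch, time, hidden)
        return self.head(out[:, -1])     # predict from the last time step

model = VibrationLSTM()

# SGD with eta = 0.001 and Nesterov momentum mu = 0.9, as in the excerpt.
opt = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9, nesterov=True)
loss_fn = nn.MSELoss()

x = torch.randn(16, 50, 8)   # placeholder batch: 16 sequences of 50 steps
y = torch.randn(16, 1)

opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
# Clip the gradient when its norm exceeds the 1.0 threshold (Pascanu et al.).
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
opt.step()
```

Note that clipping is applied between backward() and step(), so the parameter update always uses the clipped gradient; no dropout layer appears anywhere, matching the excerpt.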
“…An RNN is a network with feedback connections from the hidden and output layers to the preceding layers, through which the dynamics of sequential data can be captured and memories of previous patterns retained via cycles in the network. In the last decade, RNNs have been extensively investigated for a variety of prognostic applications, including engine systems [20]-[23], lithium-ion batteries [24]-[26], rolling element bearings [27]-[30] and fuel cells [31], [32]. Zhang et al. [24] utilized an RNN to extract the long-term dependencies underlying the battery capacity degradation process.…”
Section: Introduction
confidence: 99%
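The "feedback connections" described above amount to a simple recurrence: the hidden state from the previous step re-enters the update for the current step. Here is a minimal sketch of a vanilla (Elman) RNN forward pass, with illustrative shapes and randomly initialized weights that are not taken from any cited work.

```python
import numpy as np

def rnn_forward(xs, W_xh, W_hh, b_h):
    """Vanilla (Elman) RNN: the hidden state h feeds back into each update."""
    h = np.zeros(W_hh.shape[0])
    states = []
    for x in xs:
        # Feedback connection: the previous hidden state enters the update.
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        states.append(h)
    return states

rng = np.random.default_rng(0)
xs = [rng.standard_normal(4) for _ in range(10)]   # 10-step toy sequence
states = rnn_forward(xs,
                     rng.standard_normal((8, 4)),  # input-to-hidden weights
                     rng.standard_normal((8, 8)),  # hidden-to-hidden (recurrent)
                     np.zeros(8))
```

Because W_hh multiplies the previous state at every step, each state depends on the whole input history, which is what lets an RNN model sequential dynamics.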
“…However, BiLSTM does not solve the problem of random initialization of neural network parameters, which affects the network's nonlinear learning ability. To address this shortcoming, existing methods for optimizing neural networks mainly improve the error function and activation function [17]-[19], or use intelligent algorithms to optimize the network's initial parameters [20]-[23]. The main intelligent algorithms currently used for parameter optimization are the particle swarm optimization (PSO) algorithm [24] and the ant colony optimization algorithm, but these algorithms are often prone to local optima, so the parameter optimization may fail to find the global optimal solution.…”
Section: Introduction
confidence: 99%
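A minimal sketch of the population-based search PSO performs when used to pick initial network parameters. Everything here (the fitness function, bounds, and hyperparameters w, c1, c2) is an illustrative assumption rather than a detail from the cited papers; in practice the fitness would be the validation loss of a network initialized from the candidate vector.

```python
import numpy as np

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))  # candidate parameter vectors
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                # each particle's best position
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()          # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Pull each particle toward its own best and the swarm's best.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Toy fitness: minimize ||p||^2. For network initialization this would be
# the validation loss of a model whose initial weights are set from p.
best = pso(lambda p: float(np.sum(p ** 2)), dim=5)
```

The tendency toward local optima that the excerpt mentions shows up here as premature convergence: once gbest stops improving, every particle is pulled toward the same point and exploration effectively stops.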