2021
DOI: 10.1177/09544100211027023
Aerodynamic surrogate model based on deep long short-term memory network: An application on high-lift device control

Abstract: An unsteady aerodynamic surrogate model based on a deep LSTM (long short-term memory) network is proposed for predicting unsteady aerodynamic coefficients. Deflection angles and deflection velocities of the control surfaces are introduced as inputs to the surrogate model to improve its ability to distinguish different motion states, so that accumulated error can be controlled. Longitudinal stability is extremely important for flight safety, yet few studies have addressed the unsteady aerodynamics of airfoi…
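The input augmentation the abstract describes can be illustrated with a minimal sketch. The deflection schedule, amplitude, and sampling rate below are hypothetical (the listing gives none of these details); the point is only that each timestep's input pairs the deflection angle with its rate, so two motion states that share an angle but move in opposite directions map to distinct inputs.

```python
from math import pi, sin

# Hypothetical flap-deflection schedule; the paper's actual motion
# states, amplitudes, and sampling rate are not given in this listing.
dt = 0.01                                   # sampling interval, s
delta = [10.0 * sin(2 * pi * 0.5 * k * dt)  # deflection angle, deg
         for k in range(100)]

# Finite-difference deflection velocity, deg/s (backward difference;
# first sample set to zero so the lengths match).
delta_dot = [(delta[k] - delta[k - 1]) / dt if k > 0 else 0.0
             for k in range(len(delta))]

# One [angle, rate] feature pair per timestep, forming the sequence
# a recurrent surrogate would consume.
sequence = [[a, v] for a, v in zip(delta, delta_dot)]
```

In this sketch `sequence` would be the per-timestep input to the LSTM; in practice it would be concatenated with whatever other state variables the model uses.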

Cited by 5 publications (2 citation statements)
References 42 publications
“…At its core, RMSprop aims to dynamically adjust each parameter's learning rate based on its gradients' historical magnitudes [34]. This method of RMSprop solves the common drawback of a static learning rate, which, if set too high, can cause unstable training with erratic oscillations, and, if too low, results in an agonizingly slow convergence [35,36].…”
Section: RMSprop and Adjusted Learning Rate Methods (citation type: mentioning)
confidence: 99%
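The per-parameter scaling the statement describes can be sketched directly from the standard RMSprop update rule (this is a minimal illustration, not the cited papers' implementation; the learning rate, decay, and toy objective are chosen for demonstration only):

```python
import math

def rmsprop_step(param, grad, cache, lr=0.05, decay=0.9, eps=1e-8):
    # Exponentially decaying average of squared gradients.
    cache = decay * cache + (1.0 - decay) * grad * grad
    # Divide the step by the root of that average: a parameter with
    # historically large gradients gets a smaller effective step,
    # damping the oscillations a too-large static rate would cause.
    param = param - lr * grad / (math.sqrt(cache) + eps)
    return param, cache

# Minimize f(x) = x^2 (gradient 2x) starting from x = 5.0.
x, cache = 5.0, 0.0
for _ in range(500):
    x, cache = rmsprop_step(x, 2.0 * x, cache)
# x ends up near the minimum at 0, without hand-tuning the rate
# to the gradient scale.
```

Note the effective step is roughly `lr * sign(grad)` once the cache tracks the squared gradient, which is what makes the method insensitive to the raw gradient magnitude.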