Missile aerodynamic design using reinforcement learning and transfer learning
Published: 2018
DOI: 10.1007/s11432-018-9463-x

Cited by 6 publications (5 citation statements)
References 5 publications
“…Different from directly approximating functions in supervised learning, reinforcement learning does not directly theorize or approximate how people make decisions. There are a limited number of studies of reinforcement learning in the field of fluid dynamics, most of which utilized reinforcement learning for active control problems [20,21], and very few of them attempted shape optimizations [22,23]. The present paper utilizes reinforcement learning for airfoil drag reduction and formulates its policy by interacting with the environment.…”
Section: II. Reinforcement Learning for Airfoil Aerodynamic Design
Mentioning confidence: 99%
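The loop described in the excerpt above, an agent improving a shape by interacting with a flow environment, can be sketched minimally. The quadratic `drag` function is a toy stand-in for a CFD solver, and all names and parameters below are illustrative, not taken from the cited papers:

```python
import random

def drag(shape):
    # Toy stand-in for a CFD evaluation: a quadratic bowl with its
    # minimum-drag shape at (0.3, 0.1). A real environment would run
    # a flow solution for each candidate geometry.
    return (shape[0] - 0.3) ** 2 + (shape[1] - 0.1) ** 2

def optimize(steps=500, sigma=0.05, seed=0):
    rng = random.Random(seed)
    shape = [0.0, 0.0]                  # initial shape parameters
    best = drag(shape)
    for _ in range(steps):
        # Action: perturb the shape; reward signal: drag reduction.
        cand = [p + rng.gauss(0.0, sigma) for p in shape]
        d = drag(cand)
        if d < best:                    # keep actions that reduced drag
            shape, best = cand, d
    return shape, best

shape, d = optimize()
```

This stochastic hill climb is only the simplest instance of "formulating a policy by interacting with the environment"; the cited work uses a full reinforcement-learning agent in place of the accept/reject rule.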
“…Finally, substituting (8) into (7), we now obtain the following game algebraic Riccati equation (GARE):…”
Section: Problem Statement
Mentioning confidence: 99%
“…The difficulty of obtaining the feedback Nash equilibrium in (8) lies in the solution to the nonlinear GARE in (9). Moreover, both (8) and (9) are dependent on the knowledge of the system dynamics, i.e., A, B_1, …”
Section: Problem Statement
Mentioning confidence: 99%
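The excerpt references a GARE without reproducing equations (7)–(9). For orientation only, a standard two-player zero-sum form is shown below; the symbols and weighting matrices are assumptions, not the citing paper's exact equations:

```latex
% Illustrative zero-sum LQ setting (assumed, not from the citing paper):
% dynamics  \dot{x} = A x + B_1 u + B_2 w,
% cost      J = \int_0^{\infty} \bigl( x^{\top} Q x + u^{\top} R u
%                                      - \gamma^{2} w^{\top} w \bigr)\, dt.
A^{\top} P + P A + Q
  - P B_1 R^{-1} B_1^{\top} P
  + \gamma^{-2} P B_2 B_2^{\top} P = 0,
\qquad
u^{*} = -R^{-1} B_1^{\top} P x, \quad
w^{*} = \gamma^{-2} B_2^{\top} P x .
```

Solving this quadratic matrix equation for P yields the saddle-point feedback policies; the citing paper's point is that doing so requires the dynamics matrices A, B_1, …, which motivates a model-free reinforcement-learning approach.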
“…To handle complex state spaces and achieve better generalization performance, researchers have proposed the concept of function approximators [1,6,7]. Inspired by the success of deep learning, researchers have applied deep neural networks to reinforcement learning algorithms [8-12] and achieved impressive results in a wide range of fields such as Atari 2600 [12], non-zero-sum games [13], missile aerodynamic design [14], and music generation [15].…”
Section: Introduction
Mentioning confidence: 99%
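The function-approximator idea in the excerpt above can be illustrated with a minimal semi-gradient Q-learning loop using linear weights; the chain environment and every name below are illustrative, not from any cited work:

```python
import random

# Semi-gradient Q-learning with a linear function approximator on a
# 5-state chain. With one-hot state features the linear weights reduce
# to one weight per (state, action) pair.
N, ACTIONS = 5, (1, -1)
w = {(s, a): 0.0 for s in range(N) for a in ACTIONS}  # linear weights

def step(s, a):
    # Deterministic chain: move left/right, reward 1 at the right end.
    s2 = min(max(s + a, 0), N - 1)
    r = 1.0 if s2 == N - 1 else 0.0
    return s2, r, s2 == N - 1

rng = random.Random(0)
alpha, gamma, eps = 0.5, 0.9, 0.1
for _ in range(200):                       # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection over the approximated Q-values
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: w[(s, b)])
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(w[(s2, b)] for b in ACTIONS)
        w[(s, a)] += alpha * (target - w[(s, a)])   # TD(0) update
        s = s2

# Greedy policy extracted from the learned weights.
policy = [max(ACTIONS, key=lambda b: w[(s, b)]) for s in range(N)]
```

Deep Q-networks replace the per-state weights with a neural network over raw observations, but the TD update above is the same learning rule the excerpt refers to.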