AIAA Scitech 2020 Forum 2020
DOI: 10.2514/6.2020-1846
Design and evaluation of advanced intelligent flight controllers

Abstract: Reinforcement-learning-based methods could be capable of solving adaptive optimal control problems for nonlinear dynamical systems. This work presents a proof of concept for applying reinforcement-learning-based methods to robust and adaptive flight control tasks. A framework for designing and examining these methods is introduced by means of the open research civil aircraft model (RCAM) and optimality criteria. A state-of-the-art robust flight controller, the incremental nonlinear dynamic inversion (INDI) co…

Cited by 4 publications (5 citation statements). References 32 publications.
“…Sensory NDI [10,38,39], but incremental NDI as well, replaces the dependency on the internal dynamics f(x) with the sensor measurements x̂ and û, yielding the implicit control law…”
Section: B. Dynamic Inversion
confidence: 99%
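The statement above describes the core INDI idea: instead of inverting a full model of the internal dynamics f(x), the controller uses the measured state derivative and the previous control input, so f(x) never appears explicitly. A minimal generic sketch of that incremental control law is shown below; the function name, the toy system, and the control-effectiveness matrix `G` are illustrative assumptions, not the controller evaluated in the paper.

```python
import numpy as np

def indi_increment(nu, x_dot_meas, u_prev, G):
    """One step of a generic incremental NDI (INDI) control law.

    For a system x_dot = f(x) + G(x) u, INDI replaces the model-based
    inversion of f(x) with the measured derivative x_dot_meas taken at
    the previous control u_prev:

        u = u_prev + G^{-1} (nu - x_dot_meas)

    nu         : virtual control (desired state derivative)
    x_dot_meas : measured/estimated state derivative at u_prev
    G          : control effectiveness matrix (assumed known/identified)
    """
    du = np.linalg.solve(G, nu - x_dot_meas)  # incremental control
    return u_prev + du

# Toy scalar example: x_dot = f(x) + 2 u, with f(x) never modeled.
G = np.array([[2.0]])
u = indi_increment(np.array([1.0]), np.array([0.5]), np.array([0.1]), G)
```

Because only the increment is computed, modeling errors in f(x) largely cancel, which is the source of INDI's robustness noted in the cited work.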
“…This baseline controller is a rate-command, attitude-hold controller based on gain-scheduled proportional and integral feedback, see [3] for details. For the flight experiments on the Cessna Citation, however, an INDI inner loop controller [8,12] and an LPV controller [7] are available. Both have the same input-output interface as the HALE baseline controller, i.e., receiving pitch angle commands from the autopilot and generating the elevator deflection angle δ_e.…”
Section: TECS Flight Control System
confidence: 99%
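The rate-command/attitude-hold structure mentioned in this statement can be sketched as PI feedback on the pitch-rate error: the integrator holds attitude once the commanded rate returns to zero. The class below is a minimal illustrative sketch under that interpretation; the gains are placeholders, not the gain-scheduled values of the referenced baseline controller.

```python
class RateCommandAttitudeHold:
    """Minimal rate-command/attitude-hold pitch-axis sketch.

    PI feedback on the pitch-rate error q_cmd - q_meas produces an
    elevator deflection command; the integral term makes the loop hold
    the current attitude when q_cmd = 0. Gains are illustrative only.
    """

    def __init__(self, kp=0.8, ki=0.4):
        self.kp = kp
        self.ki = ki
        self.integ = 0.0  # integrated rate error (attitude-hold term)

    def step(self, q_cmd, q_meas, dt):
        err = q_cmd - q_meas
        self.integ += err * dt
        return self.kp * err + self.ki * self.integ  # elevator command
```

In practice such a loop sits below an autopilot that supplies pitch-angle commands, matching the input-output interface described in the quote.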
“…In order to reduce the time and workload of identifying model information during the autopilot design process, reference [24] developed a learning-based design method for a UAV autopilot using the DDPG algorithm by designing appropriate observation and reward functions. In addition, reference [25] made a comparative analysis of a PID neural network controller and a DDPG controller and also provided a proof of concept that reinforcement learning can effectively solve the adaptive optimal control problem of nonlinear dynamic systems. However, most of the methods in [19,20,22,24] and [25] were designed and analyzed on a non-global profile state rather than the entire flight envelope, which led to insufficient consideration of global constraints and performance indicators.…”
Section: Introduction
confidence: 99%
“…In addition, reference [25] made a comparative analysis of a PID neural network controller and a DDPG controller and also provided a proof of concept that reinforcement learning can effectively solve the adaptive optimal control problem of nonlinear dynamic systems. However, most of the methods in [19,20,22,24] and [25] were designed and analyzed on a non-global profile state rather than the entire flight envelope, which led to insufficient consideration of global constraints and performance indicators. When there was great uncertainty in the flight environment, the autonomy and robustness of the nominal trajectory tracking guidance mode were poor.…”
Section: Introduction
confidence: 99%