2021
DOI: 10.3390/en14041006

Virtual State Feedback Reference Tuning and Value Iteration Reinforcement Learning for Unknown Observable Systems Control

Abstract: In this paper, a novel Virtual State-feedback Reference Feedback Tuning (VSFRT) and Approximate Iterative Value Iteration Reinforcement Learning (AI-VIRL) are applied for learning linear reference model output (LRMO) tracking control of observable systems with unknown dynamics. For the observable system, a new state representation in terms of input/output (IO) data is derived. Consequently, the Virtual Reference Feedback Tuning (VRFT)-based solution is redefined to accommodate virtual state feedback control, leadi…
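The abstract's central device, re-expressing the state of an observable system through input/output data, can be illustrated with a minimal sketch: for a discrete-time observable system, a finite window of past inputs and outputs acts as a surrogate ("virtual") state that a data-driven state-feedback controller can act on. The function name, window lengths, and placeholder data below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def virtual_state(u_hist, y_hist, k, nu, ny):
    """Stack the last ny outputs and nu past inputs at time k into a virtual state.

    For an observable discrete-time system, the (unmeasured) state can be
    recovered from a finite window of past I/O samples, so this vector can
    stand in for the true state when learning state-feedback control from data.
    """
    y_part = [y_hist[k - i] for i in range(ny)]          # y_k, y_{k-1}, ..., y_{k-ny+1}
    u_part = [u_hist[k - i] for i in range(1, nu + 1)]   # u_{k-1}, ..., u_{k-nu}
    return np.concatenate([np.atleast_1d(v) for v in (y_part + u_part)])

# Build virtual states along a recorded I/O trajectory (placeholder data)
u_hist = np.random.randn(100)   # recorded control inputs
y_hist = np.random.randn(100)   # recorded outputs
states = np.array([virtual_state(u_hist, y_hist, k, nu=3, ny=3) for k in range(3, 100)])
```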

Cited by 20 publications (25 citation statements)
References 48 publications
“…Both 𝒔 and 𝝆 will be replaced by their offline calculated counterparts 𝒔 and 𝝆 following the VSFRT principle. Problem (5) is indirectly solved as the next equivalent controller identification problem [3,4]…”
Section: The Model Reference Control (mentioning)
confidence: 99%
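The excerpt refers to replacing the model-reference control problem with an equivalent controller identification problem solved offline from recorded data. A minimal sketch of what such a fit can look like, assuming a controller that is linear in its parameters (the paper and the citing work also allow NN approximators); the function and variable names are placeholders:

```python
import numpy as np

def identify_controller(S, U, reg=1e-6):
    """Least-squares fit of a linear virtual-state-feedback controller u ~ S @ pi.

    S : (N, n) matrix of offline-computed virtual states (one row per sample)
    U : (N,)   vector of inputs actually applied during data collection
    Returns the parameter vector pi minimizing ||U - S pi||^2 with a small
    Tikhonov regularizer, i.e. the simplest linear instance of the
    "equivalent controller identification problem" mentioned in the excerpt.
    """
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + reg * np.eye(n), S.T @ U)

# Illustrative usage with placeholder data
S = np.random.randn(500, 6)    # virtual states built offline from I/O data
U = np.random.randn(500)       # recorded control inputs
pi = identify_controller(S, U)
u_hat = S @ pi                 # controller output reproducing the recorded inputs
```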
“…where 𝝅 is the controller function parameter, leading to the notation 𝒞(𝒔, 𝝅) (here, the controller can be an NN or another type of approximator) [4,5]. In [4,5], it was motivated why the reference model state should not be included within 𝒔, because the former correlates with 𝝆.…”
Section: The Model Reference Control (mentioning)
confidence: 99%
“…NN-based RL is different from supervised learning: it is a learning method that obtains training information from the environment. In recent years, it has made great progress in the field of intelligent control by combining RL with traditional control methods for reference tracking [25]–[27]. In [25], a novel virtual state-feedback tuning and value iteration RL were applied for learning linear reference model output tracking control of observable systems with unknown dynamics.…”
Section: Introduction (mentioning)
confidence: 99%
“…In recent years, it has made great progress in the field of intelligent control by combining RL with traditional control methods for reference tracking [25]–[27]. In [25], a novel virtual state-feedback tuning and value iteration RL were applied for learning linear reference model output tracking control of observable systems with unknown dynamics. In [26], the authors use an RL method to improve the tracking accuracy and robustness of H₂ control.…”
Section: Introduction (mentioning)
confidence: 99%
“…Recent RL development for nonlinear control systems has implications for aircraft guidance tasks. A Virtual State-feedback Reference Feedback Tuning (VSFRT) method [22] was applied to the control of unknown observable systems. In [23], a hierarchical soft actor–critic algorithm was proposed for task allocation, which significantly improved the efficiency of the intelligent system.…”
Section: Introduction (mentioning)
confidence: 99%