2019
DOI: 10.1002/rnc.4541
Efficient learning from adaptive control under sufficient excitation

Abstract: Parameter convergence is desirable in adaptive control as it enhances the overall stability and robustness properties of the closed-loop system. In existing online historical data (OHD)-driven parameter learning schemes, all OHD are exploited to update parameter estimates such that parameter convergence is guaranteed under a sufficient excitation (SE) condition, which is strictly weaker than the classical persistent excitation condition. Nevertheless, the exploitation of all OHD not only results in poss…
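As an illustrative sketch only (not the paper's algorithm; the function name, threshold, and data are hypothetical), the flavor of excitation condition that OHD-driven schemes check can be expressed as positive definiteness of the information matrix built from recorded regressor samples: once the stored data span the parameter space, its minimum eigenvalue exceeds zero and parameter convergence becomes possible even without persistent excitation.

```python
import numpy as np

def is_sufficiently_excited(phi_history, threshold=1e-3):
    """Check an excitation condition on stored regressor samples.

    The stacked information matrix M = sum_k phi_k phi_k^T becomes
    positive definite (lambda_min(M) > threshold) once the recorded
    data span the parameter space -- the kind of finite-window
    condition that interval/sufficient excitation requires.
    """
    M = sum(np.outer(p, p) for p in phi_history)
    return bool(np.linalg.eigvalsh(M)[0] > threshold)

# One regressor direction never excites both parameters...
phi_same = [np.array([1.0, 0.0])] * 10
# ...but two independent directions recorded online do.
phi_rich = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

print(is_sufficiently_excited(phi_same))  # False
print(is_sufficiently_excited(phi_rich))  # True
```

Note that `np.linalg.eigvalsh` returns eigenvalues in ascending order, so index 0 is the minimum.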

Cited by 29 publications (75 citation statements)
References 45 publications
“…The reference output x_d is generated by a reference model where x_c = π/3 for t ∈ [5, 10] ∪ [35, 40] seconds, x_c = −π/3 for t ∈ [15, 20] ∪ [45, 50] seconds, and x_c = 0 at all other times. It is clear that the x_d generated by the aforementioned model includes two tasks that are the same, and it does not satisfy the partial PE condition in Lemma 2.…”
Section: Illustrative Results
confidence: 99%
“…26 However, in the existing NNLC methods, the requirement that the trajectory of NN inputs be recurrent is still stringent in practice, and the parameter convergence rate depends heavily on the PE level, which generally results in slow parameter convergence. [28-30] Motivated by composite adaptation, an emerging composite learning technique was proposed to achieve parameter convergence in adaptive control in the absence of PE. [31-37] The difference of composite learning compared with composite adaptation is that online historical data are employed to construct prediction errors, so that closed-loop exponential stability is ensured under an interval excitation (IE) condition, which greatly relaxes the PE condition.…”
Section: Introduction
confidence: 99%
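The composite-learning mechanism quoted above can be sketched as a simple discrete-time simulation. This is illustrative only: the scalar plant, gains, and data-storage rule are assumptions, not the cited papers' designs. A gradient-style term driven by the instantaneous error is combined with a prediction-error term built from data stored during a finite excitation window, so the estimate keeps converging after the excitation stops.

```python
import numpy as np

# True parameter and a regressor that is exciting only over a finite interval
theta_true = 2.0
dt = 0.01
t = np.arange(0.0, 10.0, dt)
phi = np.where(t < 2.0, np.sin(5.0 * t), 0.0)  # excitation stops at t = 2 s
y = theta_true * phi                           # noise-free measurement

theta_hat = 0.0
gamma, k = 5.0, 20.0      # adaptation gain and composite-learning weight
stored = []               # online historical data recorded while excited

for phi_i, y_i in zip(phi, y):
    if abs(phi_i) > 1e-6 and len(stored) < 200:
        stored.append((phi_i, y_i))
    # gradient-style term from the instantaneous output error
    grad = phi_i * (y_i - theta_hat * phi_i)
    # prediction-error term from stored excitation-window data
    pred = sum(pk * (yk - theta_hat * pk) for pk, yk in stored)
    pred /= max(len(stored), 1)
    theta_hat += dt * gamma * (grad + k * pred)

print(round(theta_hat, 2))  # close to theta_true = 2.0
```

After t = 2 s the regressor is zero, so a pure gradient law would freeze with whatever error remains; the stored-data term keeps driving the estimate toward the true value, which is the point of the IE/SE relaxation of PE.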
“…Remark: Composite learning achieves exponential parameter convergence in adaptive control without the deficiencies of the concurrent learning discussed in the preceding remark because: (i) a prediction-error-driven estimation law, rather than gradient-based estimation laws, is developed to guarantee exponential parameter convergence; (ii) an indirect adaptive scheme, rather than direct adaptive schemes, is applied to update θ̂ in the control law; and (iii) the usage of ẋ_n is not necessary due to the interval integral action and the integral transformation.…”
Section: Incorporate with Indirect Adaptive Control
confidence: 99%
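Point (iii) above — identifying a parameter without measuring the state derivative — can be sketched numerically. The scalar plant, signals, and window length here are assumptions for illustration, not the cited design: integrating the regression ẋ = θφ + u over an interval replaces ẋ with the measured difference x(t) − x(t − τ), so θ is recovered from x alone.

```python
import numpy as np

theta_true = 1.5
dt = 0.001
t = np.arange(0.0, 5.0, dt)
phi = np.cos(2.0 * t)
u = 0.3 * np.sin(3.0 * t)

# Simulate the scalar plant x_dot = theta * phi + u (Euler integration)
x = np.zeros_like(t)
for i in range(1, len(t)):
    x[i] = x[i - 1] + dt * (theta_true * phi[i - 1] + u[i - 1])

# Interval-integral regression: x(t) - x(t - tau) = theta * I_phi + I_u,
# so theta is identified from measured x alone -- no x_dot needed.
n_tau = 500  # tau = 0.5 s
num, den = 0.0, 0.0
for i in range(n_tau, len(t)):
    I_phi = dt * np.sum(phi[i - n_tau:i])
    I_u = dt * np.sum(u[i - n_tau:i])
    z = x[i] - x[i - n_tau] - I_u   # equals theta * I_phi
    num += I_phi * z
    den += I_phi * I_phi

print(round(num / den, 2))  # least-squares estimate, near 1.5
```

The least-squares ratio num/den averages the interval regressions over all windows, which also damps measurement noise in a real setting.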