2018
DOI: 10.1109/tnnls.2018.2790479

SGD-Based Adaptive NN Control Design for Uncertain Nonlinear Systems

Abstract: In this paper, a stochastic gradient descent (SGD)-based adaptive neural network (NN) control scheme is presented for a class of uncertain nonlinear systems. The introduction of the SGD algorithm results in a better tracking performance compared with some other adaptive NN methods without using SGD. This is because the proposed SGD-based adaptive NN control strategy provides optimization algorithms for the weights, the widths, and the centers of the NNs, which can achieve a good function approximation performance…
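The abstract describes tuning not only the output weights but also the widths and centers of the NNs by SGD. As an illustrative sketch only (not the paper's control scheme, whose update laws are not reproduced here), the following shows plain SGD on the weights, centers, and widths of a Gaussian radial-basis-function network fitting an unknown scalar nonlinearity; the sine target, learning rate, and all variable names are assumptions made for the demo.

```python
import numpy as np

# Hypothetical demo: SGD on all three parameter groups of a Gaussian RBF
# network f(x) = sum_i w_i * exp(-(x - c_i)^2 / (2 s_i^2)).
rng = np.random.default_rng(0)
n_units, lr = 8, 0.02

w = rng.normal(scale=0.1, size=n_units)   # output weights
c = np.linspace(-2.0, 2.0, n_units)       # centers
s = np.full(n_units, 0.5)                 # widths

def phi(x):
    """Gaussian basis activations for scalar input x."""
    return np.exp(-(x - c) ** 2 / (2.0 * s ** 2))

def sgd_step(x, y):
    """One SGD step on the squared error 0.5 * (f(x) - y)^2."""
    global w, c, s
    p = phi(x)
    e = w @ p - y                              # prediction error
    gw = e * p                                 # dL/dw_i
    gc = e * w * p * (x - c) / s ** 2          # dL/dc_i
    gs = e * w * p * (x - c) ** 2 / s ** 3     # dL/ds_i
    w -= lr * gw
    c -= lr * gc
    s -= lr * gs

def mse(xs, ys):
    return float(np.mean([(w @ phi(x) - y) ** 2 for x, y in zip(xs, ys)]))

xs = rng.uniform(-2.0, 2.0, 200)
ys = np.sin(xs)                                # unknown nonlinearity to fit

loss_before = mse(xs, ys)
for _ in range(30):                            # a few SGD epochs
    for x, y in zip(xs, ys):
        sgd_step(x, y)
loss_after = mse(xs, ys)
```

Adapting the centers and widths, rather than only the weights, is what lets the basis functions move toward the region of the state space actually visited, which is the intuition behind the improved approximation the abstract claims.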

Cited by 38 publications (11 citation statements)
References 27 publications
“…In order to address the above problem, the first group of studies is directed towards different SGD modifications such as SGD with adopting extended differentiators [61], random reshuffling [62], local changes in gradients [63], [64] and etc.…”
Section: A. Adaptive Optimization (citation type: mentioning)
confidence: 99%
“…In order to address the above problem, the first group of studies is directed towards different SGD modifications such as SGD with adopting extended differentiators [62], random reshuffling [63], local changes in gradients [64], [65] and etc.…”
Section: A. Adaptive Optimization (citation type: mentioning)
confidence: 99%
“…Besides, discrete variables in vector {I, B, H} are relaxed to continuous variables. Variables i m , b m , and h g are updated according to the SGD method in [41] to minimize the loss function.…”
Section: Function Approximation Based Algorithm for Large-Scale Network (citation type: mentioning)
confidence: 99%