2020 12th International Conference on Advanced Computational Intelligence (ICACI)
DOI: 10.1109/icaci49185.2020.9177637
Critic Only Policy Iteration-based Zero-sum Neuro-optimal Control of Modular and Reconfigurable Robots with uncertain disturbance via Adaptive Dynamic Programming

Cited by 4 publications (4 citation statements)
References 12 publications
“…Two control methods are compared in the simulations: the existing learning-based control schemes, such as An et al (2020), An et al (2021), and Dong et al (2019), and the proposed position–force–based zero-sum approximate optimal control method. In the figures, (a) represents joint 1 and (b) represents joint 2.…”
Section: Simulation
Mentioning confidence: 99%
“…To improve robustness, a neuro-optimal method is proposed in Kong et al (2021). Zhu et al (2020) developed position–force optimal control of a manipulator under uncertain disturbance.…”
Section: Introduction
Mentioning confidence: 99%
“…In practical applications, uncertain interference such as continuous environmental contact degrades robot performance. The zero-sum game 23,24 is proposed to solve the approximate optimal control problem. An et al 25 presented a zero-sum game method via policy iteration (PI).…”
Section: Introduction
Mentioning confidence: 99%
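The zero-sum policy-iteration idea mentioned in the excerpt above can be sketched in its simplest linear-quadratic form: control and disturbance are treated as opposing players, each policy is evaluated by a Lyapunov equation, and both policies are then improved until the game algebraic Riccati equation is satisfied. The system matrices and the attenuation level `gamma` below are illustrative assumptions, not values from the cited paper.

```python
# Minimal sketch of policy iteration (PI) for a linear-quadratic zero-sum game.
# Dynamics: x' = A x + B u + D w, with control u and adversarial disturbance w.
# All numerical values are assumed for illustration only.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-1.0, -2.0]])  # stable open-loop dynamics (assumed)
B = np.array([[0.0], [1.0]])              # control input channel
D = np.array([[0.0], [0.5]])              # disturbance input channel
Q, R = np.eye(2), np.eye(1)               # state and control weights
gamma = 5.0                               # disturbance attenuation level (assumed)

K = np.zeros((1, 2))  # control gain,     u = -K x
L = np.zeros((1, 2))  # disturbance gain, w =  L x
for _ in range(100):
    Acl = A - B @ K + D @ L
    # Policy evaluation: solve the Lyapunov equation
    #   Acl' P + P Acl + Q + K'RK - gamma^2 L'L = 0
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K - gamma**2 * L.T @ L))
    # Policy improvement for both players
    K_new = np.linalg.solve(R, B.T @ P)
    L_new = (D.T @ P) / gamma**2
    if np.linalg.norm(K_new - K) + np.linalg.norm(L_new - L) < 1e-9:
        break
    K, L = K_new, L_new
```

At convergence, `P` solves the game algebraic Riccati equation, so the pair `(K, L)` forms a saddle point of the quadratic game; the critic-only neural schemes in the cited works approximate this evaluation/improvement loop for nonlinear robot dynamics rather than solving Lyapunov equations exactly.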