2016 IEEE 55th Conference on Decision and Control (CDC)
DOI: 10.1109/cdc.2016.7799164
Actor-critic reinforcement learning for tracking control in robotics

Cited by 22 publications (11 citation statements). References 6 publications.
“…• We extend our initial results from Pane et al (2016). Based on the RL-based control input compensator from Pane et al (2016), in this work a novel RL-based method, called the reference compensation method, is developed.…”
Section: Introduction (mentioning)
confidence: 87%
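The control input compensator cited above augments a nominal tracking controller with a learned corrective term rather than replacing the controller. Below is a minimal sketch of that structure; the PD gains, the linear `actor` policy, and all signal names are illustrative assumptions, not details from Pane et al (2016).

```python
import numpy as np

def nominal_pd_control(e, e_dot, kp=50.0, kd=5.0):
    """Nominal PD tracking law; gains are illustrative placeholders."""
    return kp * e + kd * e_dot

def actor(state, weights):
    """Stand-in for the learned actor of the actor-critic compensator:
    a linear policy mapping the tracking-error state to a correction."""
    return weights @ state

def compensated_input(q_ref, q, qd_ref, qd, weights):
    """Total input = nominal controller output + RL compensation."""
    e, e_dot = q_ref - q, qd_ref - qd
    u_nom = nominal_pd_control(e, e_dot)
    u_rl = actor(np.array([e, e_dot]), weights)
    return u_nom + u_rl

# Single-joint example step with arbitrary numbers:
u = compensated_input(1.0, 0.8, 0.0, 0.1, weights=np.array([0.3, 0.05]))
```

In training, the actor weights would be updated from a critic's temporal-difference error; the sketch only shows how the compensating term enters the control signal.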
“…PD is used as the baseline for a model-free, non-adaptive control method, while MPC is used as the reference for a model-based control framework, and finally ILC is chosen as the baseline for a model-free, adaptive method. In Pane et al (2016) we only compared against PD.…”
Section: Introduction (mentioning)
confidence: 99%
“…AC compensators were designed, which were used to reduce the tracking error of a multiple-DOF industrial robot manipulator. A variety of real-world robotic manipulation tasks, such as dish placement and pouring, used policy optimization to adaptively sample trajectories and to effectively learn good global costs for complex robotic motion skills from user demonstrations.…”
Section: Trajectory and Route Tracking (mentioning)
confidence: 99%
“…In ILC, the tracking performance is improved by adjusting control inputs or reference signals in repeated trials (Bristow et al, 2006; Schoellig et al, 2012; Tayebi, 2004). In addition to ILC, reinforcement learning (RL)-based approaches have also been proposed to iteratively optimize the tracking performance (Kiumarsi et al, 2014; Pane et al, 2016; Zhang et al, 2016a). Apart from iterative approaches, there are also various works on improving the tracking performance of classical model-based controllers by learning the uncertain or unknown system dynamics with techniques such as Gaussian processes (GPs) (Helwa et al, 2018; Nguyen-Tuong and Peters, 2008), neural networks (NNs) (He et al, 2016; Yan and Wang, 2014), and support vector machines (SVMs) (Iplikci, 2006).…”
Section: Introduction (mentioning)
confidence: 99%
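The trial-to-trial improvement described in this excerpt can be made concrete with a first-order (P-type) ILC update, u_{j+1}(t) = u_j(t) + L e_j(t). The sketch below uses a toy static plant and a hypothetical learning gain `L` purely to illustrate the repeated-trial mechanism; none of it is taken from the cited works.

```python
import numpy as np

def run_trial(u, plant, y_ref):
    """Execute one trial: apply the input sequence u, return the error."""
    return y_ref - plant(u)

def ilc_update(u, e, L=0.5):
    """P-type ILC: u_{j+1}(t) = u_j(t) + L * e_j(t)."""
    return u + L * e

# Toy static plant standing in for the real system (illustrative only).
plant = lambda u: 0.8 * u
y_ref = np.sin(np.linspace(0, 2 * np.pi, 100))

u = np.zeros_like(y_ref)
for trial in range(20):          # repeated trials shrink the error
    e = run_trial(u, plant, y_ref)
    u = ilc_update(u, e)
print("final max error:", np.abs(e).max())
```

With this toy plant the error contracts by a factor of 0.6 per trial, which is why the loop converges; real ILC designs choose L from a model or robustness analysis.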
“…Unlike the iterative learning methods (ILC approaches and some RL-based approaches such as Pane et al (2016)), the proposed DNN approach can be directly used for tracking arbitrary, feasible trajectories without further adaptations during the testing phase, and, consequently, it satisfies the impromptu tracking requirement. Compared to more common approaches (such as forward or inverse dynamic learning) where the learning component typically resides in the main control loop, we use the DNN module as an add-on block that is placed outside of the closed-loop system to improve the tracking performance.…”
Section: Introduction (mentioning)
confidence: 99%
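The add-on structure described in this last excerpt keeps the existing closed loop intact and places the learned module outside it, pre-adjusting the reference so no per-trajectory adaptation is needed at test time. A minimal sketch under those assumptions, with a randomly initialized two-layer network standing in for the trained DNN and a toy double-integrator plant (all names and dimensions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder two-layer MLP standing in for the trained DNN module.
W1, b1 = 0.1 * rng.normal(size=(16, 2)), np.zeros(16)
W2, b2 = 0.1 * rng.normal(size=(1, 16)), np.zeros(1)

def dnn_module(desired, measured):
    """Maps (desired output, measured output) to a reference correction."""
    h = np.tanh(W1 @ np.array([desired, measured]) + b1)
    return (W2 @ h + b2)[0]

def baseline_controller(r, y, y_dot, kp=20.0, kd=2.0):
    """Existing closed-loop controller; untouched by the add-on block."""
    return kp * (r - y) - kd * y_dot

# The DNN adjusts the reference *before* it enters the closed loop.
y, y_dot, dt = 0.0, 0.0, 0.01
for y_des in np.sin(np.linspace(0, np.pi, 200)):
    r_adj = y_des + dnn_module(y_des, y)   # add-on reference correction
    u = baseline_controller(r_adj, y, y_dot)
    y_dot += dt * u                        # toy double-integrator plant
    y += dt * y_dot
```

Because the module only transforms the reference, any feasible trajectory can be fed through it at test time without retraining the inner loop, which is the impromptu-tracking property the excerpt highlights.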