2013 Joint International Conference on Rural Information & Communication Technology and Electric-Vehicle Technology (rICT & ICeV-T)
DOI: 10.1109/rict-icevt.2013.6741546
Application of reinforcement learning on self-tuning PID controller for soccer robot multi-agent system

Cited by 13 publications (10 citation statements)
References 4 publications
“…Online tuning of PID controllers has been thoroughly studied and major approaches include fuzzy inference [6], stochastic approximation from data [7], heuristic optimization [8], and their combinations [9]. More recently, machine learning techniques such as reinforcement learning [10] and supervised learning [11] have been applied to tune PID controllers.…”
Section: State of the Art (mentioning)
confidence: 99%
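As a point of reference for the tuning approaches listed in the excerpt above, the sketch below shows a plain PID law with a gain-update hook. The class and method names are illustrative assumptions, not code from the cited paper or any of the works it cites; an online tuner (fuzzy, stochastic-approximation, heuristic, or learning-based) would call set_gains between control steps while the plant runs.

```python
# Minimal positional-form PID with externally adjustable gains.
# Illustrative sketch only: an online tuning scheme would update
# kp, ki, kd between calls to step().

class TunablePID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt                 # control period in seconds
        self.integral = 0.0
        self.prev_error = 0.0

    def set_gains(self, kp, ki, kd):
        """Hook used by an online tuner to adjust the gains."""
        self.kp, self.ki, self.kd = kp, ki, kd

    def step(self, setpoint, measurement):
        """One control update; returns the actuator command."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The different tuning families in the excerpt differ only in how the new gains are computed; the underlying control law they adjust is the same.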
“…It has the advantage of not relying on a model and of learning effectively for complex systems. To improve control performance, several researchers have combined Q-learning with PID control and proposed effective control methods [33,34]. For autonomous underwater vehicle systems, reference [35] proposed a Q-learning PID controller based on an RBF neural network (RBFNN) to improve control performance, in which a Q-learning neural network adaptively optimizes the control parameters.…”
Section: Introduction (mentioning)
confidence: 99%
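To make the Q-learning-plus-PID idea in this excerpt concrete, here is a minimal tabular sketch: the state is a discretized tracking error, the actions scale Kp, Ki, and Kd, and the reward penalizes the resulting error. All constants, the state and action definitions, and the function names are assumptions made for illustration; the cited work [35] uses an RBF neural network approximator rather than a lookup table.

```python
import numpy as np

# Illustrative tabular Q-learning tuner for PID gains (assumed design,
# not the method of any cited paper).

N_STATES = 7            # discretized tracking-error bins (assumed)
ACTIONS = [             # small multiplicative gain adjustments (assumed)
    (1.0, 1.0, 1.0),                    # keep gains unchanged
    (1.1, 1.0, 1.0), (0.9, 1.0, 1.0),   # scale Kp up / down
    (1.0, 1.1, 1.0), (1.0, 0.9, 1.0),   # scale Ki up / down
    (1.0, 1.0, 1.1), (1.0, 1.0, 0.9),   # scale Kd up / down
]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1       # learning rate, discount, exploration

Q = np.zeros((N_STATES, len(ACTIONS)))

def discretize(error, max_error=1.0):
    """Map a continuous tracking error to a state index in [0, N_STATES-1]."""
    e = np.clip(error / max_error, -1.0, 1.0)
    return int(round((e + 1.0) / 2.0 * (N_STATES - 1)))

def choose_action(state):
    """Epsilon-greedy action selection over the gain adjustments."""
    if np.random.rand() < EPS:
        return np.random.randint(len(ACTIONS))
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])

# Per control step (sketch of the loop):
#   s = discretize(error); a = choose_action(s)
#   kp, ki, kd = kp * ACTIONS[a][0], ki * ACTIONS[a][1], kd * ACTIONS[a][2]
#   apply the PID output with the new gains, observe the next error
#   r = -next_error ** 2                      # reward: penalize tracking error
#   q_update(s, a, r, discretize(next_error))
```

Multiplicative adjustments are used here so the gains stay positive; a function approximator such as the RBFNN mentioned in the excerpt would replace the Q table when the state space is continuous.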
“…The combination of fuzzy PID and RL is reported in Boubertakh and Glorennec (2006); Boubertakh et al (2010). Applications of RL-based self-tuning PID include soccer robots (El Hakim et al, 2013), multicopters (Park et al, 2019), and human-in-the-loop physical assistive control (Zhong and Li, 2019). For process control, the reports in Lawrence et al (2020b,a) consider PID tuning in sample-by-sample and episodic modes, respectively.…”
Section: Introduction (mentioning)
confidence: 99%