Decoupled Visual Servoing With Fuzzy Q-Learning (2018)
DOI: 10.1109/tii.2016.2617464

Cited by 101 publications (47 citation statements). References 31 publications.
“…As visual information has proven its potential in image-based control for autonomous mobile robots [23] and unmanned aerial vehicles [24], [25], the obtained images are employed to represent the state in this paper, which is then used to predict actions. The last state s_{t−1} serves as the input and consists of two parts, as shown in Fig.…”
Section: A. State Representation (mentioning)
Confidence: 99%
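The excerpt does not specify what the two parts of the state are, so the sketch below is a hypothetical illustration only: it assumes the state is built from two grayscale image parts (say, the current camera frame and a reference frame) stacked into one tensor for the action-prediction network. The shapes, the two-part split, and the function name make_state are all assumptions, not the cited paper's design.

```python
# Hypothetical image-based state construction; the two-part split and
# the 84x84 size are illustrative assumptions, not the paper's setup.
import numpy as np

H, W = 84, 84  # assumed downsampled image size

def make_state(current_frame: np.ndarray, reference_frame: np.ndarray) -> np.ndarray:
    """Stack the two grayscale image parts into one (2, H, W) state tensor."""
    assert current_frame.shape == (H, W) and reference_frame.shape == (H, W)
    return np.stack([current_frame, reference_frame], axis=0)

state = make_state(np.zeros((H, W), np.float32), np.ones((H, W), np.float32))
print(state.shape)  # (2, 84, 84), fed to the network that predicts the action
```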
“…, T − 1 is the real-time reward. Equations (8)–(10) give the updating method of the state-action function.…”
Section: An Improved Q-Learning Method in Semi-Markov Decision Processes (mentioning)
Confidence: 99%
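The excerpt refers to the citing paper's equations (8)–(10) without reproducing them, and those exact equations stay elided here. For orientation only, a standard one-step update of the state-action function, which semi-Markov (SMDP) variants typically extend by discounting over the sojourn time, has the generic textbook form below; this is an assumption about the family of update rules, not the cited paper's equations.

```latex
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
  + \alpha \Big[ r_t + \gamma^{\tau_t} \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \Big]
```

Here α is the learning rate, γ the discount factor, and τ_t the sojourn time in state s_t; setting τ_t = 1 recovers ordinary Q-learning.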
“…Reinforcement learning (RL) [9] is an effective machine learning method, and the goal of reinforcement learning is for an agent to learn an optimal action strategy and obtain optimal rewards. In recent years it has attracted extensive attention from scholars [10][11][12].…”
Section: The Robot Confrontation System (mentioning)
Confidence: 99%
“…The agents are intelligent and autonomous in the sense that each operates independently and, by using the reward/penalty it receives from the environment, serves the main purpose of the game: reaching the game's solution. The Q-learning strategy considered in this paper to solve the RL-based game-theory problem is a model-free and simple solution mechanism [39]. According to Figure 1, because RL learns the optimal control policy (OCP) by interacting with the environment [40], this paper first performs offline simulations in which the intelligent agents use trial and error to extract the OCP.…”
Section: Introduction (mentioning)
Confidence: 99%
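The excerpt describes model-free Q-learning extracting an optimal control policy by trial and error in offline simulation. The sketch below shows that mechanism on a deliberately tiny toy problem: epsilon-greedy exploration, the one-step Q-learning update, and a greedy policy read off the learned table afterwards. The chain environment, hyperparameters, and all names are illustrative assumptions, not the cited paper's setup.

```python
# Minimal sketch of model-free tabular Q-learning with epsilon-greedy
# trial-and-error exploration, extracting a control policy offline.
# The toy chain environment and hyperparameters are assumptions.
import random

N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]    # move left or right along the chain
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy dynamics: reward 1.0 on reaching the goal state, else 0."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

for episode in range(500):      # offline simulation phase
    s, done = 0, False
    while not done:
        # Epsilon-greedy: explore with probability EPSILON, else exploit.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next, r, done = step(s, a)
        # One-step model-free Q-learning update of the state-action table.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

# Greedy policy extracted after training, i.e. the learned OCP.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
print(policy)
```

After training, the greedy policy maps every state to the action with the highest learned Q-value, which is the sense in which the table encodes the extracted control policy.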