2017
DOI: 10.3233/ifs-162212

A fuzzy-based function approximation technique for reinforcement learning

Cited by 7 publications (1 citation statement)
References 13 publications
“…Compared to Q-learning, Sarsa-learning selects its action according to the current policy and updates the action-value function under that same policy. Q-learning simplifies algorithm analysis and eases convergence proofs, but Sarsa-learning achieves higher learning efficiency and a faster convergence rate [50]. The algorithm employs a simple update process for value iteration.…”
Section: Sarsa: Online Control for CWSN
confidence: 99%
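The distinction the citing authors draw comes down to the bootstrap target of the two update rules. Below is a minimal tabular sketch in Python, offered only as an illustration; the action set, hyperparameters, and function names are assumptions and are not taken from the cited paper or the citing work.

```python
import random
from collections import defaultdict

# Illustrative hyperparameters and action set (assumptions, not from the paper).
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
ACTIONS = [0, 1, 2, 3]

Q = defaultdict(float)  # Q[(state, action)] -> estimated action value

def epsilon_greedy(state):
    """Behaviour policy shared by both methods."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def sarsa_update(s, a, r, s_next, a_next):
    """On-policy: bootstraps from the action the policy actually chose next."""
    target = r + GAMMA * Q[(s_next, a_next)]
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

def q_learning_update(s, a, r, s_next):
    """Off-policy: bootstraps from the greedy (max) action, regardless of the
    action the behaviour policy will actually take next."""
    target = r + GAMMA * max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])
```

Because Sarsa bootstraps from the action the behaviour policy actually takes, its value estimates track the policy being executed, whereas Q-learning always bootstraps from the greedy action; this is the on-policy versus off-policy contrast referred to in the statement above.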