2015 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2015.7280723
A comparative study between motivated learning and reinforcement learning

Abstract: This paper analyzes advanced reinforcement learning techniques and compares some of them to motivated learning. Motivated learning is briefly discussed, indicating its relation to reinforcement learning. A black box scenario for comparative analysis of learning efficiency in autonomous agents is developed and described. This is used to analyze selected algorithms. Reported results demonstrate that in the selected category of problems, motivated learning outperformed all reinforcement learning algorithms we comp…

Cited by 5 publications (5 citation statements)
References 31 publications
“…A motivated agent, like one that uses reinforcement learning, learns how to achieve its goals in dynamically changing environments. Our prior works [28], [29], [7], and [30] demonstrate that motivated learning is more efficient than reinforcement learning in such environments. The environmental graph represents the environment by showing the relationships between resources and actions.…”
Section: Discussion
confidence: 93%
“…The agent's environment, as described in Table 2, is relatively simple. However, as demonstrated in previous works [29] and [7], reinforcement learning systems that lack internally set objectives struggle even in such a simple environment, because they perform poorly in non-stationary environments.…”
Section: Experiments With Motivated Agent
confidence: 96%
“…There exist many RL variants that differ in their complexity (such as Q-learning, SARSA, HRL, and Dyna-Q+) [13]; however, RL approaches suffer from many limitations based on their core design and assumptions, which we summarize as follows:…”
Section: The Motivation and Learning Problem
confidence: 99%
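The variants named in the statement above (Q-learning, SARSA, Dyna-Q+) share the same tabular value backup at their core. A minimal sketch of the Q-learning update, using a dictionary-based table and hypothetical toy states (none of this comes from the cited paper), is:

```python
# Minimal tabular Q-learning update (illustrative sketch, not the paper's code).
def q_learning_step(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Apply one Q-learning backup:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    best_next = max(Q[next_state].values())  # greedy estimate over next-state actions
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    return Q[state][action]

# Toy two-state example: moving "right" from state 0 reaches state 1 with reward 1.
Q = {0: {"left": 0.0, "right": 0.0}, 1: {"left": 0.0, "right": 0.0}}
q_learning_step(Q, state=0, action="right", reward=1.0, next_state=1)
```

SARSA differs only in replacing the `max` over next-state actions with the value of the action actually taken, and Dyna-Q+ adds simulated planning updates from a learned model on top of this same backup.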
“…However, real-world problems have sensory inputs with a potentially infinite number of states; thus, they require approximation of value functions and action policies to be effective. Hence, many RL variants have been introduced, including online algorithms, policy gradient, actor-critic methods, and simulation-based policy iteration [13]. - They tend to learn very slowly, which leads to their poor performance in dynamic environments.…”
Section: The Motivation and Learning Problem
confidence: 99%
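The move from tables to value-function approximation mentioned in the statement above can be illustrated with a semi-gradient TD(0) update on a linear value estimate v(s) = w · φ(s). The feature vectors and step size below are illustrative assumptions, not details from the cited work:

```python
import numpy as np

# Semi-gradient TD(0) for a linear value estimate v(s) = w . phi(s)
# (illustrative sketch; features and hyperparameters are assumptions).
def td0_update(w, phi_s, phi_next, reward, alpha=0.05, gamma=0.9):
    """Shift the weights along phi(s) in proportion to the TD error."""
    td_error = reward + gamma * float(np.dot(w, phi_next)) - float(np.dot(w, phi_s))
    return w + alpha * td_error * phi_s

# One update from a state with feature [1, 0] to a state with feature [0, 1].
w = np.zeros(2)
w = td0_update(w, np.array([1.0, 0.0]), np.array([0.0, 1.0]), reward=1.0)
```

Actor-critic methods extend this idea by maintaining a second parameterized function (the policy) that is adjusted using the same TD error as its learning signal.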