Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence 2017
DOI: 10.24963/ijcai.2017/287
Learning Sparse Representations in Reinforcement Learning with Sparse Coding

Abstract: A variety of representation learning approaches have been investigated for reinforcement learning; much less attention, however, has been given to investigating the utility of sparse coding. Outside of reinforcement learning, sparse coding representations have been widely used, with non-convex objectives that result in discriminative representations. In this work, we develop a supervised sparse coding objective for policy evaluation. Despite the non-convexity of this objective, we prove that all local minima a…
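
For a concrete picture of the kind of objective the abstract describes, here is a minimal sketch: the sparse codes must both reconstruct the observed states and predict value targets linearly, with an L1 penalty keeping the codes sparse. The loss weights (eta, lam), the alternating update scheme, and all variable names below are illustrative assumptions, not the paper's formulation.

import numpy as np

# A minimal sketch (not the paper's formulation) of a supervised sparse
# coding objective for policy evaluation: the codes H must reconstruct
# the observed states X while predicting value targets Y linearly, with
# an L1 penalty keeping H sparse.

def loss(X, Y, D, H, w, eta=1.0, lam=0.1):
    recon = np.sum((X - H @ D) ** 2)   # unsupervised reconstruction error
    value = np.sum((Y - H @ w) ** 2)   # supervised value-prediction error
    return recon + eta * value + lam * np.sum(np.abs(H))

def fit(X, Y, k=50, iters=200, step=1e-3, eta=1.0, lam=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    D = rng.normal(size=(k, d)) / np.sqrt(d)   # dictionary
    H = np.zeros((n, k))                       # sparse codes, one row per state
    w = np.zeros(k)                            # value weights
    for _ in range(iters):
        # subgradient step on the codes
        grad_H = 2 * (H @ D - X) @ D.T + 2 * eta * np.outer(H @ w - Y, w)
        H -= step * (grad_H + lam * np.sign(H))
        # closed-form ridge refits of D and w with H held fixed
        A = H.T @ H + 1e-6 * np.eye(k)
        D = np.linalg.solve(A, H.T @ X)
        w = np.linalg.solve(A, H.T @ Y)
    return D, H, w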

Cited by 14 publications (6 citation statements, published 2018–2024); references 27 publications.

Citation statements (ordered by relevance):
“…This involves penalising changes to weights deemed important for old tasks [14] or enforcing weight or representational sparsity [3] to ensure that only a subset of neurons remain active at any point of time. The latter method has been shown to reduce the possibility of catastrophic interference across tasks [15,26].…”
Section: Related Work
Citation type: mentioning
Confidence: 99%
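
As a concrete illustration of the representational-sparsity idea in this quote, the sketch below adds an L1 penalty on hidden activations so that only a subset of neurons stays active for any given input. The architecture and the penalty weight beta are illustrative choices, not taken from the cited papers.

import torch
import torch.nn as nn

class SparseNet(nn.Module):
    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        self.hidden = nn.Linear(d_in, d_hidden)
        self.head = nn.Linear(d_hidden, d_out)

    def forward(self, x):
        h = torch.relu(self.hidden(x))  # hidden representation
        return self.head(h), h

def loss_fn(net, x, y, beta=1e-3):
    pred, h = net(x)
    # task loss plus an L1 penalty on the activations, encouraging only
    # a few units to fire per input
    return nn.functional.mse_loss(pred, y) + beta * h.abs().mean()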
“…Besides aligning gradients, meta-learning algorithms show promise for CL since they can directly use the meta-objective to influence model optimisation and improve on auxiliary objectives like generalisation or transfer. This avoids having to define heuristic incentives like sparsity [15] for better CL. The downside is that they are usually slow and hard to tune, effectively rendering them more suitable for offline continual learning [12,22].…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
“…Also, it was determined that SR is the mechanism in the primary visual cortex to achieve concise description of images in terms of features and considered as a main principle to efficiently represent complex data [13], [14]. Furthermore, SR is used to enforce the learning process [15] and to solve optimization problems with non-convex objective function [16]. Hence, SR-based classification (SRC) [9]- [12] methods have proven to be efficient and robust to noise, occlusion, and corruption.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
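
The SRC scheme this quote references can be summarized in a few lines: code a test sample as a sparse combination of the training samples, then assign the class whose atoms give the smallest reconstruction residual. The sketch below uses a few ISTA iterations to obtain the sparse code; the iteration count and penalty weight are illustrative assumptions.

import numpy as np

def ista(A, y, lam=0.1, iters=100):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L                           # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft-threshold
    return x

def src_classify(A, labels, y, lam=0.1):
    """A: training samples as columns; labels: class of each column; y: test sample."""
    x = ista(A, y, lam)
    classes = np.unique(labels)
    # assign the class whose coefficients best reconstruct y
    resid = [np.linalg.norm(y - A[:, labels == c] @ x[labels == c]) for c in classes]
    return classes[int(np.argmin(resid))]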
“…Early work comparing gradient TD algorithms [Maei et al., 2009] used sampled trajectories, 2500 of them, but compared to returns rather than value estimates. For several empirical studies using benchmark domains, like Mountain Car and Acrobot, there are a variety of choices, including t = m = 500 [Gehring et al., 2016]; m = 2000, t = 300 and 1000 length rollouts [Pan et al., 2017]; and m = 5000, t = 5000 [Le et al., 2017]. For a continuous physical system, [Dann et al., 2014] used as little as 10 rollouts from a state.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
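
The evaluation protocol this quote describes, estimating "true" values by averaging many Monte Carlo rollouts from sampled states, can be sketched as follows. Here sample_state, step, and policy are assumed stand-ins for the environment and the policy under evaluation, and the defaults for m and t mirror the orders of magnitude quoted above.

import numpy as np

def make_rollout_return(step, policy, gamma=0.99, horizon=1000):
    """step(s, a) -> (reward, next_state, done); returns a sampler of
    truncated discounted returns under the given policy."""
    def rollout_return(s):
        g, discount = 0.0, 1.0
        for _ in range(horizon):
            r, s, done = step(s, policy(s))
            g += discount * r
            discount *= gamma
            if done:
                break
        return g
    return rollout_return

def estimate_values(sample_state, rollout_return, m=500, t=500):
    """Average t independent return samples from each of m sampled states."""
    states = [sample_state() for _ in range(m)]
    values = np.array([np.mean([rollout_return(s) for _ in range(t)])
                       for s in states])
    return states, values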