Learning in compressed space
2013 · DOI: 10.1016/j.neunet.2013.01.020

Cited by 12 publications (5 citation statements) · References 17 publications
“…In [164], a nonconvex integrated transformed L1 regularizer applied to the weight matrix space is introduced to remove redundant connections and unnecessary neurons simultaneously. It is shown in [165] that compressing the weights of a layer has the same effect as compressing the input of the layer.…”
Section: Regularization-based Methods (mentioning)
Confidence: 99%
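The equivalence quoted above from [165] reduces, for a linear layer, to associativity of matrix multiplication: if the full weight matrix is constrained to W = AΦ for a fixed basis Φ, then Wx = A(Φx), so learning the small matrix A on the projected input Φx is the same computation as learning the compressed weights directly. A minimal numpy sketch of this identity; the dimensions and the random basis Φ are illustrative assumptions, not details from the cited paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 100, 10, 5                 # input dim, compressed dim, output dim

Phi = rng.standard_normal((k, n))    # fixed compression basis (illustrative)
A = rng.standard_normal((m, k))      # compressed, learnable parameters
x = rng.standard_normal(n)           # layer input

# View 1: decompress the weights, then apply them to the raw input.
W = A @ Phi                          # full m x n weight matrix, rank <= k
y_from_compressed_weights = W @ x

# View 2: compress the input, then apply the small weight matrix.
y_from_compressed_input = A @ (Phi @ x)

# Both views compute (A @ Phi) @ x = A @ (Phi @ x).
assert np.allclose(y_from_compressed_weights, y_from_compressed_input)
```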
“…Reliable and successful solutions come from the fields of RL [14–23], optimal control [24], and black-box optimization [25–30]. These approaches often rely on special types of policies for robotic problems, for example, dynamical movement primitives [31–35]. For details on RL in robotics, please refer to Kober et al. [36]. BOLERO provides implementations for several of these algorithms.…”
Section: Background and Related Work (mentioning)
Confidence: 99%
“…Previous work (Koutnik et al., 2010; Fabisch et al., 2013) has already demonstrated that compression of the parameter space of multilayer perceptrons (MLPs) leads to an acceleration of optimization for reinforcement tasks. Reducing the search space via manifolds for value-function approximation, and abstracting the whole state space into subareas for terrain navigation, can likewise be beneficial in reinforcement learning.…”
Section: Optimization In Hybrid Spaces (mentioning)
Confidence: 99%
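The acceleration mentioned in this excerpt comes from running the optimizer in a low-dimensional coefficient space that is decoded into the full weight vector before each evaluation. A minimal sketch, assuming a random linear decoder, a stand-in objective in place of a real policy rollout, and a simple (1+1) evolution strategy; the cited works use other bases and optimizers:

```python
import numpy as np

rng = np.random.default_rng(1)
D, d = 500, 20                          # full MLP weight count vs. search dim

decoder = rng.standard_normal((D, d))   # fixed linear decoder (illustrative)

def episode_return(weights):
    """Stand-in for a policy rollout: any black-box score of the weights."""
    return -np.sum((weights - 0.5) ** 2)

# (1+1) evolution strategy in the compressed space: only d = 20 numbers are
# perturbed per step instead of all D = 500 weights.
z = np.zeros(d)
best = episode_return(decoder @ z)
for _ in range(2000):
    candidate = z + 0.1 * rng.standard_normal(d)
    score = episode_return(decoder @ candidate)
    if score > best:
        z, best = candidate, score

print("best return found in compressed space:", best)
```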