2019
DOI: 10.1007/978-981-15-1398-5_14

Neural Networks as Model Selection with Incremental MDL Normalization

Cited by 2 publications (4 citation statements)
References 11 publications
“…To put this direction in perspective, we mention that this line of work is a series of three papers. The entry point is [16], where we propose a perspective that understands neural network optimization as a partially observable model selection problem. In our subsequent work [18], we introduce the details of how to approximate the minimum description length (MDL) between neural network layers and demonstrate that using MDL as the regularity information is useful, from an engineering angle, for neural networks to learn from certain input data distributions.…”
Section: Results
confidence: 99%
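
As a rough illustration of the MDL approximation this statement refers to, here is a minimal sketch (not the authors' implementation) of a two-part description length for a layer's activations. The diagonal-Gaussian model, the fixed parameter precision, and the function name `gaussian_code_length_bits` are all illustrative assumptions.

```python
import numpy as np

def gaussian_code_length_bits(activations, param_precision_bits=32):
    """Crude two-part MDL estimate for a batch of layer activations.

    L(model): cost of the diagonal-Gaussian parameters (mean, var per unit).
    L(data | model): negative log2-likelihood of the activations under it.
    """
    d = activations.shape[1]
    mu = activations.mean(axis=0)
    var = activations.var(axis=0) + 1e-8  # avoid log(0)

    # Model cost: two parameters (mean, variance) per unit, at fixed precision.
    model_bits = 2 * d * param_precision_bits

    # Data cost: Gaussian negative log-likelihood, converted from nats to bits.
    nll_nats = 0.5 * np.sum(np.log(2 * np.pi * var) + (activations - mu) ** 2 / var)
    data_bits = nll_nats / np.log(2)

    return model_bits + data_bits

# Example: low-variance activations compress better than diffuse ones.
rng = np.random.default_rng(0)
h_structured = rng.normal(0.0, 0.1, size=(256, 64))
h_diffuse = rng.normal(0.0, 3.0, size=(256, 64))
print(gaussian_code_length_bits(h_structured))
print(gaussian_code_length_bits(h_diffuse))
```
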
“…We introduce higher-order simplicial structure as a new summary statistic, and discover that these networks contain an abundance of cliques of single-cell profiles bound into cavities that guide the emergence of more complicated habitation forms [15,19]. To further disentangle information flow, we develop a mathematical filtration technique to compute nerve balls in a dual metric of space and time [20] and an information-theoretic measure among network modules [16,18]. This work aims to solve the following two analytical problems.…”
Section: Introduction
confidence: 99%
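
The clique-based summary statistic mentioned in this statement can be illustrated with a generic clique census over a thresholded correlation graph. The sketch below is a stand-in built with networkx, not the authors' filtration pipeline; the data, the 0.05 threshold, and the graph size are invented.

```python
from collections import Counter
import networkx as nx
import numpy as np

# Hypothetical correlation matrix over 6 "single-cell profiles".
rng = np.random.default_rng(1)
x = rng.normal(size=(100, 6))
corr = np.corrcoef(x, rowvar=False)

# Build an undirected graph by thresholding correlations (threshold is arbitrary).
g = nx.Graph()
g.add_nodes_from(range(6))
for i in range(6):
    for j in range(i + 1, 6):
        if abs(corr[i, j]) > 0.05:
            g.add_edge(i, j)

# Census of cliques by size; the (k+1)-cliques are the k-simplices of the
# clique complex, i.e. the higher-order structure referred to above.
sizes = Counter(len(c) for c in nx.enumerate_all_cliques(g))
print(dict(sizes))
```
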
“…In the neural network setting, where the optimization process is performed in batches (on incremental data samples, with i denoting batch i), the model selection process is formulated as a partially observable problem (as in Figure 3 and [5]). The generative model is the function g_θ, parameterized by θ, that maps from x_0 (the input layer) to h_k (the activations of layer k) at training time i.…”
Section: Neural Network as Model Selection
confidence: 99%
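
The formulation in this statement can be summarized in a short math sketch; the symbols g, θ, x_0, and h_k are generic notational assumptions carried over from the reconstruction above, not necessarily the paper's own.

```latex
% Sketch of the batch-wise, partially observable model selection view.
% g, theta, x_0, h_k are assumed notation, not the paper's exact symbols.
\[
  h_k^{(i)} \;=\; g_{\theta^{(i)}}\!\bigl(x_0^{(i)}\bigr), \qquad i = 1, 2, \dots
\]
% At batch i only the i-th data increment is observed, so choosing
% theta^{(i)} amounts to model selection under partial observability.
```
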
“…In this paper, we adopt a similar definition of implicit space as in [4], but extend it beyond unsupervised learning to a generic neural network optimization problem in both supervised and unsupervised settings [5]. In addition, we consider the formulation and computation of the description length differently.…”
Section: Introduction
confidence: 99%