2020
DOI: 10.48550/arxiv.2012.07976
Preprint
NeurIPS 2020 Competition: Predicting Generalization in Deep Learning

Abstract: Understanding generalization is arguably one of the most important open questions in deep learning. Deep learning has been successfully applied to a large number of problems, ranging from pattern recognition to complex decision making, but recent research has raised many concerns about it, among which the most important is generalization. Despite numerous attempts, conventional statistical learning approaches have yet to provide a satisfactory explanation of why deep le…

Cited by 16 publications (57 citation statements)
References 12 publications
“…However, these data-dependent bounds are far from universally accepted as fully explaining the good generalization behavior of overparameterized neural networks; several recent works (Dziugaite and Roy, 2017; Nagarajan and Kolter, 2019) show that they fall short empirically of explaining good generalization behavior. In fact, predicting good generalization behavior in practice was listed as a NeurIPS 2020 challenge (Jiang et al., 2020). Theoretically as well, there are several gaps that these techniques do not fill.…”
Section: The Scope of Data-Dependent Generalization Bounds
confidence: 99%
“…The PGDL competition (Jiang et al., 2020) was held at NeurIPS 2020 in an effort to encourage the discovery of empirical generalization measures, following the seminal work of Jiang et al. (2018). The winners of the PGDL competition, Natekar & Sharma (2020), investigated properties of representations in intermediate layers to predict generalization.…”
Section: Predicting Generalization in Deep Learning
confidence: 99%
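To make the idea in the quoted passage concrete, the sketch below scores a classifier by how well its intermediate-layer representations cluster by class label, using the Davies-Bouldin index with class labels as cluster assignments (tighter, better-separated class clusters are read as predicting a smaller generalization gap). This is a minimal illustration of a representation-based measure in that spirit, not the actual winning submission; the `extract_features` helper named in the usage comment is hypothetical.

```python
# Hypothetical sketch: predict generalization from the cluster quality
# of a network's intermediate representations. A lower Davies-Bouldin
# index (classes form tight, well-separated clusters) is taken here as
# a sign of better generalization. Illustrative only, not the actual
# PGDL winning solution.
import numpy as np
from sklearn.metrics import davies_bouldin_score

def representation_score(features: np.ndarray, labels: np.ndarray) -> float:
    """features: (n_samples, dim) intermediate-layer activations.
    labels:   (n_samples,) integer class labels for the same samples."""
    # Davies-Bouldin treats each class as a cluster; lower is better,
    # so negate it to get a score where higher ~ better generalization.
    return -davies_bouldin_score(features, labels)

# Usage (extract_features is a hypothetical helper that runs the model
# on training data and returns activations at a chosen layer):
# feats = extract_features(model, train_images, layer="penultimate")
# score = representation_score(feats, train_labels)
```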
“…For every task, the goal is to compute a scalar prediction for each classifier, based on its parameters and the training data, that correlates as strongly as possible with the classifier's actual generalization error measured on a test set. The correlation is scored via conditional mutual information, which is designed to indicate whether the computed predictions contain all the information about the generalization errors, so that knowing the hyper-parameters provides no additional information; see (Jiang et al., 2020) for details.…”
Section: Evaluation on the PGDL Competition
confidence: 99%
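A minimal sketch of how such a conditional mutual information score can be estimated empirically is given below, assuming the measure values and generalization gaps have already been discretized into bins. The uniform per-group weighting and the conditioning on the full hyper-parameter setting are simplifying assumptions; the official PGDL metric additionally minimizes over subsets of hyper-parameters, which this sketch omits.

```python
# Simplified empirical estimate of I(measure; gap | hyper-parameters):
# group models by their hyper-parameter setting and average the mutual
# information between the (discretized) measure and generalization gap
# within each group. A simplification of the official PGDL metric.
import numpy as np
from collections import defaultdict
from sklearn.metrics import mutual_info_score

def conditional_mutual_information(measure_bins, gap_bins, hparam_ids):
    """measure_bins: discretized complexity-measure values, one per model.
    gap_bins:     discretized generalization gaps, one per model.
    hparam_ids:   hashable id of each model's hyper-parameter setting."""
    measure_bins = np.asarray(measure_bins)
    gap_bins = np.asarray(gap_bins)

    # Group model indices by shared hyper-parameter setting.
    groups = defaultdict(list)
    for i, h in enumerate(hparam_ids):
        groups[h].append(i)

    # CMI = sum_h p(h) * I(measure; gap | H = h), estimated per group.
    n = len(hparam_ids)
    cmi = 0.0
    for idx in groups.values():
        idx = np.asarray(idx)
        cmi += (len(idx) / n) * mutual_info_score(measure_bins[idx],
                                                  gap_bins[idx])
    return cmi
```

A high score under this estimator means the measure still carries information about the generalization gap even after the hyper-parameters are fixed, which is exactly the property the quoted evaluation is designed to reward.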