2022
DOI: 10.48550/arxiv.2201.05610
Preprint

When less is more: Simplifying inputs aids neural network understanding

Cited by 3 publications (2 citation statements)
References 0 publications
“…Devising strategies for data pruning and constructing optimal subsets is a recent topic of interest in the area of optimization and active learning (Dong et al., 2019; Kaushal et al., 2019; Saadatfar et al., 2020; Durga et al., 2021; Kothawade et al., 2021; Killamsetty et al., 2021; Paul et al., 2021; Kothyari et al., 2021; Ahia et al., 2021). A few studies have examined the training landscape for clues about optimal subset creation (Toneva et al., 2018; Agarwal et al., 2020; Baldock et al., 2021; Paul et al., 2021; Schirrmeister et al., 2022). The work on coresets (Tolochinsky & Feldman, 2018; Huang et al., 2021; Jiang et al., 2021; Jubran et al., 2021; Mirzasoleiman et al., 2020) is also actively researched in the regime of optimization and active learning.…”
Section: Related Work
confidence: 99%
“…(Vicol et al., 2022) studies the implicit bias of warm-start versus cold-start bilevel optimization under meta-model matching dataset distillation approaches. (Schirrmeister et al., 2022) shows that dataset distillation methods can be regularized towards a simpler dataset using a pre-trained generative model. (Maalouf et al., 2023) provides theoretical support for the existence of small distilled datasets in the context of kernel ridge regression models.…”
Section: Introduction
confidence: 99%
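
For context, the regularization attributed to Schirrmeister et al. (2022) in the statement above can be illustrated with a minimal sketch (not the authors' implementation): synthetic/distilled images are optimized to keep the task loss low while a pre-trained generative model assigns them high likelihood, i.e. few "bits". The gen_model.nll interface and all names below are assumptions made for illustration only.

    import torch

    def distill_step(synthetic_x, synthetic_y, classifier, gen_model, opt, lam=0.1):
        # One update of the distilled images: synthetic_x is a leaf tensor with
        # requires_grad=True, and opt optimizes it, e.g. torch.optim.Adam([synthetic_x]).
        opt.zero_grad()
        # Task term: the synthetic data should still fit the classifier well.
        task_loss = torch.nn.functional.cross_entropy(classifier(synthetic_x), synthetic_y)
        # Simplicity term: negative log-likelihood under a pre-trained generative model
        # (hypothetical gen_model.nll); lower NLL means simpler, more "natural" images.
        simplicity = gen_model.nll(synthetic_x).mean()
        loss = task_loss + lam * simplicity
        loss.backward()
        opt.step()
        return loss.item()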