2021
DOI: 10.48550/arxiv.2103.00243
Preprint
Searching for Robustness: Loss Learning for Noisy Classification Tasks

Abstract: We present a "learning to learn" approach for automatically constructing white-box classification loss functions that are robust to label noise in the training data. We parameterize a flexible family of loss functions using Taylor polynomials, and apply evolutionary strategies to search for noise-robust losses in this space. To learn re-usable loss functions that can apply to new tasks, our fitness function scores their performance in aggregate across a range of training dataset and architecture combinations. …
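As a rough illustration of the Taylor-polynomial loss parameterization described in the abstract, here is a minimal Python sketch. The expansion point, polynomial degree, and coefficient values are assumptions for illustration, not the paper's exact design:

```python
import numpy as np

def taylor_loss(p_true, coeffs):
    """Loss expressed as a truncated Taylor polynomial in the model's
    predicted probability of the true class. The coefficients are the
    parameters an evolutionary search would tune."""
    # Assumed expansion around p = 1 (a perfect prediction), so the
    # loss is a polynomial in (1 - p); degree = len(coeffs).
    residual = 1.0 - p_true
    return sum(c * residual ** (k + 1) for k, c in enumerate(coeffs))

# With coeffs = [1.0] this reduces to 1 - p, proportional to the mean
# absolute error on probabilities, a known noise-robust loss; higher-
# degree terms let the search move toward cross-entropy-like shapes.
probs = np.array([0.9, 0.6, 0.3])  # predicted probability of the true class
print(taylor_loss(probs, [1.0, 0.5, 0.25]))
```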

Cited by 3 publications (3 citation statements) | References 15 publications
“…A promising alternative paradigm is to use evolution-based methods to learn M, favoring their inherent ability to avoid local optima by maintaining a population of solutions, their ease of parallelization across multiple processors, and their ability to directly optimize non-differentiable functions. Examples of such work include [19] and [23], which both represent M as parameterized Taylor polynomials optimized with covariance matrix adaptation evolutionary strategies (CMA-ES). These approaches generate interpretable loss functions; however, they also assume the parametric form via the degree of the polynomial.…”
Section: Evolution-based Approaches
confidence: 99%
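The search loop referenced in this statement can be sketched in a few lines. The following is a simplified isotropic evolutionary strategy standing in for full CMA-ES (which additionally adapts a covariance matrix over the search distribution), and the fitness function is a synthetic surrogate; in [19] and [23], fitness comes from training and evaluating models with the candidate loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(coeffs):
    """Surrogate fitness for illustration only. In the cited works,
    this would train a model with the candidate Taylor-polynomial
    loss and return its validation performance across tasks."""
    target = np.array([1.0, 0.5, 0.25])  # pretend optimum
    return -np.sum((coeffs - target) ** 2)

# Simplified (mu, lambda) evolutionary strategy over the polynomial
# coefficients; CMA-ES would also adapt the sampling covariance.
mean, sigma, lam, mu = np.zeros(3), 0.5, 16, 4
for generation in range(50):
    pop = mean + sigma * rng.standard_normal((lam, mean.size))
    scores = np.array([fitness(x) for x in pop])
    elite = pop[np.argsort(scores)[-mu:]]  # keep the mu best samples
    mean = elite.mean(axis=0)              # recentre the search
print("learned loss coefficients:", mean)
```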
“…The weight of each individual classifier used at each step of the presented model can be considered as well [28]. Finally, both the robustness of the TWDBDL model against noise (noise classifications [47, 20]) and the impact of various fusion models [5, 39] can be studied.…”
Section: Comparison With Existing Prediction Models
confidence: 99%
“…Depending on whether noisy instances are detected during training, existing LNL methods can be roughly divided into two types. One is to directly train a noise-robust model in the presence of noisy labels (Patrini et al. 2017; Wang et al. 2019; Ma et al. 2020; Lyu and Tsang 2019; Zhou et al. 2021; Gao, Gouk, and Hospedales 2021). The other is to explicitly detect the potential noisy instances, and then learn a model either by simply excluding them (Huang et al. 2019) or by re-using the potentially noisy data through estimating their pseudo-labels (Zhang et al. 2018b; Li, Socher, and Hoi 2019; Li, Xiong, and Hoi 2021; Ortego et al. 2021).…”
Section: Introduction
confidence: 99%