2020
DOI: 10.48550/arxiv.2006.07805
Preprint
Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning

Abstract: The transition matrix, denoting the transition relationship from clean labels to noisy labels, is essential for building statistically consistent classifiers in label-noise learning. Existing methods for estimating the transition matrix rely heavily on estimating the noisy class posterior. However, the estimation error for the noisy class posterior could be large due to the randomness of label noise, and this error in turn causes the transition matrix to be poorly estimated. Therefore, in this paper, we aim to solve…
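The abstract's central object, the transition matrix T with entries T[i, j] = P(noisy label = j | clean label = i), links the clean and noisy class posteriors via P(noisy | x) = Tᵀ P(clean | x). A minimal numpy sketch of that relationship (the matrix and posterior values are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical 3-class transition matrix: T[i, j] = P(noisy = j | clean = i).
# Rows sum to 1; here 20% symmetric label noise.
T = np.array([
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
])

def noisy_posterior(clean_posterior: np.ndarray) -> np.ndarray:
    """P(noisy | x) = T^T P(clean | x)."""
    return T.T @ clean_posterior

clean = np.array([0.9, 0.05, 0.05])   # a model's clean-class posterior for one input
noisy = noisy_posterior(clean)        # noisy-class posterior implied by T
```

Statistically consistent ("forward-correction") methods train against this implied noisy posterior, which is why an accurate estimate of T matters.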

Cited by 11 publications (12 citation statements) | References 18 publications (36 reference statements)
“…T-Revision [44] proposes a method to fine-tune the estimated transition matrix to improve the classification performance; 6). Dual T [49] improves the estimation of the transition matrix by introducing an intermediate class, and then factorizes the transition matrix into the product of two easy-to-estimate transition matrices; 7). VolMinNet [25] is an end-to-end label-noise learning method, which can learn the transition matrix and the classifier simultaneously; 8).…”
Section: Methods
confidence: 99%
“…However, the estimated transition matrix can still contain a large estimation error. One reason is that the transition matrix is hard to estimate accurately when the sample size is limited [49]. Another is that the assumptions [33, 25] used to identify the transition matrix may not hold.…”
Section: Performance With the Biased Transition Matrix
confidence: 99%
“…Existing methods for LNL can be divided into two broad categories: loss modification and noise detection. The former group includes techniques that account for the noise distribution [41,53,60]. Alternatively, the loss itself may be replaced by a more robust version, such as the mean absolute error [12], generalized cross-entropy [68], mutual information [58], or a meta-learning objective [28].…”
Section: Related Work
confidence: 99%
“…The literature has provided discussions of the identifiability of T under the mixture proportion estimation setup (Scott, 2015), and has identified a reducibility condition for inferring the inverse noise rate. Later works have developed a sequence of solutions to estimate T under a variety of assumptions, including irreducibility (Scott, 2015), anchor points (Liu & Tao, 2016; Xia et al., 2019; Yao et al., 2020a), separability (Cheng et al., 2020b), rankability (Northcutt et al., 2017; 2021), redundant labels (Liu et al., 2020), clusterability (Zhu et al., 2021c), among others (Zhang et al., 2021; Li et al., 2021).…”
Section: Introduction
confidence: 99%
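Of the assumptions listed in this excerpt, anchor points are the most widely used and admit a short sketch (illustrative setup, not any cited paper's code): if x is an anchor for class i, i.e. P(Y = i | x) = 1, then the i-th row of T equals the noisy class posterior at x, T[i, j] = P(Ỹ = j | x). In practice, one takes per class the sample maximizing the estimated noisy posterior as an approximate anchor:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 3
T_true = np.array([[0.8, 0.1, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.1, 0.1, 0.8]])

# Simulated clean posteriors for 500 samples, plus one exact anchor per class
# (posterior exactly one-hot) so the identification argument goes through.
clean_post = rng.dirichlet(alpha=[1, 1, 1], size=500)
clean_post = np.vstack([clean_post, np.eye(k)])

noisy_post = clean_post @ T_true       # P(noisy | x) = P(clean | x)^T T

# Anchor-point estimate: for each class i, take the sample maximizing
# P(noisy = i | x) as the anchor and read off that row of T.
T_hat = np.vstack([noisy_post[np.argmax(noisy_post[:, i])] for i in range(k)])
```

When no exact anchor exists, or the noisy posterior itself is poorly estimated, this row read-off inherits the error — which is the motivation for Dual T's factorization.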