2022
DOI: 10.48550/arxiv.2201.08984
Preprint
PiCO+: Contrastive Label Disambiguation for Robust Partial Label Learning

Abstract: Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity. Despite the promise, the performance of PLL often lags behind its supervised counterpart. In this work, we bridge the gap by addressing two key research challenges in PLL, representation learning and label disambiguation, in one coherent framework. Specifically, our proposed framework PiCO consists of a c…

Cited by 2 publications (2 citation statements)
References 11 publications
“…In the Key View branch, we input the key view into an Encoder and then feed its output to a Projector to obtain the feature embedding k_{i,j} ∈ R^d, where both the Encoder and the Projector are updated through momentum-based methods from the Query View branch. Inspired by MoCo [38] and PiCO [42], we maintain a large Embedding Queue to store the feature embeddings of the Key View branch together with the predicted class labels of the corresponding instances. Then, we use the current instance's ŷ_{i,j}, q_{i,j}, k_{i,j}, and the Embedding Queue from the previous iteration to perform instance-level weakly supervised contrastive learning (IWSCL).…”
Section: B. Framework Overview
confidence: 99%
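The excerpt above describes a MoCo-style design: a key branch whose parameters track the query branch via a momentum (exponential moving average) update, plus a FIFO queue that stores key-branch embeddings together with predicted labels. A minimal numpy sketch of just those two mechanics, with illustrative names and toy sizes (none of this is from the cited paper's code):

```python
import numpy as np

MOMENTUM = 0.999   # momentum coefficient for the key-branch update
QUEUE_SIZE = 4     # real implementations use thousands of entries
EMB_DIM = 8

rng = np.random.default_rng(0)

# Toy "parameters" standing in for the query/key encoder + projector.
query_params = rng.normal(size=EMB_DIM)
key_params = query_params.copy()

def momentum_update(key_p, query_p, m=MOMENTUM):
    """Key branch tracks the query branch via an exponential moving average."""
    return m * key_p + (1.0 - m) * query_p

# FIFO queue holding (embedding, predicted_label) pairs from the key branch.
queue = []

def enqueue(embedding, predicted_label):
    queue.append((embedding, predicted_label))
    if len(queue) > QUEUE_SIZE:
        queue.pop(0)  # discard the oldest entry

# Simulate a few training iterations.
for step in range(6):
    query_params += rng.normal(scale=0.01, size=EMB_DIM)  # stand-in for SGD
    key_params = momentum_update(key_params, query_params)
    k_embedding = key_params.copy()   # stand-in for the key-view embedding
    enqueue(k_embedding, predicted_label=step % 3)

print(len(queue))  # queue stays capped at QUEUE_SIZE
```

The queue then serves as the pool of negatives (and, with the stored labels, of label-aware positives) for the contrastive loss; the momentum update keeps the queued embeddings slowly-moving and therefore comparable across iterations.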
“…With the explosion of PLL research in the deep learning paradigm, consistency-regularized disambiguation methods (Wang et al. 2022b; Wu, Wang, and Zhang 2022; Wang et al. 2022c; Li et al. 2023; Xia et al. 2023) have achieved significantly better results than other solutions and have gradually become the mainstream. Such methods typically perturb samples in the feature space without changing their label semantics, and then apply various techniques in the label space or representation space to make the outputs of the different variants consistent.…”
Section: Introduction
confidence: 99%
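The consistency idea in this excerpt can be sketched in a few lines: perturb the same sample twice in a label-preserving way, then penalize disagreement between the two predicted distributions. The sketch below uses a toy linear classifier and a symmetric KL penalty purely for illustration; the classifier, perturbation scale, and loss choice are assumptions, not details from the cited papers:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x, W):
    """Toy linear classifier standing in for a deep network."""
    return softmax(W @ x)

def consistency_loss(p, q, eps=1e-12):
    """Symmetric KL divergence between two predicted distributions."""
    kl_pq = np.sum(p * np.log((p + eps) / (q + eps)))
    kl_qp = np.sum(q * np.log((q + eps) / (p + eps)))
    return 0.5 * (kl_pq + kl_qp)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))   # 3 classes, 5 input features
x = rng.normal(size=5)

# Two weak feature-space perturbations that keep the label semantics.
view_a = x + rng.normal(scale=0.05, size=5)
view_b = x + rng.normal(scale=0.05, size=5)

loss = consistency_loss(predict(view_a, W), predict(view_b, W))
print(loss >= 0.0)  # True: symmetric KL is non-negative, zero iff p == q
```

In training, this penalty is added to the (disambiguated) classification loss, pushing the network toward predictions that are stable under label-preserving perturbations.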