2022
DOI: 10.1109/tcsvt.2021.3139968

Partial Label Learning Based on Disambiguation Correction Net With Graph Representation

Cited by 9 publications (2 citation statements)
References 23 publications
“…Partial label learning (PLL), also known as superset-label learning (Dietterich 2012, 2014). […] To tackle the mentioned challenge, existing works mainly focus on disambiguation (Feng and An 2019; Nguyen and Caruana 2008; Zhang and Yu 2015; Wang, Zhang, and Li 2022; Fan et al. 2021; Xu, Lv, and Geng 2019; Zhang, Wu, and Bao 2022; Qian et al. 2023), which can be broadly divided into two categories: averaging-based approaches and identification-based approaches. For the averaging-based approaches (Hüllermeier and Beringer 2005; Cour, Sapp, and Taskar 2011; Zhang and Yu 2015), each candidate label of a training sample is treated equally as the ground-truth one, and the final prediction is yielded by averaging the modeling outputs.…”
Section: Related Work
confidence: 99%
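The averaging-based strategy quoted above can be illustrated with a minimal sketch. This is not the cited paper's method, only a generic linear softmax classifier trained so that every candidate label of a sample receives equal weight as the ground truth, with the final prediction taken from the averaged model outputs; the function names, toy data, and hyperparameters below are hypothetical.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_averaging_pll(X, candidate_masks, n_classes, lr=0.1, epochs=200):
    """Averaging-style PLL sketch: each candidate label is treated equally,
    i.e. the cross-entropy target is uniform over the candidate set."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    # Uniform target distribution over each sample's candidate labels.
    targets = candidate_masks / candidate_masks.sum(axis=1, keepdims=True)
    for _ in range(epochs):
        P = softmax(X @ W)                 # model outputs, shape (n, n_classes)
        grad = X.T @ (P - targets) / n     # gradient of the averaged cross-entropy
        W -= lr * grad
    return W

def predict(W, X):
    # Final prediction: class with the highest averaged model output.
    return softmax(X @ W).argmax(axis=1)

# Toy usage: 4 samples, 3 classes; candidate_masks[i, y] = 1 if y is a candidate of x_i.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
candidate_masks = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]], dtype=float)
W = train_averaging_pll(X, candidate_masks, n_classes=3)
print(predict(W, X))
```

Identification-based approaches, by contrast, would try to progressively single out the ground-truth label from the candidate set rather than weighting all candidates equally.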
“…The video question answering task associates high-dimensional samples (videos and questions) with low-dimensional labels (answers). Supervised strategies give rise to the possibility that samples are associated with more than one label, or are improperly labelled [32,33]. Unsupervised learning therefore has an opportunity to shine, with the capability to reduce the sample dimension or identify clusters.…”
Section: Supporting Work
confidence: 99%