2020
DOI: 10.1007/978-3-030-58610-2_26
Multi-task Curriculum Framework for Open-Set Semi-supervised Learning

Cited by 64 publications (79 citation statements)
References 10 publications
“…Their superior results also imply the efficacy of SSL methods on UDA tasks. We note that some works [33,52] presented that SSL methods performed poorly when there was a class distribution shift across labeled and unlabeled data; in contrast, we find that SSL methods perform well when there is a marginal distribution shift (i.e., under the covariate shift assumption 1), facilitating a deeper understanding of their application fields.…”
Section: Strong UDA Baselines With SSL Methods
confidence: 53%
“…The combination of curriculum learning and semi-supervised learning has become popular in recent years [37][38][39]. For the multi-model image classification task, [37] optimized the learning process of unlabeled images by judging their reliability and discriminability.…”
Section: Related Work
confidence: 99%
“…In [38], the easy image-level properties are learned first and then used to facilitate segmentation via constrained CNNs. Curriculum learning is also used to alleviate out-of-distribution problems by picking in-distribution samples from the unlabeled data according to their out-of-distribution scores [39].…”
Section: Related Work
confidence: 99%
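The curriculum idea in [39] can be caricatured as score-based filtering: rank unlabeled samples by an out-of-distribution (OOD) score and keep only the lowest-scoring fraction as in-distribution data for semi-supervised training. The sketch below is a minimal illustration of that selection step under assumed names (`select_in_distribution`, `keep_ratio` are hypothetical), not the paper's exact procedure:

```python
def select_in_distribution(ood_scores, keep_ratio=0.5):
    """Keep the `keep_ratio` fraction of unlabeled samples with the
    lowest OOD scores; return their indices in ascending order.
    Hypothetical sketch of curriculum-style OOD filtering."""
    n_keep = max(1, int(len(ood_scores) * keep_ratio))
    # rank sample indices by their OOD score, lowest first
    ranked = sorted(range(len(ood_scores)), key=lambda i: ood_scores[i])
    return sorted(ranked[:n_keep])

# Six unlabeled samples with OOD scores in [0, 1]; half are kept.
scores = [0.05, 0.9, 0.2, 0.8, 0.1, 0.95]
kept = select_in_distribution(scores, keep_ratio=0.5)  # → [0, 2, 4]
```

In a full curriculum, `keep_ratio` (or the score threshold) would be updated over training as the OOD scorer improves, gradually admitting harder samples.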
“…Ideally, a model should classify samples of known categories, i.e., inliers, into the correct classes while identifying samples of novel categories as outliers. This task is called Open-set Semi-supervised Learning (OSSL) [44]. While OSSL is a more realistic and practical scenario than standard SSL, it has not been as widely explored.…”
Section: Introduction
confidence: 99%