Learning to teach and learn for semi-supervised few-shot image classification
2021
DOI: 10.1016/j.cviu.2021.103270

Cited by 71 publications (107 citation statements)
References 15 publications

“…Some methods transform the existing training set data and assign pseudo-labels to the transformed examples, thereby adding new data to the training set [7,86,145]. Another line of strategies [45,123,276] assigns high-confidence pseudo-labels to unlabeled data to turn it into training data. Similar data sets can also be used as a source for generating additional data for the few-shot training set [70,216].…”
Section: Few-shot Learning (mentioning)
confidence: 99%
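The high-confidence pseudo-labeling strategy quoted above can be sketched in a few lines. The snippet below is an illustrative assumption rather than any cited method's exact procedure: it scores unlabeled embeddings against nearest-class-mean prototypes and adds only confidently predicted examples to the training set. The embedding inputs, the softmax-over-distances classifier, the 0.9 threshold, and the function name `pseudo_label_expand` are all placeholders.

```python
# Minimal sketch (assumed, not the cited methods' exact procedure) of
# high-confidence pseudo-labeling to expand a few-shot training set.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def pseudo_label_expand(support_x, support_y, unlabeled_x, threshold=0.9):
    """Augment (support_x, support_y) with confidently pseudo-labeled examples.

    support_x:   (L, d) labeled embeddings, support_y: (L,) integer labels
    unlabeled_x: (U, d) unlabeled embeddings
    """
    classes = np.unique(support_y)
    # Nearest-class-mean "prototypes" stand in for whatever classifier a method uses.
    prototypes = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    # Negative squared Euclidean distance to each prototype acts as the logit.
    dists = ((unlabeled_x[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    probs = softmax(-dists)
    conf, pred = probs.max(axis=1), classes[probs.argmax(axis=1)]
    keep = conf >= threshold                      # only high-confidence predictions
    new_x = np.concatenate([support_x, unlabeled_x[keep]], axis=0)
    new_y = np.concatenate([support_y, pred[keep]], axis=0)
    return new_x, new_y
```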
“…In particular, a number of methods have been proposed to leverage unlabeled data beyond the N examples from the C classes to enhance model accuracy. They either use the unlabeled data during training (semi-supervised learning) or classify a set of query data points jointly (transductive learning) [17,12,35,9,27,18]. In this work, we focus on the traditional FSL setting, where no additional information or unlabeled data are available and the prediction for a query data point is made independently of (without knowledge of) any other query data points.…”
Section: Few-shot Learning Problems (mentioning)
confidence: 99%
“…Each individual model in our ensemble employs a relation network for classification, while the relation networks in different models receive different representations as input. Many recent FSL studies have proposed approaches that utilize extra unlabeled data or other additional information [17,12,35,9,27,18]. These are outside the scope of the problem we consider and are therefore not compared in the results section.…”
Section: Related Work (mentioning)
confidence: 99%
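As a rough illustration of the ensemble described in this statement, the sketch below assumes a PyTorch setup in which each member applies a small relation module to a different representation of the same episode and the relation scores are averaged. The layer sizes, the sigmoid scoring head, the averaging rule, and the class names are assumptions, not the cited paper's architecture.

```python
# Assumed sketch: an ensemble where each member runs a small relation module
# on a different representation and the per-member relation scores are averaged.
import torch
import torch.nn as nn

class RelationModule(nn.Module):
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),   # relation score in [0, 1]
        )

    def forward(self, queries, prototypes):
        # queries: (Q, d), prototypes: (N, d) -> relation scores: (Q, N)
        Q, N = queries.size(0), prototypes.size(0)
        pairs = torch.cat(
            [queries.unsqueeze(1).expand(Q, N, -1),
             prototypes.unsqueeze(0).expand(Q, N, -1)], dim=-1)
        return self.net(pairs).squeeze(-1)

class RelationEnsemble(nn.Module):
    """Each member scores a different representation of the same episode."""
    def __init__(self, feat_dims):
        super().__init__()
        self.members = nn.ModuleList(RelationModule(d) for d in feat_dims)

    def forward(self, query_reprs, prototype_reprs):
        # query_reprs / prototype_reprs: lists of tensors, one per representation
        scores = [m(q, p) for m, q, p in zip(self.members, query_reprs, prototype_reprs)]
        return torch.stack(scores).mean(dim=0)    # average the members' scores

# Example: two representations (dims 64 and 128) for a 5-way episode with 75 queries.
# ensemble = RelationEnsemble([64, 128])
# scores = ensemble([torch.randn(75, 64), torch.randn(75, 128)],
#                   [torch.randn(5, 64), torch.randn(5, 128)])   # -> (75, 5)
```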
“…Tasks: We consider N-way, K-shot classification tasks with N = 5 randomly sampled novel classes and K ∈ {1, 5} examples drawn at random per class as support set S, that is, L = 5K examples in total. For the query set Q, we draw 15 additional examples per class, that is, 75 examples in total, which is the most common choice [39,35,68].…”
Section: Setup (mentioning)
confidence: 99%
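For concreteness, the sketch below shows one plausible way to sample the N-way, K-shot episodes described above: 5 randomly chosen novel classes, K support examples per class, and 15 query examples per class. The dictionary-based dataset layout and the function name `sample_episode` are hypothetical.

```python
# Assumed sketch of episode (task) sampling for N-way, K-shot few-shot evaluation.
import random

def sample_episode(class_to_examples, n_way=5, k_shot=1, n_query=15, rng=random):
    """Return (support, query) lists of (example, class_index) pairs."""
    classes = rng.sample(list(class_to_examples), n_way)          # N novel classes
    support, query = [], []
    for label, cls in enumerate(classes):
        examples = rng.sample(class_to_examples[cls], k_shot + n_query)
        support += [(x, label) for x in examples[:k_shot]]        # L = N * K examples
        query += [(x, label) for x in examples[k_shot:]]          # 15 per class
    return support, query

# Example: a 5-way 1-shot episode yields 5 support and 75 query examples.
# data = {"class_a": [...], "class_b": [...], ...}   # hypothetical layout
# support, query = sample_episode(data, n_way=5, k_shot=1)
```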