“…Instead, they train a typical classification network with two blocks: a feature extractor and a classification head. Many FSL models combine a backbone with a classification head [19,42,48,49,55], a detection head [15,56-58], or a localization head [30]. We focus on designing the inference stage and improving its performance in the transductive and semi-supervised settings.…”
Section: Related Work
“…mini-ImageNet. This dataset [73,77] is widely used in few-shot classification [65,66,71,72,75,76,80]. It contains 100 randomly chosen classes from ImageNet [74].…”
Few-shot learning (FSL) is popular due to its ability to adapt to novel classes. Compared with inductive few-shot learning, transductive models typically perform better because they leverage all samples of the query set. The two existing classes of methods, prototype-based and graph-based, suffer from inaccurate prototype estimation and sub-optimal graph construction with kernel functions, respectively. In this paper, we propose a novel prototype-based label propagation to solve these issues. Specifically, our graph construction is based on the relation between prototypes and samples rather than between samples. As the prototypes are updated, the graph changes. We also estimate the label of each prototype instead of considering a prototype to be the class centre. On the mini-ImageNet, tiered-ImageNet, CIFAR-FS and CUB datasets, we show that the proposed method outperforms other state-of-the-art methods in transductive FSL, and in semi-supervised FSL when some unlabeled data accompanies the novel few-shot task.
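The idea described above — building the affinity graph between prototypes and samples rather than between sample pairs, and refreshing both the prototypes and the graph as label estimates improve — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the iteration count, temperature `tau`, and the use of a Gaussian affinity on squared Euclidean distance are all assumptions made for the example.

```python
import numpy as np

def prototype_label_propagation(support_x, support_y, query_x,
                                n_classes, n_iters=10, tau=10.0):
    """Illustrative sketch of prototype-based label propagation.

    Affinities are computed between prototypes and samples (never
    sample-to-sample), and the prototypes -- hence the graph -- are
    re-estimated on every iteration.  All hyperparameters here are
    assumptions for the sketch, not values from the paper.
    """
    X = np.vstack([support_x, query_x])              # all episode samples
    # Initialise prototypes as per-class means of the labelled support set.
    protos = np.stack([support_x[support_y == c].mean(axis=0)
                       for c in range(n_classes)])
    for _ in range(n_iters):
        # Prototype-to-sample affinity graph (Gaussian on squared distance).
        d2 = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
        W = np.exp(-tau * d2)
        P = W / W.sum(axis=1, keepdims=True)         # soft label estimates
        # Clamp labelled support samples to their ground-truth labels.
        P[:len(support_y)] = np.eye(n_classes)[support_y]
        # Update each prototype as the assignment-weighted mean of samples,
        # so the graph changes along with the prototypes.
        protos = (P.T @ X) / P.sum(axis=0)[:, None]
    return P[len(support_y):].argmax(axis=1)         # query predictions
```

With well-separated clusters and one labelled sample per class, the unlabelled query points pull the prototypes toward the true class centres over the iterations, which is the transductive gain the abstract refers to.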