2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv51458.2022.00211
Tensor feature hallucination for few-shot learning

Cited by 19 publications
(4 citation statements)
References 27 publications
“…Few-Shot Learning and Generation. Most few-shot learning methods fall into three categories: meta-learning (Finn, Abbeel, and Levine 2017; Oreshkin, López, and Lacoste 2018; Vinyals et al. 2016), transfer learning (Yang, Wang, and Zhu 2022; Zhang et al. 2022b; Hu et al. 2022), and feature augmentation (Lazarou, Stathaki, and Avrithis 2022; Chen et al. 2018; Ye et al. 2020). These methods use textual descriptions of novel classes to generate and align images, promoting the effective use of synthetic images in training few-shot learners.…”
Section: Related Work
Confidence: 99%
“…The intrinsic signal brings together clusters of hallucinated and real examples that belong to the same class while pushing apart clusters from different classes. TFH-ft [31] argues that most hallucination methods focus on generating feature vectors that are commonly obtained by global average pooling on the output feature maps. As a result, spatial details that might be necessary for modelling the underlying data distribution are discarded.…”
Section: Related Work
Confidence: 99%
“…Hallucination Methods. Feature hallucination of examples was first introduced for visual recognition (Hariharan and Girshick, 2017), followed by meta-learning (Wang et al., 2018), variational inference (Luo et al., 2021; Lazarou et al., 2022), and adversarial learning (Tjio et al., 2022). Label Hallucination (Jian and Torresani, 2022) assigns soft pseudo-labels to unlabelled images to extend the fine-tuning few-shot dataset.…”
Section: Related Work
Confidence: 99%