2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2018.00429

Memory Matching Networks for One-Shot Image Recognition

Abstract: In this paper, we introduce the new ideas of augmenting Convolutional Neural Networks (CNNs) with Memory and learning to learn the network parameters for the unlabelled images on the fly in one-shot learning. Specifically, we present Memory Matching Networks (MM-Net), a novel deep architecture that explores the training procedure, following the philosophy that training and test conditions must match. Technically, MM-Net writes the features of a set of labelled images (support set) into memory and reads from me…
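The write/read mechanism the abstract outlines can be pictured with a short sketch. The following is a minimal example under stated assumptions, not the paper's implementation: feature vectors are assumed to be pre-computed (random numpy arrays stand in for CNN embeddings), the memory is simply the stacked support features, and the read is plain cosine-softmax attention over memory slots; all function names here are ours.

```python
import numpy as np

def write_memory(support_features):
    # MM-Net writes (contextualized) support-set embeddings into memory;
    # this sketch just stores the raw feature vectors as memory slots.
    return support_features                      # shape: (n_support, d)

def read_memory(memory, query_feature, support_labels, n_classes):
    # Matching-style read: softmax attention over memory slots, then a
    # weighted vote that sums attention mass per support label.
    q = query_feature / (np.linalg.norm(query_feature) + 1e-8)
    m = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
    sims = m @ q                                 # cosine similarities
    attn = np.exp(sims) / np.exp(sims).sum()     # softmax attention
    probs = np.zeros(n_classes)
    for w, y in zip(attn, support_labels):
        probs[y] += w
    return probs                                 # class distribution

# 5-way 1-shot episode with random vectors standing in for CNN features.
rng = np.random.default_rng(0)
support = rng.normal(size=(5, 64))               # one embedding per class
labels = np.arange(5)
memory = write_memory(support)
query = support[2] + 0.1 * rng.normal(size=64)   # noisy view of class 2
print(read_memory(memory, query, labels, n_classes=5).argmax())  # -> 2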

Cited by 243 publications (119 citation statements)
References 18 publications (20 reference statements)

“…Briefly, optimization-based methods are usually associated with the concept of meta-learning/learning to learn [7,24], e.g., learning a meta-optimizer [18] or adopting careful optimization strategies [6,28,29], to update the model better and faster for new tasks. Memory-based methods generally introduce memory components to accumulate experience when learning old tasks and generalize it when performing new tasks [3,16,19]. Our experimental results show that our method outperforms them without needing to update the model for new tasks or to introduce a complicated memory structure.…”
Section: Related Work
confidence: 92%
“…are optimization-based, and the fourth method (MMNets) is memory-based. [Table footnotes: 2. Trained with 30-way, 15 queries per episode. 3. Our reimplementation of RN [23].]…”
Section: 1-Shot 5-Shot
confidence: 99%
“…A prototypical network [22] learns a prototype for each category, so that examples cluster discriminatively around the prototype of their category. Some studies [20], [25] developed memory modules that store useful information from training examples and exploit these memories at test time. In contrast, [21] establishes a relationship between activations and weights.…”
Section: Related Work
confidence: 99%
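As a concrete companion to the prototypical-network description quoted above, here is a minimal sketch under our own assumptions (numpy features standing in for learned embeddings; the function names are hypothetical): a class prototype is the mean of its support embeddings, and a query is assigned to the nearest prototype.

```python
import numpy as np

def class_prototypes(support_features, support_labels, n_classes):
    # Prototype for class c = mean of its support embeddings.
    return np.stack([support_features[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def nearest_prototype(query_feature, protos):
    # Assign the query to the class whose prototype is closest
    # (Euclidean distance, as in prototypical networks).
    return int(np.linalg.norm(protos - query_feature, axis=1).argmin())
```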
“…Furthermore, unlike MAML, we do not limit the length of the item-consumption history. We extend the idea of the matching network, which is one of the most famous meta-learning algorithms and shows good performance even when the length of the support set (i.e., length of the item-consumption history) is not fixed [2].…”
Section: Meta-learned User Preference Estimator
confidence: 99%