2019
DOI: 10.48550/arxiv.1909.13032
Preprint
Meta R-CNN: Towards General Solver for Instance-level Few-shot Learning

Xiaopeng Yan,
Ziliang Chen,
Anni Xu
et al.

Cited by 1 publication (5 citation statements)
References 0 publications

“…(1) LSTD [58] proposes a regularization method based on transfer-knowledge and background-depression regularizations to enhance the fine-tuning effect; (2) Meta YOLO [20] learns feature representations through a reweighting module that reassigns feature weights; (3) MetaDet [59] solves few-shot classification and localization simultaneously through a weight-prediction meta-model; (4) CME [21] balances the novel-class margins through a class-margin loss and feature interference; (5) Meta R-CNN [25] obtains a class attention vector through the predictor-head remodeling network (PRN) module to remodel the RoI features; (6) Viewpoint [26] performs efficient feature-similarity computation through feature subtraction; (7) DCNet [27] introduces adaptive context awareness into the feature aggregation module to gain better global and local features; (8) FSCN [60] introduces a novel few-shot classification refinement mechanism to improve the final classification; (9) FsDetView+ISAM+QSAM [61] generates an individual prototype for each support sample to extract its unique characteristics; (10) CAReD [62] maximizes the inter-class distance and minimizes the intra-class distance through contrastive learning.…”
Section: Baseline Methods
confidence: 99%
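To make the quoted one-sentence description of Meta R-CNN concrete (a class attention vector produced from support features that remodels RoI features), the following is a minimal sketch of channel-wise class attention in PyTorch. It is an illustrative assumption based only on that description, not the authors' PRN implementation; the names ClassAttentionHead and remodel_roi_features are hypothetical.

# Hypothetical sketch (not the authors' code): a channel-wise class attention
# vector applied to RoI features, in the spirit of the PRN description above.
import torch
import torch.nn as nn

class ClassAttentionHead(nn.Module):
    """Maps a pooled support feature of one class to a channel attention vector."""
    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Linear(channels, channels)

    def forward(self, support_feat: torch.Tensor) -> torch.Tensor:
        # support_feat: (C, H, W) feature map of one support image for one class.
        pooled = support_feat.mean(dim=(1, 2))      # global average pool -> (C,)
        return torch.sigmoid(self.fc(pooled))       # attention vector in (0, 1)

def remodel_roi_features(roi_feats: torch.Tensor, attn: torch.Tensor) -> torch.Tensor:
    # roi_feats: (N, C, H, W) RoI features from the detector's RoI head.
    # attn: (C,) class attention vector, broadcast channel-wise over all RoIs.
    return roi_feats * attn.view(1, -1, 1, 1)

if __name__ == "__main__":
    head = ClassAttentionHead(channels=256)
    support = torch.randn(256, 7, 7)                # one support example
    rois = torch.randn(8, 256, 7, 7)                # 8 RoIs from a query image
    attended = remodel_roi_features(rois, head(support))
    print(attended.shape)                           # torch.Size([8, 256, 7, 7])

In this reading, the sigmoid-gated vector acts as a per-class channel reweighting of the query RoI features before classification and regression; the actual PRN details may differ.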
“…Two-stage object detection methods are more suitable for few-shot object detection tasks because of their characteristics, and methods based on Faster R-CNN usually perform better than one-stage methods [63]. However, the proposed BFR approach still outperforms MetaDet [59], Meta R-CNN [25], and Viewpoint [26], which are based on Faster R-CNN. Meanwhile, BFR performs better than DCNet [27], FSCN [60], CAReD [62], etc.…”
Section: VOC Dataset
confidence: 98%