2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.00744
Balanced and Hierarchical Relation Learning for One-shot Object Detection

Cited by 10 publications (15 citation statements)
References 18 publications
“…In Table 2, we compare our method with two-branch detectors including Meta R-CNN [29], Attention-RPN [46], FsDetView [39], Dense Relation Distillation with Context-aware Aggregation Network [31], CME [14], Transformation Invariant Principle [47], Meta-DETR [36], Few-Shot Object Detection with Universal Prototypes [11], Query Adaptive Few-Shot Object Detection [48], Generate Detectors [49], Meta Faster R-CNN [50], Intra-Support Attention Module and Query-Support Attention Module [51], CAReD [32] and Kernelized Few-Shot Object Detection [52], which are meta-learning-based methods, and single-branch detectors including TFA [33], MPSR [30], Semantic Relation Reasoning for Shot-Stable Few-Shot Object Detection [53], FSCE [13], Cooperating Region Proposal Networks (CoRPNs) + Hallucination [54], Singular Value Decomposition [55], Few-Shot Object Detection via Association and Discrimination [56], Decoupled Faster Region-based Convolutional Neural Network [57] and TeSNet [35], which are fine-tuning-based methods. It can be seen that our method achieves a substantial improvement over the other state-of-the-art methods.…”
Section: Results
confidence: 99%
“…By contrast, humans can easily remember and recognise a new kind of object from just a few samples. Therefore, researchers have proposed various methods [8-11] to address the few-shot problem in object detection. These methods can be roughly grouped into two categories: meta-learning-based methods and fine-tuning-based methods.…”
Section: Introduction
confidence: 99%
“…Loss Function. For the second stage, the overall loss for training our model combines the ratio-preserving losses defined in BHRL (Yang et al. 2022) for evaluating the coarse novel-class and the final classification results, respectively; G denotes the ground-truth label.…”
Section: Base-class Suppression
confidence: 99%
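The quoted passage describes the second-stage objective only in words (the equation itself did not survive extraction). Below is a minimal Python sketch of how such a two-term objective might be assembled, assuming hypothetical names (ratio_preserving_bce, second_stage_loss), equal term weights, and a plausible but unverified reading of BHRL's ratio-preserving loss as a positive/negative-rebalanced binary cross-entropy; this is not the authors' code.

    import torch
    import torch.nn.functional as F

    def ratio_preserving_bce(logits, targets, pos_neg_ratio=1.0):
        # Per-sample BCE, then rebalance so positive and negative samples
        # contribute at a fixed ratio -- one plausible reading of
        # "ratio-preserving"; the exact BHRL formulation may differ.
        per_sample = F.binary_cross_entropy_with_logits(
            logits, targets, reduction="none")
        pos = targets > 0.5
        n_pos = pos.sum().clamp(min=1).float()
        n_neg = (~pos).sum().clamp(min=1).float()
        pos_loss = per_sample[pos].sum() / n_pos
        neg_loss = per_sample[~pos].sum() / n_neg
        return (pos_neg_ratio * pos_loss + neg_loss) / (pos_neg_ratio + 1.0)

    def second_stage_loss(coarse_logits, final_logits, gt_labels):
        # Overall second-stage loss: a coarse novel-class term plus a final
        # classification term, as in the quoted description (the equal
        # weighting is an assumption).
        return (ratio_preserving_bce(coarse_logits, gt_labels)
                + ratio_preserving_bce(final_logits, gt_labels))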
“…In recent years, the field of few-shot object detection has thrived (Fan et al. 2020a; Han et al. 2022), in which prevalent approaches often incorporate transfer learning, meta-learning, and metric learning to deal with the task. Most existing works in OSOD adopt the metric-learning paradigm and recognize new objects based on similarity metrics between image pairs, without finetuning (Hsieh et al. 2019; Zhang et al. 2022b; Yang et al. 2022). However, they are generally dedicated to exploring effective cross-image feature correlation to better exploit the limited information, neglecting the model's bias towards the base classes and the resulting generalization degradation on the novel classes (Fan et al. 2021; Lang et al. 2022).…”
Section: Introduction
confidence: 99%
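To make the metric-learning paradigm described above concrete, here is a minimal sketch of scoring query proposals against a one-shot support example by cosine similarity. All names, feature shapes, and the threshold are illustrative assumptions; this is not the pipeline of any of the cited papers.

    import torch
    import torch.nn.functional as F

    def match_proposals_to_support(proposal_feats, support_feat,
                                   score_thresh=0.5):
        # proposal_feats: (N, D) pooled features of N query-image proposals.
        # support_feat:   (D,)   feature of the single support (shot) image.
        # Cosine similarity scores each proposal against the support
        # prototype; proposals above the threshold are treated as matches
        # for the novel class.
        sims = F.cosine_similarity(
            proposal_feats, support_feat.unsqueeze(0), dim=1)
        return sims > score_thresh

    # Usage with random stand-ins for a backbone's RoI features:
    proposals = torch.randn(8, 256)
    shot = torch.randn(256)
    print(match_proposals_to_support(proposals, shot))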
“…With only a few images per new object, this approach can detect unseen objects [9,10]. Indeed, the literature [11-15] lays the foundations of the LSOD task. Unlike conventional object detectors, this task concerns the classification and detection of objects from very limited information.…”
Section: Introduction
confidence: 99%