2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020
DOI: 10.1109/cvpr42600.2020.01060
Learning From Noisy Anchors for One-Stage Object Detection

Cited by 93 publications (35 citation statements) · References 28 publications
“…It happens quite often that people annotate inaccurate bboxes in images/videos as the ground truth, even with computer-assisted annotation tools. However, there is little work on the robustness of localization losses to noisy bboxes, even though a number of methods have been proposed for robust learning with noisy labels, anchors, and bboxes [27,26,12,10,15,37,38,25,19,20]. Here, we fill this gap by conducting a set of experiments to evaluate the robustness of different localization losses to noisy bboxes.…”
Section: Robustness to Noisy Bounding Boxes
confidence: 99%
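The robustness experiments described above can be mimicked with a minimal sketch: jitter ground-truth boxes with noise proportional to box size, then observe how different localization losses react to the corrupted labels. The function names and the 10% noise level are illustrative assumptions, not the cited paper's exact protocol.

```python
import numpy as np

def jitter_boxes(boxes, noise_level, rng):
    # Perturb (x1, y1, x2, y2) boxes with Gaussian noise scaled by
    # each box's width and height (an assumed noise model).
    boxes = np.asarray(boxes, dtype=np.float64)
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]
    scale = np.stack([w, h, w, h], axis=1) * noise_level
    return boxes + rng.normal(0.0, 1.0, boxes.shape) * scale

def smooth_l1(pred, target, beta=1.0):
    # Standard smooth-L1 regression loss, summed over box coordinates.
    d = np.abs(pred - target)
    return np.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta).sum(axis=1)

def iou(pred, target):
    # Pairwise IoU between aligned rows of two (N, 4) box arrays.
    ix1 = np.maximum(pred[:, 0], target[:, 0])
    iy1 = np.maximum(pred[:, 1], target[:, 1])
    ix2 = np.minimum(pred[:, 2], target[:, 2])
    iy2 = np.minimum(pred[:, 3], target[:, 3])
    inter = np.clip(ix2 - ix1, 0, None) * np.clip(iy2 - iy1, 0, None)
    area = lambda b: (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    union = area(pred) + area(target) - inter
    return inter / np.clip(union, 1e-9, None)

rng = np.random.default_rng(0)
gt = np.array([[10., 10., 50., 60.]])
noisy_gt = jitter_boxes(gt, noise_level=0.1, rng=rng)
pred = gt  # a perfect prediction against the *clean* box
print(smooth_l1(pred, noisy_gt))  # regression loss inflated by label noise
print(1.0 - iou(pred, noisy_gt))  # IoU-based loss inflated by label noise
```

Comparing how steeply each loss grows with the noise level gives a rough read on its robustness to annotation error.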
“…Also based on ATSS, Kim et al. [30] propose PAA, a new anchor assignment strategy that extends ideas such as selecting positive samples based on a detection-specific likelihood [43], the statistics of anchor IoUs [28], or the cleanliness score of anchors [44,45]. The assignment may admit a flexible number of positives (or negatives), chosen not only by IoU but also by how probable the model judges the anchor to be for the target object (which may not be the highest-IoU anchor) when assigning it as a positive sample.…”
Section: Object Detection Methods
confidence: 99%
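The idea of letting the score distribution decide how many anchors become positives can be sketched as follows. This is a simplified stand-in: PAA fits a two-component Gaussian mixture to the anchor scores, whereas here a mean-plus-deviation threshold (an assumed simplification) adapts the positive set to the pool.

```python
import numpy as np

def assign_positives(anchor_scores, k=1.0):
    # Adaptive positive assignment: anchors scoring above
    # mean + k * std of the candidate pool become positives,
    # so the number of positives varies with the distribution
    # rather than a fixed IoU cutoff.
    scores = np.asarray(anchor_scores, dtype=np.float64)
    thresh = scores.mean() + k * scores.std()
    return np.flatnonzero(scores >= thresh)

# Two clearly confident anchors separate from the low-scoring pool.
print(assign_positives([0.1, 0.15, 0.2, 0.8, 0.9]))
```

With a bimodal score pool the threshold lands between the modes, which is the behaviour the GMM-based assignment formalizes.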
“…On MS-COCO test-dev, CentripetalNet not only beats all existing anchor-free detectors with 48.00% AP, but also achieves performance equivalent to the latest instance segmentation methods with 40.21% mask AP. Li et al. [12] assign a cleanliness score to each anchor, adaptively adjusting the importance of each anchor during training to reduce the impact of noisy samples on classification and localization.…”
Section: Related Work
confidence: 99%
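The cleanliness score mentioned above can be sketched as a convex combination of an anchor's localization accuracy and its classification confidence. The c = alpha * loc_a + (1 - alpha) * cls_c form follows the description in the snippet; alpha = 0.75 is an illustrative weight, not necessarily the paper's tuned value.

```python
import numpy as np

def cleanliness(loc_acc, cls_conf, alpha=0.75):
    # Soft "cleanliness" of an anchor: a convex combination of its
    # localization accuracy (IoU of the regressed box with the ground
    # truth) and its classification confidence. alpha = 0.75 is an
    # assumed weight for illustration.
    c = alpha * np.asarray(loc_acc, dtype=np.float64) \
        + (1.0 - alpha) * np.asarray(cls_conf, dtype=np.float64)
    return np.clip(c, 0.0, 1.0)

# Cleanliness can serve both as a soft classification target and as a
# per-anchor weight, down-weighting anchors whose predictions disagree
# with their assigned label.
soft_labels = cleanliness(loc_acc=[0.9, 0.3], cls_conf=[0.8, 0.2])
print(soft_labels)
```

Anchors with accurate boxes and confident predictions thus dominate training, while noisy anchors contribute less.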