2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00942
Assisted Excitation of Activations: A Learning Technique to Improve Object Detectors

Abstract: Our technique improves the mAP of YOLOv2 by 3.8% and the mAP of YOLOv3 by 2.2% on the MS-COCO dataset. This technique is inspired by curriculum learning. It is simple, effective, and applicable to most single-stage object detectors.

Cited by 18 publications (8 citation statements) · References 32 publications
“…Attention mechanism, a) considering (5–7), the attended feature map is obtained by applying the excitation factor ( α ) at a decreasing rate. The overall structure is similar to the work presented by Derakhshani et al. [47], except for the W f and the adaptive excitation factor, b) adaptive excitation factor profile. Using this technique, the excitation factor is reduced when the metric does not improve on validation.…”
Section: Methods
confidence: 81%
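The excitation mechanism described above can be sketched roughly as follows. This is a minimal illustration, not the cited authors' implementation: the function name, the channel-averaging choice, and the binary ground-truth mask are assumptions based on the assisted-excitation formulation, where an excitation term derived from ground-truth object regions is added to the feature map, scaled by the factor α.

```python
import numpy as np

def assisted_excitation(activations, gt_mask, alpha):
    """Hypothetical sketch of an assisted-excitation step.

    activations: feature map of shape (C, H, W).
    gt_mask:     binary (H, W) mask marking ground-truth object regions.
    alpha:       excitation factor in [0, 1]; decayed over training.
    """
    # Average the activations across channels, keeping only the
    # responses that fall inside ground-truth object regions.
    excitation = gt_mask * activations.mean(axis=0)  # shape (H, W)
    # Add the excitation back to every channel, scaled by alpha;
    # with alpha == 0 the feature map is left unchanged.
    return activations + alpha * excitation[None, :, :]
```

As α is driven toward zero during training, the assistance vanishes and the detector must rely on its own learned activations, which is the curriculum-learning intuition behind the technique.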
“…2.a represents a schematic of the AE module. We proposed an adaptive version of the excitation factor instead of a fixed monotonic function [47]. Rather than employing a smooth, monotonically decreasing function (8) to alter the attention coefficient, we degraded the parameter stepwise, after a few epochs with no improvement in validation accuracy.…”
Section: Methods
confidence: 99%
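The stepwise schedule this citation describes can be sketched as a plateau-based scheduler for α. This is an assumed illustration, not the cited paper's code: the class name, the `patience` and `factor` parameters, and the multiplicative decay are all hypothetical choices that match the described behavior of cutting α only after several epochs without validation improvement.

```python
class AdaptiveExcitationFactor:
    """Hypothetical stepwise schedule for the excitation factor alpha:
    alpha is reduced by `factor` only after `patience` consecutive
    epochs with no improvement in the validation metric."""

    def __init__(self, alpha=1.0, factor=0.5, patience=3, min_alpha=0.0):
        self.alpha = alpha
        self.factor = factor
        self.patience = patience
        self.min_alpha = min_alpha
        self.best = -float("inf")
        self.bad_epochs = 0

    def step(self, val_metric):
        """Call once per epoch with the validation metric; returns alpha."""
        if val_metric > self.best:
            # Metric improved: remember it and reset the stall counter.
            self.best = val_metric
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                # Plateau detected: degrade alpha stepwise.
                self.alpha = max(self.alpha * self.factor, self.min_alpha)
                self.bad_epochs = 0
        return self.alpha
```

In contrast to a smooth monotonic decay fixed in advance, this schedule keeps the assistance strong for as long as it is still helping the validation metric.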
“…First, our proposal offers a compact model with fewer layers and smaller filters; most importantly, it uses a type of layer absent from the original YOLO, based on a technique called "Assisted Excitation of Activations" [32]. We also do not use pre-trained weights, as no existing model matches the characteristics of ours, so we trained our model from scratch.…”
Section: Implementation Details
confidence: 99%