2021
DOI: 10.48550/arxiv.2108.06152
Preprint
Conditional DETR for Fast Training Convergence

Cited by 13 publications (27 citation statements)
References 0 publications
“…For example, Deformable DETR (Zhu et al, 2021) directly treats 2D reference points as queries and predicts deformable sampling points for each reference point to perform the deformable cross-attention operation. Conditional DETR (Meng et al, 2021) decouples the attention formulation and generates positional queries based on reference coordinates. Efficient DETR (Yao et al, 2021) introduces a dense prediction module to select the top-K positions as object queries.…”
Section: Related Work
confidence: 99%
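The decoupling described above — Conditional DETR splitting cross-attention into a content part and a spatial (positional) part — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the function name `conditional_attention_weights` and the single-head, unscaled formulation are hypothetical, showing only the core idea that the attention logits are a sum of a content-content dot product and a position-position dot product.

```python
import numpy as np

def conditional_attention_weights(c_q, p_q, c_k, p_k):
    # Decoupled cross-attention logits: content queries attend to content
    # keys, while positional queries (derived from reference coordinates)
    # attend to positional embeddings; the two logits are summed.
    logits = c_q @ c_k.T + p_q @ p_k.T            # (n_queries, n_keys)
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=-1, keepdims=True)      # softmax over keys

# Toy shapes: 4 object queries attending over 16 encoder positions.
rng = np.random.default_rng(0)
n_q, n_k, d = 4, 16, 32
w = conditional_attention_weights(
    rng.normal(size=(n_q, d)), rng.normal(size=(n_q, d)),
    rng.normal(size=(n_k, d)), rng.normal(size=(n_k, d)),
)
print(w.shape)  # (4, 16); each row is a distribution over keys
```

Because the positional term depends only on the spatial embeddings, it can localize the extremity regions of a box independently of image content, which is the mechanism the excerpt credits for faster convergence.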
“…For example, SMCA (Gao et al, 2021) speeds up training by applying pre-defined Gaussian maps around reference points. Conditional DETR (Meng et al, 2021) uses explicit positional embeddings as positional queries for training, yielding attention maps similar to Gaussian kernels as shown in Fig. 4…”
Section: Related Work
confidence: 99%
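The "pre-defined Gaussian maps around reference points" attributed to SMCA can be sketched as follows. This is a hypothetical illustration, not SMCA's actual module: the function name `gaussian_spatial_prior`, the fixed scalar `sigma`, and the normalized `ref_xy` convention are all assumptions; SMCA additionally predicts per-head centers and scales.

```python
import numpy as np

def gaussian_spatial_prior(h, w, ref_xy, sigma=2.0):
    # 2D Gaussian weight map centred on a reference point given in
    # normalised [0, 1] coordinates; such a map can be multiplied into
    # cross-attention logits to bias attention toward the reference.
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = ref_xy[0] * (w - 1), ref_xy[1] * (h - 1)
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

prior = gaussian_spatial_prior(8, 8, ref_xy=(0.5, 0.5))
print(prior.shape)  # (8, 8), peaking near the map centre
```

This constraint narrows where each query looks from the start of training, which is the contrast the excerpt draws with Conditional DETR, whose learned positional queries end up producing similarly localized, Gaussian-like attention maps.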