2023
DOI: 10.48550/arxiv.2302.14335
Preprint

DC-Former: Diverse and Compact Transformer for Person Re-Identification

Abstract: In the person re-identification (re-ID) task, it remains challenging to learn discriminative representations with deep learning due to limited data. Generally speaking, a model performs better as the amount of training data increases, and the addition of similar classes strengthens the classifier's ability to distinguish similar identities, thereby improving the discrimination of the representation. In this paper, we propose a Diverse and Compact Transformer (DC-Former) that can achieve a similar effect by splitt…
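The truncated abstract already names the mechanism: several embedding subspaces learned inside one transformer, each kept compact by its own identity supervision. As a minimal PyTorch sketch of that idea, assuming a ViT-style encoder with K class tokens and one classifier per token (all names, layer sizes, and the use of torch.nn.TransformerEncoder are illustrative assumptions, not the authors' code):

```python
import torch
import torch.nn as nn

class MultiTokenEncoder(nn.Module):
    """Hypothetical ViT-style encoder carrying K class tokens.

    Each class token attends to the same patch sequence but feeds its
    own identity classifier, so each token spans a separate embedding
    subspace (layer sizes are placeholder assumptions).
    """

    def __init__(self, dim=768, num_subspaces=2, num_ids=751,
                 depth=12, num_heads=12):
        super().__init__()
        self.cls_tokens = nn.Parameter(torch.zeros(1, num_subspaces, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # one identity classifier per subspace ("multiple class labels")
        self.classifiers = nn.ModuleList(
            nn.Linear(dim, num_ids) for _ in range(num_subspaces))

    def forward(self, patches):               # patches: (B, N, dim)
        cls = self.cls_tokens.expand(patches.size(0), -1, -1)
        x = self.encoder(torch.cat([cls, patches], dim=1))
        sub_feats = x[:, : self.cls_tokens.size(1)]   # (B, K, dim)
        logits = [clf(sub_feats[:, k])                # per-subspace ID logits
                  for k, clf in enumerate(self.classifiers)]
        return sub_feats, logits
```

Carrying every class token through the same attention stack lets the subspaces share evidence from the patches, while their separate classifiers pull them toward different views of the same identity.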

Cited by 4 publications (4 citation statements)
References 38 publications (50 reference statements)
“…[27] develop a model called Knowledge Refresh and Consolidation (KRC), which is enhanced by a dynamic memory model, an adaptive working model that enables bidirectional knowledge transfer, and a knowledge consolidation scheme that operates on a dual space. [28] propose a Diverse and Compact Transformer (DC-Former) that splits the embedding space into multiple diverse and compact subspaces; a self-diverse constraint (SDC) is imposed on these subspaces through multiple class labels, making each embedding subspace diverse and compact. [29] establish a new benchmark framework called TransReID, in which the transformer architecture is employed.…”
Section: Short-term Person Re-identification
Citation type: mentioning; confidence: 99%
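The statement above describes the self-diverse constraint (SDC) only at a high level. One plausible reading, sketched below under the assumption that the constraint penalizes cosine similarity between the K per-image subspace embeddings (sub_feats as returned by the encoder sketch above):

```python
import torch
import torch.nn.functional as F

def self_diverse_constraint(sub_feats):
    """Assumed form of the SDC: mean absolute cosine similarity between
    the K >= 2 subspace embeddings of each image. sub_feats: (B, K, dim)."""
    z = F.normalize(sub_feats, dim=-1)            # unit-norm per subspace
    sim = torch.matmul(z, z.transpose(1, 2))      # (B, K, K) cosine matrix
    K = z.size(1)
    eye = torch.eye(K, device=z.device)
    off_diag = sim - sim * eye                    # zero out self-similarity
    return off_diag.abs().sum(dim=(1, 2)).mean() / (K * (K - 1))

# e.g. total_loss = id_loss + lambda_sdc * self_diverse_constraint(sub_feats)
```

Driving this term toward zero keeps the subspaces mutually diverse, while the per-subspace identity losses keep each one compact.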
“…We compare the performance of MPGA-Net with recent state-of-the-art person ReID methods on CUHK03-NP [31,32], Market1501 [33], DukeMTMC-reID [34], and MSMT17 [35] in Table 1, including methods based on ResNet [7,14,22,23,26,37–44], a self-constructed network [6], neural architecture search [45], and transformers [20,46–48]. Overall, our proposed MPGA-Net outperforms the state-of-the-art networks or achieves comparable performance.…”
Section: Comparison to State of the Art
Citation type: mentioning; confidence: 99%
“…Zhu et al. [35] propose embedding a learnable cls_token (CLS) with a single-head auto-alignment structure for accurate human body part representation. DC-Former [36] incorporates a Self-diverse Constraint Layer after attention, enlarging the distances between feature vectors to aid loss convergence. Moreover, PAT [37] employs self-attention and cross-attention for feature encoding and decoding, facilitating global and local human body feature extraction.…”
Section: Transformer
Citation type: mentioning; confidence: 99%
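For the "Self-diverse Constraint Layer after attention" placement attributed to DC-Former [36] above, here is a hedged sketch of one possible wiring: a transformer block that returns both its token outputs and an auxiliary diversity penalty over the leading class tokens (module structure and names are assumptions, not the paper's code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfDiverseBlock(nn.Module):
    """Illustrative attention block followed by a diversity penalty on
    the first `num_cls` (class-token) positions of its output."""

    def __init__(self, dim=768, num_heads=12, num_cls=2):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.num_cls = num_cls

    def forward(self, x):                      # x: (B, num_cls + N, dim)
        h = self.norm(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + attn_out                       # residual attention
        cls = F.normalize(x[:, : self.num_cls], dim=-1)
        sim = cls @ cls.transpose(1, 2)        # (B, K, K) cosine matrix
        eye = torch.eye(self.num_cls, device=x.device)
        penalty = (sim - eye).abs().mean()     # pushes class tokens apart
        return x, penalty
```

Summing the per-block penalties into the training loss would enlarge the distances between the class-token features, which matches the effect the citing paper describes.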