2021
DOI: 10.1016/j.neucom.2020.06.148
Discriminative feature and dictionary learning with part-aware model for vehicle re-identification

Cited by 57 publications
(14 citation statements)
References 20 publications
“…The design of existing Re-ID methods is mainly based on handcrafted features [14, 15], metric learning [16, 17, 18] and deep learning networks [5, 9, 19, 20, 21, 22, 23, 24, 25, 26, 27]. Some recent approaches learn features at the part level and achieve state-of-the-art performance in Re-ID tasks.…”
Section: Related Work
confidence: 99%
“…The field of fine-grained recognition [45, 44, 46, 47] has been widely studied. Early part-based methods used additional strong supervision, such as part annotations and bounding box annotations, to identify object categories.…”
Section: Fine-grained Recognition
confidence: 99%
“…Li et al. [7] proposed a DJDL model that uses a deep convolutional network to extract discriminative representations for vehicle images. Wang et al. [8] proposed the Triplet Center Loss based Part-aware Model (TCPM), which leverages discriminative features in vehicle part details to improve accuracy. Zhou et al. [9] learn transformations across different vehicle viewpoints with a proposed model that combines a CNN and an LSTM.…”
Section: The Status of Research
confidence: 99%