2020
DOI: 10.1016/j.neucom.2020.02.112
Cross domain knowledge learning with dual-branch adversarial network for vehicle re-identification

Abstract: The widespread popularization of vehicles has facilitated all people's life during the last decades. However, the emergence of a large number of vehicles poses the critical but challenging problem of vehicle re-identification (reID). Till now, for most vehicle reID algorithms, both the training and testing processes are conducted on the same annotated datasets under supervision. However, even a well-trained model will still cause fateful performance drop due to the severe domain bias between the trained datase…

Cited by 31 publications (14 citation statements)
References 37 publications (43 reference statements)
“…According to the difference in feature aggregation structure, we can subdivide it into global feature learning methods and local feature learning methods. The global feature learning methods [2,5,21,33,35–46] usually have a spatial global pooling layer to compress the entire vehicle features. However, due to the characteristic of spatial global pooling layers, discriminative local features will inevitably be underestimated, which is detrimental to vehicle re-identification.…”
Section: A. Vehicle Re-identification (mentioning)
Confidence: 99%
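The excerpt above argues that spatial global pooling dilutes discriminative local responses. A minimal NumPy sketch (all numbers hypothetical) illustrates the effect: averaging over every spatial location means a small but strong local activation contributes only in proportion to its area.

```python
import numpy as np

def global_average_pool(feature_map):
    """Collapse an (H, W, C) conv feature map to a C-dim descriptor.

    Averaging over all spatial positions means a small discriminative
    region (e.g. a windshield sticker) is weighted only by its area
    and can be washed out in the pooled descriptor.
    """
    return feature_map.mean(axis=(0, 1))

# Toy 4x4 single-channel map: mostly background (0.1) with one
# strong local response (5.0) at a single location.
fmap = np.full((4, 4, 1), 0.1)
fmap[0, 0, 0] = 5.0
pooled = global_average_pool(fmap)
# The 5.0 peak is diluted to (15 * 0.1 + 5.0) / 16 ≈ 0.41.
```

This dilution is exactly why the quoted works contrast global pooling with local feature learning, which preserves region-level responses before aggregation.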
“…Challenges lie in re-id under adversarial conditions where vehicles need to be tracked with multi-orientation, multi-scale, multi-resolution values alongside possible occlusion and blur. The vehicle re-identification problem has seen significant work in the past few years due to advances in the general one-shot learning problem [10], [15], [16], [17], [18], [19], [20]. • Event detection: Automated event detection remains a difficult challenge due to the lack of labeled real-world or synthetic data and the absence of frameworks for video-based anomaly detection.…”
Section: A. Classic Vehicle Tracking Approaches and Research Issues (mentioning)
Confidence: 99%
“…Conversely, existing vehicle re-id datasets such as VeRi-776 [35] and VeRi-Wild [36] primarily focus on intra-class variability. Current approaches in vehicle re-id attempt to address inter-class similarity and intra-class variability in the same end-to-end model [16], [17], [18], [20]. This creates models that sacrifice performance on solving edge cases in intra-class variability to increase discriminative ability for inter-class similarity across the entire data space.…”
Section: B. Research Issues In Teamed Classifiers (mentioning)
Confidence: 99%
“…), VAMI+STR (Zhou and Shao, 2018a), MTCRO (Xu et al., 2018), QD-DLF (Zhu et al., 2019), PAMAL (Tumrani et al., 2020) and DAN+ATTNet (DAVR) (Peng et al., 2020), and then compare them with the proposed DDCL. The unsupervised DAN+ATTNet (DAVR) (Peng et al., 2020) and FACT+STR (Liu et al., 2016d), which combined traditional features and deep features, performed relatively poorly. While OIFE (Wang et al., 2017), S-CNN+P-LSTM (Shen et al., 2017), RAM (Liu et al., 2018), VAMI+STR (Zhou and Shao, 2018a), MTCRO (Xu et al., 2018), QD-DLF (Zhu et al., 2019), PAMAL (Tumrani et al., 2020) and DAN+ATTNet (DAVR) (Peng et al., 2020) use softmax loss as a supervising tool to train their models, after removing softmax loss, the experimental results of DDCL exceed those of these methods.…”
(mentioning)
Confidence: 99%
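The excerpt above notes that most of the compared methods supervise training with a softmax loss, i.e. cross-entropy over training identities. A minimal NumPy sketch (function and variable names are illustrative, not taken from any of the cited methods) of that supervision signal:

```python
import numpy as np

def softmax_id_loss(embeddings, labels, W):
    """Cross-entropy ("softmax") identity loss commonly used in re-ID.

    embeddings: (N, D) feature vectors; labels: (N,) identity indices;
    W: (D, K) linear classifier weights over K training identities.
    Real methods differ in backbone, classifier, and auxiliary losses.
    """
    logits = embeddings @ W                        # (N, K) identity scores
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Sanity check (hypothetical numbers): with zero classifier weights every
# identity is equally likely, so the loss equals log(K).
emb = np.ones((2, 3))
W = np.zeros((3, 4))          # K = 4 training identities
loss = softmax_id_loss(emb, np.array([0, 1]), W)
```

Removing this classification head, as DDCL reportedly does, forces the model to rely on other supervision (e.g. metric-style losses) rather than identity logits.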