Proceedings of the 30th ACM International Conference on Multimedia 2022
DOI: 10.1145/3503161.3547779
Grouped Adaptive Loss Weighting for Person Search

Abstract: Person search integrates multiple sub-tasks, such as foreground/background classification, bounding-box regression, and person re-identification. It is therefore a typical multi-task learning problem, especially when solved in an end-to-end manner. Recently, some works enhance person search features by exploiting various auxiliary information, e.g., person joint keypoints, body-part positions, and attributes, which introduces additional tasks and further complicates the person search model. The …
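The abstract frames person search as a combination of several task losses (classification, box regression, re-identification). The grouping strategy of the proposed method is not visible in the truncated abstract, so the snippet below is only a minimal sketch of one common adaptive weighting scheme (a learnable log-variance per task, in the style of Kendall et al.), written in PyTorch; the class and parameter names are hypothetical and do not come from the paper.

import torch
import torch.nn as nn

class AdaptiveLossWeighting(nn.Module):
    """Combine several task losses with learnable uncertainty-style weights.

    Illustrative sketch only: the paper's grouped scheme may first group
    related losses (e.g. per sub-task) before weighting them.
    """

    def __init__(self, num_tasks: int):
        super().__init__()
        # One learnable log-variance per task loss.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        # losses: sequence of scalar task losses, e.g.
        # [cls_loss, box_reg_loss, reid_loss]
        total = 0.0
        for i, loss in enumerate(losses):
            precision = torch.exp(-self.log_vars[i])
            # Weighted loss plus a regularizer that prevents the learned
            # weights from collapsing to zero.
            total = total + precision * loss + self.log_vars[i]
        return total

In such schemes the weighting module's parameters are optimized jointly with the network, so tasks whose losses are noisier or harder receive smaller effective weights during training.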

Cited by 3 publications (1 citation statement)
References 32 publications (95 reference statements)
“…Most existing person search methods (Xiao et al. 2017; Li et al. 2022; Yan et al. 2022; Kim et al. 2021; Li and Miao 2021; Han, Ko, and Sim 2021; Yan et al. 2021, 2023; Tian et al. 2022) use ImageNet pre-trained models (Deng et al. 2009), such as ResNet50 (He et al. 2016), as the initialization model for feature extraction. However, ImageNet pre-training, which learns classification-related knowledge, is limited in its applicability to downstream tasks, particularly when the target task is significantly different (He, Girshick, and Dollar 2019).…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
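As an illustration of the initialization practice this citing statement describes, the snippet below loads an ImageNet-pretrained ResNet50 from torchvision and strips its classification head to obtain a feature-extraction backbone. The input size and the way the head is removed are illustrative choices, not details taken from the cited works; torchvision versions before 0.13 use the older pretrained=True argument instead of the weights enum.

import torch
import torchvision

# Common initialization: an ImageNet-pretrained ResNet50 used as the
# feature-extraction backbone (torchvision >= 0.13 weights API).
backbone = torchvision.models.resnet50(
    weights=torchvision.models.ResNet50_Weights.IMAGENET1K_V1
)
# Drop the ImageNet classification head (avgpool + fc); keep conv features.
backbone = torch.nn.Sequential(*list(backbone.children())[:-2])

images = torch.randn(2, 3, 256, 128)   # dummy batch of person crops
features = backbone(images)            # shape: (2, 2048, 8, 4)
print(features.shape)

The cited critique is that such classification-oriented pre-training transfers imperfectly to detection- and retrieval-style targets, which motivates task-specific or search-specific pre-training in later work.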