2018
DOI: 10.1109/access.2018.2875783
Deep Multi-Task Network for Learning Person Identity and Attributes

Cited by 19 publications (6 citation statements). References 37 publications.
“…They learn pose invariant deep person re-identification features using synthesized images. A deep CNN based method to learn partial descriptive features for efficient person feature representation is presented in [40]. They employed a pyramid spatial pooling module and reported an improvement of 2.71% on the PETA dataset over [28].…”
Section: Related Work. Citation type: mentioning. Confidence: 99%