2021
DOI: 10.1109/tnnls.2020.3017939
IAUnet: Global Context-Aware Feature Learning for Person Reidentification

Cited by 40 publications (13 citation statements)
References 83 publications
“…The model can capture local contextual information by stacking multi-scale convolutions in each layer of its architecture. In [13], an integration-aggregation-update (IAU) block is proposed to improve person reID performance. It introduces a spatial-temporal IAU that combines two types of contextual information into a CNN model for target feature learning: a) spatial interactions, which capture contextual dependencies between different body parts within a single frame, and b) temporal interactions, which capture contextual dependencies between the same body part across frames.…”
Section: Context-Aware Models for Large-Scale Image Classification
confidence: 99%
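The spatial and temporal interactions described in the statement above can be illustrated with a minimal sketch. This is a hypothetical numpy implementation, not the authors' actual IAU block: it models spatial interactions as attention among part features within a frame, and temporal interactions as attention over the same part across frames, with a simple residual update standing in for the aggregation-update steps. The function name, tensor layout `(T, P, C)`, and scaled dot-product form are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_temporal_interactions(feats):
    """Hypothetical sketch of spatial-temporal interactions.

    feats: array of shape (T, P, C) -- T frames, P body-part
    features, C channels per feature.
    Returns features of the same shape, refined by spatial
    (within-frame) and temporal (across-frame) attention.
    """
    T, P, C = feats.shape

    # Spatial interactions: attention among body parts in each frame
    spatial = np.empty_like(feats)
    for t in range(T):
        x = feats[t]                                    # (P, C)
        attn = softmax(x @ x.T / np.sqrt(C), axis=-1)   # (P, P)
        spatial[t] = attn @ x

    # Temporal interactions: attention over the same part across frames
    temporal = np.empty_like(feats)
    for p in range(P):
        x = feats[:, p]                                 # (T, C)
        attn = softmax(x @ x.T / np.sqrt(C), axis=-1)   # (T, T)
        temporal[:, p] = attn @ x

    # Residual update standing in for the aggregation-update steps
    return feats + spatial + temporal
```

In the actual IAU block the two interaction maps are aggregated and fed back into the CNN feature maps; here the residual sum is only a stand-in for that step.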
“…Video person re-identification. Most existing video person re-id methods [11,19,21,22,45] focus on the clothes-consistent setting. Some research [5] demonstrates that appearance features play a more important role than motion features in this setting.…”
Section: Related Work
confidence: 99%
“…In addition, pixel-wise modeling techniques (Fu et al 2019; Hou et al 2020) may inevitably introduce more background noise into feature maps, making them unsuitable for ultrasound image processing.…”
Section: Related Work
confidence: 99%