2020
DOI: 10.1109/access.2020.3011961

A Mask-Pooling Model With Local-Level Triplet Loss for Person Re-Identification

Abstract: Person Re-Identification (ReID) is an important yet challenging task in computer vision. Background clutter is one of the greatest challenges to overcome. In this paper, we propose a mask-pooling model with local-level triplet loss (MPM-LTL) to tackle this problem and improve person ReID performance. Specifically, we present a novel pooling method, called mask pooling (MP), to gradually remove background features from feature maps through a deep convolutional network. With mask pooling, the network can learn the mo…
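The mask-pooling idea described in the abstract — suppressing background activations in a feature map before pooling — can be sketched as below. This is a minimal NumPy illustration under stated assumptions (a precomputed binary foreground mask, average pooling over foreground positions only); the function name and shapes are illustrative and not the paper's actual implementation.

```python
import numpy as np

def mask_pooling(feature_map, mask):
    """Average-pool only foreground features.

    feature_map: (C, H, W) convolutional feature map
    mask:        (H, W) binary foreground mask (1 = person, 0 = background)
    Returns a (C,) descriptor computed over foreground positions only.
    """
    masked = feature_map * mask            # zero out background activations
    fg_count = mask.sum()                  # number of foreground positions
    return masked.sum(axis=(1, 2)) / max(fg_count, 1)

# Toy example: constant feature map, 2x2 foreground region.
fmap = np.ones((2, 4, 4))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1
print(mask_pooling(fmap, mask))  # → [1. 1.]: background zeros do not dilute the average
```

Compared with plain global average pooling (which here would also include the 12 background zeros per channel), restricting the average to the masked region keeps the descriptor focused on person features, which is the motivation the abstract gives for mask pooling.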

Cited by 5 publications (3 citation statements)
References 50 publications
“…However, the triple loss is affected by the sample distribution, which leads to poor generalization ability of the network. The methods [35,36] are proposed to improve the triplet loss, which has achieved good performance. However, in order to concentrate the positive samples within a certain range, they ignored the internal structure of the samples.…”
Section: Related Work
confidence: 99%
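The citation statement above concerns the triplet loss that MPM-LTL builds on. For reference, the standard triplet loss pulls an anchor toward a positive sample of the same identity and pushes it away from a negative sample by at least a margin; its sensitivity to how samples are distributed in a batch is the weakness the citing paper points out. A minimal sketch (NumPy, Euclidean distance; the margin value is illustrative, not the paper's setting):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Standard hinge-form triplet loss on embedding vectors."""
    d_ap = np.linalg.norm(anchor - positive)   # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative)   # anchor-negative distance
    return max(d_ap - d_an + margin, 0.0)      # zero once the margin is satisfied

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same identity, close
n = np.array([1.0, 0.0])   # different identity, far
print(triplet_loss(a, p, n))  # → 0.0: negative is already margin-further than positive
```

Because the loss is zero whenever the sampled negative happens to be far away, the gradient signal depends heavily on which triplets the batch contains — the generalization issue the citing paper attributes to this formulation.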
“…The methods of MGN [8], CAM [9], GD-Net [10], MSBA [12], BagTricks [14], AGW [15], ABD-Net [16], RAG-SC [17], AANet [22], SONA [24], RRGCCAN [26], IANet [29], EMM [32], MPM-LTL [35], HA-CNN [46] and SCAL [47] are selected for comparison. It is worth noting that we do not use the re-ranking strategy.…”
Section: Comparison With the State-of-the-art
confidence: 99%