2022
DOI: 10.1016/j.neucom.2022.04.017

CIRNet: An improved RGBT tracking via cross-modality interaction and re-identification

Cited by 17 publications (2 citation statements)
References 21 publications
“…In addition, cross-camera target matching remains a difficult problem due to differences in viewpoint, lighting, and background between cameras, as well as changes in the targets' own pose, expression, and clothing. Moreover, the need to process large volumes of streaming video in real time places high demands on the computational efficiency of the algorithms [12,13].…”
Section: Introduction
confidence: 99%
“…Although rich information is preserved, this incurs a greater computational cost during tracking. In feature-level fusion tracking, features are first extracted from the RGB and infrared images and then fused according to a designed fusion rule; the fused feature is finally used to perform tracking [6, 7, 8]. Decision-level fusion tracking first performs tracking in each modality individually to obtain tracking results or response maps.…”
Section: Introduction
confidence: 99%