2022
DOI: 10.1049/cvi2.12128

Multi‐template temporal information fusion for Siamese object tracking

Abstract: Siamese-network-based object tracking algorithms typically extract the deep features of the target from the first frame of the video sequence as a template, and reuse that template for the whole tracking process. Because the target in the first frame is manually annotated and therefore accurate, these algorithms tend to perform stably. However, a template extracted only from the first frame struggles to adapt to changes in the target's appearance. Inspired by the fe…
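The idea the abstract describes — keeping the accurate first-frame template while also incorporating templates from later frames — can be sketched in a few lines. The following is a minimal illustration, not the paper's actual method: it assumes templates are feature maps of shape (C, H, W), fuses them by a weighted average (the weights and the naive dense cross-correlation are illustrative choices), and matches the fused template against a search-region feature map.

```python
import numpy as np

def fuse_templates(templates, weights=None):
    """Fuse a list of template feature maps (C, H, W) by weighted averaging.

    A simple stand-in for temporal information fusion: the first-frame
    template can be given a larger weight than more recent, noisier ones.
    """
    stack = np.stack(templates)                      # (N, C, H, W)
    if weights is None:
        weights = np.ones(len(templates))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return np.tensordot(weights, stack, axes=1)      # (C, H, W)

def cross_correlate(search, template):
    """Naive dense cross-correlation of a template over a search feature map.

    Returns a response map; the peak indicates the most likely target location.
    """
    _, Hs, Ws = search.shape
    _, Ht, Wt = template.shape
    out = np.zeros((Hs - Ht + 1, Ws - Wt + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(search[:, i:i + Ht, j:j + Wt] * template)
    return out

# Usage: combine the accurate first-frame template with a recent-frame
# template, then localise the target in the current search region.
rng = np.random.default_rng(0)
t0 = rng.standard_normal((4, 6, 6))    # template from frame 1 (annotated)
t1 = rng.standard_normal((4, 6, 6))    # template from a recent frame
fused = fuse_templates([t0, t1], weights=[0.7, 0.3])
search = rng.standard_normal((4, 22, 22))
score = cross_correlate(search, fused)
peak = np.unravel_index(score.argmax(), score.shape)
```

Weighting the first-frame template more heavily preserves the stability the abstract attributes to the manually annotated initialisation, while the recent-frame template lets the fused representation track appearance change.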

Cited by 3 publications (2 citation statements)
References 34 publications
“…Tables 5 and 6 report the AUC scores and precision scores compared with SiamFC [2], SiamRPN++, MemTrack [52], CFNet [53], hierarchical feature transformer (HiFT) [54], SiamRPN [6], ADNet [55], SiamCAR, VITAL [56], MTFM [57], and DP‐Siam [58] on the UAV123 and UAV20L datasets. As can be seen from Table 5, our method ranks first in both precision score and AUC score.…”
Section: Methods
confidence: 99%
“…Visual object tracking is an important research direction in computer vision, and it is widely employed in the fields of autonomous driving, intelligent security, and robot motion [1,2]. In the tracking task, the initial target is first provided and then localised in subsequent frames of the video.…”
Section: Introduction
confidence: 99%