2014
DOI: 10.1016/j.cviu.2013.10.001
Visual object tracking using spatial Context Information and Global tracking skills

Cited by 13 publications (14 citation statements)
References 33 publications
“…It has been proven in [6] that the color ratio is bounded by [0, s]. Thus, we can limit its value range to [0, 1] by dividing by s, which gives the CR feature map.…”
Section: The Proposed Methods
mentioning
confidence: 99%
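The normalization this excerpt describes amounts to dividing a response bounded in [0, s] by s. Below is a minimal sketch, assuming a NumPy array `cr_raw` holding the raw color-ratio response and a scalar `s`; both names are illustrative and not the cited paper's notation.

```python
import numpy as np

def normalize_cr(cr_raw: np.ndarray, s: float) -> np.ndarray:
    """Scale a color-ratio response bounded in [0, s] down to [0, 1].

    `cr_raw` and `s` are illustrative names, not the cited paper's notation.
    """
    cr = cr_raw / s                 # values now lie in [0, 1]
    return np.clip(cr, 0.0, 1.0)    # guard against numerical overshoot
```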
“…However, traditional DCF trackers such as [2] are affected by boundary effects, while other improved DCF trackers [3, 4] suffer from heavy computation or an unsatisfactory model-update strategy, and therefore cannot meet the requirements of target tracking when computing resources are strictly limited. To balance the performance and efficiency of the tracker, we borrow the idea of mean-shift tracking algorithms [5, 6] to build a new color ratio (CR) feature and propose a DCF-based tracker embedded with CR features, namely the CRCF tracker [7], which achieves robust performance at real-time speed. Although our previous work has made great progress, the simple moving-average update scheme of the CRCF tracker cannot deal with occlusion and large appearance variation, and usually leads to model contamination and tracking drift.…”
Section: Introduction
mentioning
confidence: 99%
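The moving-average update criticized in this excerpt is the fixed-rate linear interpolation commonly used in DCF-style trackers. A minimal sketch follows, assuming NumPy arrays for the filter model and a learning rate `eta`; the variable names and the rate of 0.02 are illustrative, not taken from the CRCF paper.

```python
import numpy as np

def moving_average_update(model: np.ndarray,
                          new_estimate: np.ndarray,
                          eta: float = 0.02) -> np.ndarray:
    """Fixed-rate linear model update common in DCF-style trackers.

    Because the update is applied every frame regardless of tracking
    quality, occlusion or large appearance change can contaminate the
    model, which is the drawback the citing paper points out.
    """
    return (1.0 - eta) * model + eta * new_estimate
```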
“…The work in [28] addresses this drawback by employing a pyramidal decomposition to capture distant targets between consecutive frames. An extension of the main algorithm is proposed in [29], which can handle cases where the color of the target is similar to the color of the background and the displacements are large. The disambiguation between target and background is achieved by a model that incorporates information about the spatial context of the target, and large displacements are handled by increasing the candidate scales.…”
Section: Mean Shift Tracking
mentioning
confidence: 99%
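To make the idea of evaluating candidates at several scales concrete, here is a minimal histogram-based scale search in the mean-shift style. This is a generic illustration under stated assumptions, not the algorithm of [29]: `candidate_hist` is a hypothetical callback returning the normalized color histogram of the candidate region at a given scale, and the spatial-context weighting mentioned in the excerpt is omitted.

```python
import numpy as np

def bhattacharyya(p: np.ndarray, q: np.ndarray) -> float:
    """Similarity between two normalized histograms (higher is better)."""
    return float(np.sum(np.sqrt(p * q)))

def best_scale(candidate_hist, target_hist: np.ndarray,
               scales=(0.9, 1.0, 1.1, 1.3)) -> float:
    """Score candidate windows at several scales and keep the best one.

    `candidate_hist(s)` is a hypothetical callback; enlarging the set of
    candidate scales is the coping strategy the excerpt mentions for
    large displacements.
    """
    scores = {s: bhattacharyya(candidate_hist(s), target_hist)
              for s in scales}
    return max(scores, key=scores.get)
```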
“…where r_i = (number of correctly tracked pixels in frame i) / (number of target pixels in frame i), (29) and the average F-measure:…”
mentioning
confidence: 99%
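The per-frame ratio r_i and the F-measure referenced in this excerpt can be computed directly. A minimal sketch, assuming pixel-level counts are available; the function names are illustrative.

```python
def frame_recall(correct_pixels: int, target_pixels: int) -> float:
    """r_i: fraction of ground-truth target pixels tracked correctly in frame i."""
    return correct_pixels / target_pixels

def f_measure(precision: float, recall: float) -> float:
    """Standard F-measure; the citing paper averages it over all frames."""
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```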
“…Recently, Li et al. combined spatial context information with mean shift [59]. This algorithm is robust when the target moves fast.…”
Section: Alg2 Finished
mentioning
confidence: 99%