2018 IEEE International Conference on Multimedia and Expo (ICME)
DOI: 10.1109/icme.2018.8486581
Robust Structured Multi-Task Multi-View Sparse Tracking

Abstract: Sparse representation is a viable solution to visual tracking. In this paper, we propose a structured multi-task multi-view tracking (SMTMVT) method, which exploits the sparse appearance model in the particle filter framework to track targets under different challenges. Specifically, we extract features of the target candidates from different views and sparsely represent them by a linear combination of templates of different views. Unlike the conventional sparse trackers, SMTMVT not only jointly considers the …
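As a rough illustration of the appearance model the abstract describes, the sketch below sparse-codes each candidate's per-view features over per-view template dictionaries and scores candidates by total reconstruction error. It is only a simplified proxy (independent L1 problems solved with scikit-learn's Lasso; all dimensions, names, and data are hypothetical), not the paper's structured joint-sparsity formulation.

```python
# Minimal sketch of multi-view sparse coding inside a particle filter,
# loosely following the abstract. NOT the authors' method: it solves an
# independent L1 problem per view and per candidate, whereas SMTMVT couples
# candidates (tasks) and views through a structured joint-sparsity norm.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_templates, n_candidates = 10, 50
feat_dims = [64, 32]  # hypothetical per-view feature sizes (e.g., intensity, edges)

# D[v]: template dictionary for view v (one column per target template).
D = [rng.standard_normal((d, n_templates)) for d in feat_dims]
# X[v]: candidate features for view v (one column per particle).
X = [rng.standard_normal((d, n_candidates)) for d in feat_dims]

def reconstruction_errors(D, X, alpha=0.01):
    """Sparse-code every candidate in every view; sum squared errors."""
    errs = np.zeros(n_candidates)
    for Dv, Xv in zip(D, X):
        lasso = Lasso(alpha=alpha, max_iter=5000)
        for j in range(n_candidates):
            # c_j = argmin ||x_j - D c||^2 + alpha * ||c||_1
            lasso.fit(Dv, Xv[:, j])
            errs[j] += np.sum((Xv[:, j] - Dv @ lasso.coef_) ** 2)
    return errs

# The particle with the smallest total error across views would be taken
# as the tracking result for the current frame.
print("selected particle:", int(np.argmin(reconstruction_errors(D, X))))
```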

Cited by 4 publications (6 citation statements)
References 31 publications (39 reference statements)
“…This benchmark consists of 50 annotated sequences, where 49 sequences have one annotated target and one sequence (jogging) has two annotated targets [39]. We evaluate the overall performance of the proposed STLDF and its two variants (i.e., STLCF and STLHF) against 29 baseline trackers in Reference [39] and 17 recent trackers including MTMVTLS [43], MTMVTLAD [13], MSLA [14] (the recent version of ASLA [34]), SST [5], SMTMVT [44], CNT [21], two variants of TGPR (i.e., TGPR_Color and TGPR_HOG) [45], DSST [46], PCOM [47], KCF [19], MEEM [48], SAMF [49], SRDCF [50], STAPLE [51], and two variants of RSST (i.e., RSST_HOG and RSST_Deep) [23]. We present the overall OPE success and precision plots in Figure 2.…”
Section: Experimental Results on OTB50
confidence: 99%
“…We compare DF-SGLST with its two variants (SGLST Color and SGLST HOG), 29 baseline trackers in [42], and 17 recent trackers including MTMVTLS [18], MTMVTLAD [8], MSLA [20] (the recent version of ASLA [9]), SST [5], SMTMVT [46], CNT [34], two variants of TGPR (i.e., TGPR Color and TGPR HOG) [47], DSST [30], PCOM [48], KCF [28], MEEM [49], SAMF [50], SRDCF [51], STAPLE [52], and two variants of RSST (i.e., RSST HOG and RSST Deep) [10]. Following the protocol proposed in [42], we use the same parameters on all the sequences to obtain the one-pass evaluation (OPE) results, which are conventionally used to evaluate trackers by initializing them using the ground truth location in the first frame.…”
Section: Experimental Results on OTB50
confidence: 99%
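The two OTB50 evaluations quoted above rely on the one-pass evaluation (OPE) protocol: the tracker is initialized from the ground-truth box in the first frame and run once per sequence. For reference, here is a hedged sketch of how the OPE success and precision metrics are conventionally computed; the (x, y, w, h) box format and all function names are our assumptions, not code from the cited papers.

```python
# Sketch of OPE metrics: success is the fraction of frames whose predicted
# box overlaps the ground truth above an IoU threshold (swept over [0, 1]);
# precision is the fraction of frames whose center-location error is within
# a pixel radius (20 px is the customary OTB threshold).
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    iw = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def ope_metrics(pred, gt):
    """Success rates over 21 IoU thresholds and precision at 20 px."""
    ious = np.array([iou(p, g) for p, g in zip(pred, gt)])
    cp = np.array([(p[0] + p[2] / 2.0, p[1] + p[3] / 2.0) for p in pred])
    cg = np.array([(g[0] + g[2] / 2.0, g[1] + g[3] / 2.0) for g in gt])
    err = np.linalg.norm(cp - cg, axis=1)
    thresholds = np.linspace(0.0, 1.0, 21)
    success = np.array([(ious > t).mean() for t in thresholds])
    return success, float((err <= 20).mean())
```

The success plot reported in these papers is this success curve drawn over the threshold sweep (often summarized by its area under the curve), and the precision plot is the analogous curve over center-error radii.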
“…For this benchmark data set, there are online available tracking results for 29 trackers [38]. In addition, we include the tracking results of additional 12 recent trackers, namely, MTMVTLS [31], MTMVTLAD [32], MSLA‐4 [35] (the recent version of ASLA [34]), SST [5], SMTMVT [47], CNT [48], TGPR [49], DSST [14], PCOM [50], KCF [45], MEEM [46], and RSST [36]. Following the protocol proposed in [38], we use the same parameters for SGLST_Color and SGLST_HOG on all the sequences to obtain the one‐pass evaluation (OPE) results, which are conventionally used to evaluate trackers by initialising them using the ground truth location in the first frame.…”
Section: Results
confidence: 99%
“…For example, it is used in head-pose classification [38,39], inferring a user's affective state [40], and video tracking problems [14,15]. Some recent works, such as [41,42,43], are also applied to the visual field.…”
Section: Multi-view Learning
confidence: 99%