2019
DOI: 10.1007/978-3-030-11009-3_9

Learning a Robust Society of Tracking Parts Using Co-occurrence Constraints

Abstract: Object tracking is an essential problem in computer vision that has been researched for several decades. One of the main challenges in tracking is to adapt to object appearance changes over time while avoiding drift to background clutter. We address this challenge by proposing a deep neural network composed of different parts, which functions as a society of tracking parts. The parts work in conjunction according to a certain policy and learn from each other in a robust manner, using co-occurrence constraints that …
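The abstract's idea of parts that learn from each other under co-occurrence constraints can be illustrated with a minimal sketch (not the paper's actual training scheme): each part predicts an object centre, and parts whose predictions co-occur near a robust consensus gain reliability weight while disagreeing parts lose it. The function name, agreement radius, and learning rate below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cooccurrence_weights(predictions, weights, radius=10.0, lr=0.1):
    """Update per-part reliability weights from spatial co-occurrence.

    predictions: (N, 2) array of part-predicted object centres (x, y).
    weights:     (N,) current reliability weights (sum to 1).
    Parts whose predictions land near the robust consensus are
    reinforced; outliers, likely drifted to clutter, are suppressed.
    """
    consensus = np.median(predictions, axis=0)      # robust to drifted parts
    dist = np.linalg.norm(predictions - consensus, axis=1)
    agree = (dist < radius).astype(float)           # co-occurrence indicator
    new_w = (1.0 - lr) * weights + lr * agree       # slow, conservative update
    return new_w / new_w.sum(), consensus
```

The median consensus is a deliberate choice here: a weighted mean would let a single badly drifted part drag the consensus toward itself, whereas the median keeps it anchored on the agreeing majority.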

Cited by 9 publications (8 citation statements)
References 42 publications
“…VOT‐2017 dataset: In the more important and difficult benchmark, VOT‐2017 [25], several trackers are compared with our tracker; these include the very top state‐of‐the‐art methods: STP [42], CFWCR [43], convolutional features for correlation filters (CFCF) [57], ECO [45], CCOT [44], and other recent methods: RCPF [58], unified convolutional tracker (UCT) [59], SPCT [60], SiamFC [33], Staple [1] and DPT [49].…”
Section: Methods
Mentioning confidence: 99%
“…VOT-2016 dataset: We compare our tracker with 22 state-of-the-art trackers on the VOT-2016 benchmark including society of tracking parts (STP) [42], CFWCR [43], ECO [45], CCOT [44], tree-structured convolutional neural network (TCNN) [48], SSAT [24], DPT [49], SiamFC [33], deepMKCF [50], new scale adaptive and multiple feature (NSAMF) [51], colour-aware complex cell tracker (CCCT) [52], structure output deep learning tracker (SO-DLT) [31], HCF [20], DAT [53], scale adaptive mean-shift (ASMS) [54], KCF [4], SAMF [51], DSST [3], tracking with Gaussian processes regression (TGPR) [55], multiple instance learning (MIL) [16], structured output tracking with kernels (STRUCK) [17] and incremental learning for visual tracking (IVT) [56]. Table 2 shows the results of our tracker and other trackers.…”
Section: State-of-the-art Comparison
Mentioning confidence: 99%
“…Using the orientation and magnitude of extracted tracklets, one-dimensional descriptors were derived and fed into a one-class support vector machine (SVM) classifier for abnormality detection. Recently, Burceanu and Leordeanu [9] proposed a neural network object tracker with two pathways: the FilterParts and the ConvNetPart. The first pathway is robust to background noise while the second is robust to object appearance changes over time.…”
Section: Related Work
Mentioning confidence: 99%
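The two-pathway description quoted above (a clutter-robust FilterParts pathway and an appearance-adaptive ConvNetPart pathway) implies a fusion step that picks a single target location. The sketch below assumes each pathway yields a dense response map over the search region; the min-max normalisation and convex combination are illustrative assumptions, not the authors' exact fusion rule.

```python
import numpy as np

def fuse_pathways(filter_map, convnet_map, alpha=0.5):
    """Fuse two HxW tracker response maps into one localisation.

    filter_map stands in for a clutter-robust pathway (e.g. correlation
    filters over parts); convnet_map for a pathway adapted online to
    appearance change. Normalisation puts both on a common scale, and a
    convex combination keeps the peak where the two pathways agree.
    """
    f = (filter_map - filter_map.min()) / (np.ptp(filter_map) + 1e-12)
    g = (convnet_map - convnet_map.min()) / (np.ptp(convnet_map) + 1e-12)
    fused = alpha * f + (1.0 - alpha) * g
    return np.unravel_index(np.argmax(fused), fused.shape)
```

With equal weighting, a location that peaks in only one map scores at most half of a location where both maps peak, so spurious maxima from either pathway alone are suppressed.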
“…• Robust target representation: Providing a powerful target representation is the main advantage of employing CNNs for visual tracking. To achieve the goal of learning generic representations for target modeling and constructing a more robust target models, the main contributions of methods are classified into: i) offline training of CNNs on large-scale datasets for visual tracking [63], [68], [80], [89], [97], [100], [101], [104], [112], [116], [135], [137], [142], [144], [153], [165], [168], [169], [173], ii) designing specific deep convolutional networks instead of employing pre-trained models [63], [68], [70], [72], [73], [75], [76], [80], [82], [89], [97], [100], [101], [104], [105], [108], [112], [116], [127], [135], [137], [141], [142], [144], [146], [150],…”
Section: Convolutional Neural Network (CNN)
Mentioning confidence: 99%