2020
DOI: 10.1016/j.image.2020.115807
Multiple instance deep learning for weakly-supervised visual object tracking

Cited by 7 publications (4 citation statements)
References 12 publications
“…As a rule, m = Σ_l K_l ≪ n; thus (16) lets us save memory by storing an (n × m) sparse matrix instead of the full (n × n) co-association matrix.…”
Section: Co-association Matrix Of Cluster Ensemble
confidence: 99%
“…Methods based on the cluster assumption and the manifold assumption are also used [2,16]. In [12], upper bounds on the label-noise characteristics are obtained, and an algorithm for estimating the degree of noise is proposed using a preliminary partitioning of the data into clusters.…”
Section: Introduction
confidence: 99%
“…These methods assume a single tracked object, and thus fall short of our problem. Several methods have been proposed for multi-object tracking [29,16]. Nwoye et al. [29] proposed weakly-supervised tracking of surgical tools appearing in endoscopic video.…”
Section: Related Work
confidence: 99%
“…The weak label in this case is the class label of the tool type, and they assumed that only one tool appears per tool type, even though several tools of different types may appear in a frame. Huang et al. [16] tackled the similar problem of semantic object tracking. He et al. [15] proposed an unsupervised tracking-by-animation framework.…”
Section: Related Work
confidence: hi
confidence: 99%