2016
DOI: 10.1109/tbdata.2016.2530714
Partial Copy Detection in Videos: A Benchmark and an Evaluation of Popular Methods

Cited by 65 publications (57 citation statements)
References 27 publications
“…Tan et al. [32] proposed a graph-based Temporal Network (TN) structure generated through keypoint frame matching, which is used for the detection of the longest shared path between two compared videos. Several recent works have employed modifications of this approach for the problem of partial-copy detection, combining it with global CNN features [17] and a CNN+RNN architecture [14]. Additionally, other approaches employ Temporal Hough Voting [8,16] to align matched frames by means of a temporal Hough transform.…”
Section: Related Work
Confidence: 99%
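The Temporal Network idea quoted above can be illustrated as a longest-path search over frame matches. This is a simplified sketch, not the implementation from [32]: it assumes matches arrive as `(query_time, ref_time, score)` tuples and uses a quadratic dynamic program in place of an explicit graph structure.

```python
def longest_shared_path(matches):
    """Sketch of a Temporal Network longest-path search.

    `matches` is a list of (q_time, r_time, score) tuples from
    frame-level matching. An edge links match a -> b when both
    timestamps strictly increase, so the heaviest path corresponds
    to a temporally consistent shared segment between the videos.
    """
    # Sort so that every admissible edge points forward in the list.
    matches = sorted(matches)
    n = len(matches)
    best = [m[2] for m in matches]   # best path score ending at node i
    prev = [-1] * n                  # predecessor for backtracking
    for i in range(n):
        qi, ri, si = matches[i]
        for j in range(i):
            qj, rj, _ = matches[j]
            if qj < qi and rj < ri and best[j] + si > best[i]:
                best[i], prev[i] = best[j] + si, j
    # Backtrack from the highest-scoring node to recover the path.
    i = max(range(n), key=best.__getitem__)
    score, path = best[i], []
    while i != -1:
        path.append(matches[i])
        i = prev[i]
    return path[::-1], score
```

The quadratic loop keeps the sketch short; the cited works build the network explicitly and run a graph path search, which scales better when the match set is large.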
“…However, this disregards the spatial and temporal structure of the visual similarity, as the aggregation of features is influenced by clutter and irrelevant content. Other approaches attempt to take the temporal sequence of frames into account in the similarity computation, e.g., by using Dynamic Programming [7,24], Temporal Networks [32,17] and Temporal Hough Voting [8,16]. Another line of research considers spatio-temporal video representation and matching based on Recurrent Neural Networks (RNN) [10,14] or in the Fourier domain [28,26,2].…”
Section: Introduction
Confidence: 99%
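The Temporal Hough Voting mentioned in this statement can be sketched minimally: each matched frame pair votes for a temporal offset, and the most-voted offset bin gives the dominant alignment between the two videos. The function name, input format, and binning scheme below are assumptions for illustration, not the exact procedure of [8,16].

```python
from collections import Counter

def hough_align(matches, bin_size=1.0):
    """Temporal Hough Voting sketch.

    Each matched frame pair (q_time, r_time) votes for the offset
    r_time - q_time, quantized into bins of width `bin_size`; the
    bin with the most votes is the dominant temporal alignment.
    Returns (offset, vote_count).
    """
    votes = Counter()
    for q_time, r_time, _score in matches:
        votes[round((r_time - q_time) / bin_size)] += 1
    offset_bin, count = votes.most_common(1)[0]
    return offset_bin * bin_size, count
```

Spurious matches scatter across many bins while a true copied segment concentrates its votes at one offset, which is why the peak is robust to a moderate amount of mismatch noise.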
“…In the case of frame-level matching approaches, near-duplicate videos are determined by comparing individual video frames or sequences. Typical frame-level approaches [48,10,32,22,54] calculate the frame-by-frame similarity and then employ sequence alignment algorithms to compute similarity at the video level. Moreover, a lot of research effort has been invested in methods that exploit spatio-temporal features to represent video segments in order to facilitate video-level similarity computation [15,56,38,37,3].…”
Section: Frame-level Matching
Confidence: 99%
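The frame-level pipeline described in this statement (frame-by-frame similarity followed by sequence alignment) can be sketched as follows. The cosine-similarity step and the Smith-Waterman-style local alignment are illustrative choices, not the specific algorithms of the cited works.

```python
import numpy as np

def video_similarity(query_feats, ref_feats, gap=0.5):
    """Frame-level matching sketch.

    Computes a cosine frame-by-frame similarity matrix, then runs a
    local-alignment dynamic program (Smith-Waterman style) so the
    video-level score reflects the best temporally aligned segment
    rather than unordered frame matches.
    """
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    r = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    sim = q @ r.T                       # pairwise cosine similarities
    H = np.zeros((len(q) + 1, len(r) + 1))
    for i in range(1, len(q) + 1):
        for j in range(1, len(r) + 1):
            H[i, j] = max(0.0,
                          H[i - 1, j - 1] + sim[i - 1, j - 1],  # align pair
                          H[i - 1, j] - gap,                    # skip query frame
                          H[i, j - 1] - gap)                    # skip ref frame
    return H.max()
```

Because the recurrence is clamped at zero, the score measures the strongest shared subsequence, which is the behavior needed for partial-copy detection where only a segment of the video is duplicated.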
“…What should the user do if they want to find when such an action also happens in that cartoon? Simply finding exactly the same content using copy detection methods [12] would fail in most cases, as the content variations across videos are substantial. As shown in the middle video of Fig.…”
Section: Introduction
Confidence: 99%