1999
DOI: 10.1007/s005300050115

A feature-based algorithm for detecting and classifying production effects

Abstract: We describe a new approach to the detection and classification of scene breaks in video sequences. Our method can detect and classify a variety of scene breaks, including cuts, fades, dissolves and wipes, even in sequences involving significant motion. We detect the appearance of intensity edges that are distant from edges in the previous frame. A global motion computation is used to handle camera or object motion. The algorithm we propose withstands JPEG and MPEG artifacts, even at very high compression rates. …
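To make the measure sketched in the abstract concrete, the following is a minimal Python illustration of an entering/exiting edge-pixel fraction, assuming OpenCV and NumPy. The function name, the Canny thresholds and the dilation radius are illustrative choices rather than the paper's parameters, and the global motion compensation step is omitted.

# Minimal sketch of the edge-change measure described in the abstract,
# assuming OpenCV and NumPy. Thresholds, the dilation radius and the
# function name are illustrative; the paper's global motion compensation
# step is omitted for brevity.
import cv2
import numpy as np

def edge_change_fraction(prev_gray, curr_gray, canny_lo=100, canny_hi=200, radius=5):
    """Fraction of edge pixels that appear far from, or vanish far from,
    the edges of the neighbouring frame (inputs are 8-bit grey frames)."""
    prev_edges = cv2.Canny(prev_gray, canny_lo, canny_hi) > 0
    curr_edges = cv2.Canny(curr_gray, canny_lo, canny_hi) > 0

    # Dilating an edge map marks everything within `radius` of an edge,
    # so "distant from an edge" means falling outside the dilated map.
    kernel = np.ones((2 * radius + 1, 2 * radius + 1), np.uint8)
    prev_near = cv2.dilate(prev_edges.astype(np.uint8), kernel) > 0
    curr_near = cv2.dilate(curr_edges.astype(np.uint8), kernel) > 0

    entering = curr_edges & ~prev_near   # new edges far from old ones
    exiting = prev_edges & ~curr_near    # old edges far from new ones

    n_prev = max(np.count_nonzero(prev_edges), 1)
    n_curr = max(np.count_nonzero(curr_edges), 1)
    return max(np.count_nonzero(entering) / n_curr,
               np.count_nonzero(exiting) / n_prev)

A single sharp peak in this quantity suggests a cut, whereas gradual transitions such as fades, dissolves and wipes tend to raise it over a run of consecutive frames.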

Cited by 206 publications (94 citation statements)
References 12 publications

“…Recall / Precision: Standard Deviation of Pixel Intensities [2]: 0.765 / 0.484; ECR [1]: 0.654 / 0.125; EAG [3]: 0 …”
Section: Methods
“…In [1] transition detection is based on the analysis of intensity edges. An edge pixel that appears far from an existing edge pixel is defined as an entering pixel, while a previously existing edge pixel that disappears is defined as an exiting pixel.…”
Section: Introduction
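As a hedged usage sketch of how such per-pair entering/exiting fractions can be computed over an entire sequence, the following builds on the illustrative edge_change_fraction helper above and OpenCV's VideoCapture API; the video path is hypothetical.

# Hedged usage sketch: compute the entering/exiting edge fraction for every
# consecutive frame pair of a video, yielding a change signal that can be
# scanned for transitions. Relies on the illustrative edge_change_fraction
# helper sketched earlier; the file name below is hypothetical.
import cv2

def change_signal(video_path):
    cap = cv2.VideoCapture(video_path)
    signal = []
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if ok else None
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        curr_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        signal.append(edge_change_fraction(prev_gray, curr_gray))
        prev_gray = curr_gray
    cap.release()
    return signal  # one value per consecutive frame pair

# Example (hypothetical path): peaks in change_signal("news_clip.mp4")
# mark candidate shot boundaries.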
“…Much work exists in shot-cut detection [2,3]. Popular techniques that use image features [7], optical flow [14] etc., are not applicable due to the heavy noise that is common in broadcast videos. In such cases, a histogram based descriptor is well suited [4].…”
Section: Visual Domain Processing of Videos
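As a sketch of the histogram-based descriptor this excerpt favours for noisy broadcast video (not the cited works' actual method), the following assumes OpenCV; the bin counts and the correlation threshold are arbitrary illustrative choices.

# Illustrative sketch of a histogram-based shot-cut test of the kind the
# excerpt describes as robust to noisy broadcast video. Assumes OpenCV;
# bin counts and the similarity threshold are arbitrary choices.
import cv2

def colour_histogram(frame_bgr, bins=8):
    """Coarse 3-D colour histogram, normalised so the bins sum to 1."""
    hist = cv2.calcHist([frame_bgr], [0, 1, 2], None,
                        [bins, bins, bins],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist, norm_type=cv2.NORM_L1).flatten()

def is_shot_cut(prev_frame, curr_frame, threshold=0.6):
    """Declare a cut when the histograms of two consecutive frames are
    poorly correlated (correlation below the illustrative threshold)."""
    h_prev = colour_histogram(prev_frame)
    h_curr = colour_histogram(curr_frame)
    similarity = cv2.compareHist(h_prev, h_curr, cv2.HISTCMP_CORREL)
    return similarity < threshold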
“…Previous work that segments a video into scenes [1,5] using visual features [6,7] or scene dynamism [8], fail in many cases where there is no significant visual change across space and time. This is especially true for the class of sports videos.…”
Section: Introduction
“…Automatic temporal video segmentation methods usually involve computing pixel-level and/or histogram-based difference measures for each pair of successive frames in the video stream and then using shot boundary detection techniques to locate the positions of shot boundaries [2], [3]. More advanced temporal segmentation techniques use sophisticated image features such as edges [4], focus of expansion points [5] and image motion [6]. However, just as a phoneme can appear in many different words, visually similar video shots can appear in different video segments with different semantic meanings.…”
Section: Introduction
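A minimal sketch of the generic pipeline this excerpt describes, a per-pair difference measure followed by a simple boundary test, assuming NumPy; the sliding-window size and the deviation factor are illustrative rather than taken from any cited work.

# Minimal sketch of the generic pipeline described in the excerpt:
# a pixel-level difference for each consecutive frame pair, followed by an
# adaptive threshold that flags outlying differences as shot boundaries.
# Assumes NumPy; the window size and deviation factor are illustrative.
import numpy as np

def frame_differences(frames):
    """Mean absolute grey-level difference for each consecutive frame pair.
    `frames` is a sequence of equally sized 2-D grey-scale arrays."""
    return np.array([np.mean(np.abs(frames[i + 1].astype(np.float32) -
                                    frames[i].astype(np.float32)))
                     for i in range(len(frames) - 1)])

def shot_boundaries(diffs, window=15, k=3.0):
    """Flag pair i as a boundary when its difference exceeds the local
    mean by k local standard deviations (a common adaptive threshold)."""
    boundaries = []
    for i, d in enumerate(diffs):
        lo, hi = max(0, i - window), min(len(diffs), i + window + 1)
        local = np.delete(diffs[lo:hi], i - lo)  # exclude the pair itself
        if local.size and d > local.mean() + k * local.std():
            boundaries.append(i)
    return boundaries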