Proceedings of the 2nd ACM TRECVid Video Summarization Workshop 2008
DOI: 10.1145/1463563.1463569
The COST292 experimental framework for rushes summarization task in TRECVID 2008

Abstract: In this paper, the method used for the Rushes Summarization task by the COST 292 consortium is reported. The approach proposed this year differs significantly from that of previous years owing to the introduction of new processing steps, such as repetition detection in scenes. The method starts with junk-frame removal, followed by clustering and scene detection; then, for each scene, repetitions are detected so that each real scene is extracted only once; the next step consists of face detection …
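The processing chain named in the abstract can be sketched as a toy pipeline. All function names, data shapes and thresholds below are illustrative assumptions, not the consortium's implementation; the face-detection stage is omitted because the abstract is truncated at that point.

```python
# Toy sketch of the stages the abstract names: junk-frame removal ->
# scene detection -> repetition removal. Frames are dicts with stand-in
# "variance" and "feature" values; all thresholds are invented.

def is_junk(frame):
    # Treat near-uniform frames (colour bars, black frames) as junk.
    return frame["variance"] < 0.05

def detect_scenes(frames, cut_threshold=0.5):
    # Start a new scene whenever consecutive frame features differ strongly.
    scenes, current, prev = [], [], None
    for f in frames:
        if prev is not None and abs(f["feature"] - prev["feature"]) > cut_threshold:
            scenes.append(current)
            current = []
        current.append(f)
        prev = f
    if current:
        scenes.append(current)
    return scenes

def remove_repetitions(scenes, sim_threshold=0.1):
    # Keep only the first of any near-duplicate scenes (repeated takes),
    # comparing scenes by their mean feature value.
    kept, means = [], []
    for s in scenes:
        mean = sum(f["feature"] for f in s) / len(s)
        if all(abs(mean - m) > sim_threshold for m in means):
            kept.append(s)
            means.append(mean)
    return kept

def summarize(frames):
    content = [f for f in frames if not is_junk(f)]
    return remove_repetitions(detect_scenes(content))
```

A repeated take then collapses to a single scene: two visually similar scene segments separated by a different scene yield two summary scenes, not three.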

Cited by 8 publications (9 citation statements)
References 4 publications
“…In the algorithm, faces are the primary targets, as they constitute the focus of most consumer video programs. Naci et al [277] extract features using face detection, camera motion, and MPEG-7 color layout descriptors of each frame. A clustering algorithm is employed to find and then remove repeated shots.…”
Section: Redundancy Removal
confidence: 99%
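The redundancy-removal idea quoted above can be sketched as a greedy clustering over per-shot feature vectors, keeping one shot per cluster. The feature layout, distance metric and threshold here are illustrative assumptions, not the paper's actual values.

```python
import math

# Each shot is represented by a feature vector (stand-ins for face,
# camera-motion and colour-layout cues). Shots closer than `threshold`
# to an already-kept shot are treated as repeats and dropped.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def deduplicate_shots(shots, threshold=0.5):
    """shots: list of (shot_id, feature_vector). Returns kept shot ids."""
    kept = []  # (shot_id, vector) representatives, one per cluster
    for shot_id, vec in shots:
        if all(euclidean(vec, rep) > threshold for _, rep in kept):
            kept.append((shot_id, vec))  # new cluster -> keep this shot
    return [sid for sid, _ in kept]
```

Greedy first-fit clustering is the simplest instance of the idea; a real system would cluster all shots jointly before choosing representatives.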
“…Indexing of video shots according to the associated textual information is realized following the approach of [27]. The audio information is processed off line with the application of automatic speech recognition (ASR) on the initial video source, so that specific sets of keywords can be assigned to each shot.…”
Section: Keyword Indexing
confidence: 99%
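The keyword-indexing step quoted above (assigning ASR-derived keywords to shots) amounts to building an inverted index from time-stamped words. The data shapes and names below are illustrative assumptions.

```python
from collections import defaultdict

def index_keywords(shots, asr_words):
    """Assign each ASR word to the shot whose time span contains it.

    shots: {shot_id: (start_s, end_s)}
    asr_words: list of (time_s, word) pairs from speech recognition
    Returns an inverted index: keyword -> set of shot ids.
    """
    index = defaultdict(set)
    for t, word in asr_words:
        for shot_id, (start, end) in shots.items():
            if start <= t < end:
                index[word.lower()].add(shot_id)
    return index
```

Querying the index for a keyword then returns every shot whose transcript contains it.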
“…The visual similarity shot indexing is realized with the extraction of low level visual descriptors following the approach of [27]. In this case, five MPEG-7 descriptors, namely, color layout, color structure, scalable color, edge histogram, homogeneous texture, are generated from the representative keyframe of each video shot and stored in a relational database.…”
Section: Visual Similarity Indexing
confidence: 99%
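The visual-similarity indexing quoted above can be sketched as nearest-neighbour search over per-keyframe descriptor sets. Only the five descriptor names come from the statement; the equal weighting, L2 distance and toy vectors are assumptions for illustration.

```python
import math

# The five MPEG-7 descriptors named in the citation statement.
DESCRIPTORS = ("color_layout", "color_structure", "scalable_color",
               "edge_histogram", "homogeneous_texture")

def descriptor_distance(a, b):
    # L2 distance between two descriptor vectors (an assumed metric).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def shot_distance(q, k):
    # Equally weighted sum of per-descriptor distances (an assumption).
    return sum(descriptor_distance(q[d], k[d]) for d in DESCRIPTORS)

def most_similar(query, keyframes):
    """keyframes: {shot_id: {descriptor: vector}}. Returns closest shot id."""
    return min(keyframes, key=lambda sid: shot_distance(query, keyframes[sid]))
```

In the described system the descriptors are stored per keyframe in a relational database; the in-memory dict here stands in for that table.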
“…The COST Action 292 Group is a large consortium of European research partners from the Netherlands, the UK, France, Italy and Spain; it extended its 2007 summarization participation by developing new approaches to detecting repetition [18]. As with most other groups, junk frames were explicitly detected and removed, and the unit of manipulation was the scene rather than the shot or frame.…”
Section: Participants and Their Approaches
confidence: 99%