2007
DOI: 10.1007/978-3-540-77051-0_11
Video Summarisation for Surveillance and News Domain

Cited by 9 publications (9 citation statements)
References 15 publications
“…Having a fixed constant cluster count is also not a good idea. Existing techniques [9], [13], and [14] based on the eigengap will show similarly poor performance. As shown in Table II, [3] does not work well when the scene is gradually changing, although its performance is reasonable for cut transitions, as shown in Table III.…”
Section: A Quantitative Evaluation of Cluster Detection
Mentioning, confidence: 96%
“…Since normalized cuts [7] require prior information about the number of clusters and since the number of clusters is not fixed, we use eigengap [12] to automatically detect the number of clusters. Comparing with [7] is thus equivalent to comparing with [9], [10], and [13].…”
Section: A Quantitative Evaluation of Cluster Detection
Mentioning, confidence: 99%
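The eigengap heuristic mentioned in the quotation above picks the number of clusters as the position of the largest gap in the eigenvalue spectrum of a graph Laplacian built from a frame-similarity matrix. The following is a minimal sketch of that idea only; the function name, the symmetric-normalisation choice and the toy affinity matrix are illustrative assumptions, not the implementation of the cited papers.

```python
import numpy as np


def estimate_cluster_count(affinity, max_clusters=10):
    """Estimate the number of clusters with the eigengap heuristic.

    `affinity` is a symmetric (n x n) frame-similarity matrix.
    The count is taken as the index of the largest gap between
    consecutive eigenvalues of the normalised graph Laplacian.
    """
    degrees = affinity.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(degrees, 1e-12)))
    # Symmetric normalised Laplacian: L = I - D^{-1/2} A D^{-1/2}
    laplacian = np.eye(len(affinity)) - d_inv_sqrt @ affinity @ d_inv_sqrt
    eigenvalues = np.linalg.eigvalsh(laplacian)      # ascending order
    gaps = np.diff(eigenvalues[:max_clusters + 1])   # consecutive gaps
    return int(np.argmax(gaps)) + 1                  # largest gap -> k


# Toy affinity matrix with two obvious blocks; expected output: 2
toy = np.array([[1.0, 0.9, 0.1, 0.1],
                [0.9, 1.0, 0.1, 0.1],
                [0.1, 0.1, 1.0, 0.9],
                [0.1, 0.1, 0.9, 1.0]])
print(estimate_cluster_count(toy))
```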
“…First, we compare the proposed ATS algorithm with Spectral Clustering (SC) and Hierarchical Agglomerative Clustering (HAC) methods on the scene detection task using news and surveillance videos. Both methods have been used successfully for summarisation of video content, and they were implemented in the same fashion as in [5] and [17]. The obtained results are compared with a manually created ground truth of the semantic events appearing in the videos.…”
Section: Experimental Evaluation
Mentioning, confidence: 99%
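As a concrete illustration of the kind of comparison described above, the sketch below clusters per-frame feature vectors with scikit-learn's SpectralClustering and AgglomerativeClustering. The synthetic features, parameter values and helper name are assumptions made here for illustration; they are not the implementations used in [5] or [17].

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, SpectralClustering


def cluster_frames(features, n_clusters, method="spectral"):
    """Assign each frame feature vector to a candidate scene cluster.

    `features` is an (n_frames, n_dims) array, e.g. colour histograms.
    Contiguous runs of one label can then be read as detected scenes.
    """
    if method == "spectral":
        model = SpectralClustering(n_clusters=n_clusters,
                                   affinity="rbf", random_state=0)
    else:  # hierarchical agglomerative clustering (HAC)
        model = AgglomerativeClustering(n_clusters=n_clusters,
                                        linkage="ward")
    return model.fit_predict(features)


# Two synthetic "visual segments" of 50 frames each
rng = np.random.default_rng(0)
frames = np.vstack([rng.normal(0.0, 0.05, size=(50, 16)),
                    rng.normal(1.0, 0.05, size=(50, 16))])

print(cluster_frames(frames, n_clusters=2, method="spectral")[:5])
print(cluster_frames(frames, n_clusters=2, method="hac")[:5])
```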
“…For example, Shipman et al. [2007] analyse low-level audio features to identify applause, cheering, excited speech, normal speech and music, plotting a level-of-importance curve to represent the summarised content of the video. Damnjanovic et al. [2007] segment the video by analysing the image stream: scenes are first identified by means of shot change detection, and the level of motion activity is then measured and equated with the level of importance.…”
Section: Introduction
Mentioning, confidence: 99%
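The two-step pipeline attributed to Damnjanovic et al. [2007] above, shot-change detection followed by a per-shot motion-activity score, can be sketched roughly as follows. The frame-differencing cut detector, the threshold value and the OpenCV-based helper are assumptions used only to illustrate the general idea, not the authors' actual method.

```python
import cv2
import numpy as np


def motion_importance_per_shot(video_path, shot_threshold=40.0):
    """Split a video at abrupt shot changes and score each shot by its
    mean motion activity (mean absolute grey-level frame difference).

    Returns a list of (start_frame, end_frame, importance) tuples.
    """
    cap = cv2.VideoCapture(video_path)
    shots, motion, start, prev, idx = [], [], 0, None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            diff = cv2.absdiff(gray, prev).mean()
            if diff > shot_threshold:      # large jump -> abrupt cut
                shots.append((start, idx - 1,
                              float(np.mean(motion)) if motion else 0.0))
                start, motion = idx, []
            else:
                motion.append(diff)        # within-shot motion activity
        prev = gray
        idx += 1
    if idx > start:                        # close the final shot
        shots.append((start, idx - 1,
                      float(np.mean(motion)) if motion else 0.0))
    cap.release()
    return shots
```

Shots with the highest importance scores would then be the natural candidates for inclusion in a summary.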