2011
DOI: 10.1108/14684521111161981

Video summarisation based on collaborative temporal tags

Abstract: Purpose – Video summarisation is one of the most active fields in content-based video retrieval research. This paper proposes a new video summarisation scheme based on socially generated temporal tags. Design/methodology/approach – To capture users' collaborative tagging activities, the proposed scheme maintains video bookmarks, which carry temporal or positional information about videos, such as relative time codes or byte offsets. For each video, all the video bookmarks collected from users are then…
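The abstract's core idea — aggregating users' temporal bookmarks and treating the most heavily bookmarked moments as key-frame candidates — can be sketched as follows. This is a minimal illustration, not the paper's actual statistical method: the bin width, the top-k selection, and the seconds-based input format are all assumptions (the paper also allows byte offsets as positional information).

```python
# Hedged sketch of bookmark-based key-frame selection.
# Assumption: bookmarks arrive as time codes in seconds; the paper's
# exact statistical analysis is not given in this snippet.
from collections import Counter

def keyframe_candidates(bookmarks, bin_seconds=5, top_k=3):
    """Bin bookmark time codes and return the centres of the most
    heavily bookmarked bins as candidate key-frame time codes."""
    bins = Counter(int(t // bin_seconds) for t in bookmarks)
    top = [b for b, _ in bins.most_common(top_k)]
    return sorted(b * bin_seconds + bin_seconds / 2 for b in top)

# Example: users' bookmarks cluster around ~12 s and ~61 s,
# so those moments become key-frame candidates.
times = [11.2, 12.5, 13.0, 60.1, 61.4, 62.0, 30.3]
print(keyframe_candidates(times, bin_seconds=5, top_k=2))  # [12.5, 62.5]
```

Frames sampled at the returned time codes would then be assembled into a storyboard, on the assumption (stated in the citing papers below) that frames near user bookmarks are representative enough for summarisation.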

Cited by 10 publications (3 citation statements). References 28 publications (41 reference statements).
“…They suggested that the proposed method resolved the problem of low tag coverage faced in previous studies, and that users were motivated to add tags through the tag-sharing and tag-cloud methods. After statistically analyzing video bookmarks to form video storyboards, Chung, Wang, and Sheu (2011) extracted some meaningful key frames. In doing so, they assumed that the video frames around bookmarks added by users were sufficiently representative for video summarization.…”
Section: Literature Review
confidence: 99%
“…They confirmed that the coverage of annotations generated by the Synvie method was higher than that of existing methods, and users were motivated to add tags by using tag‐sharing and tag‐cloud methods. After statistically analyzing video bookmarks so as to form video storyboards, Chung, Wang, and Sheu (2011) extracted some meaningful key frames. In doing so, they assumed that the video frames around bookmarks added by users were sufficiently representative for video summarization.…”
Section: Related Studies
confidence: 99%
“…Extraction. After statistically analyzing video bookmarks so as to form video storyboards, Chung et al (2011) extract some meaningful key frames; in doing so, they assume that the video frames around bookmarks added by users are representative of and informative enough for video summarization. Their proposed method produced semantically more important summaries than two existing methods that utilize low‐level audio‐visual features.…”
Section: Related Studies
confidence: 99%