2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW) 2014
DOI: 10.1109/icmew.2014.6890642
A novel user-centered design for personalized video summarization

Abstract: In the past, several automatic video summarization systems have been proposed to generate video summaries. However, a generic video summary that is generated based only on audio, visual and textual saliencies will not satisfy every user. This paper proposes a novel system for generating semantically meaningful personalized video summaries, which are tailored to the individual user's preferences over video semantics. Each video shot is represented using a semantic multinomial which is a vector of posterior semanti…
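The abstract's central representation is the semantic multinomial: a per-shot probability vector over a fixed concept vocabulary. The paper's actual concept detectors and normalization are not reproduced here, so the following is only a minimal sketch under assumptions: the concept names are hypothetical, and raw detector scores are mapped to a probability vector with a softmax.

```python
import numpy as np

# Hypothetical concept vocabulary; the paper's actual concept set is not shown here.
CONCEPTS = ["person", "car", "crowd", "indoor", "sports"]

def semantic_multinomial(detector_scores):
    """Turn raw per-concept detector scores for one shot into a
    probability vector (one entry per concept) via a softmax."""
    scores = np.asarray(detector_scores, dtype=float)
    exp = np.exp(scores - scores.max())   # subtract max for numerical stability
    return exp / exp.sum()                # entries sum to 1, like posterior probabilities

# Example: scores from five concept detectors for a single shot.
shot_vector = semantic_multinomial([2.1, -0.5, 0.3, 1.7, -1.0])
print(dict(zip(CONCEPTS, shot_vector.round(3))))
```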

Cited by 9 publications (23 citation statements); references 12 publications (17 reference statements).
“…Several concepts identified in the video subtitles and video frames could refer to the same concept (same lemma), but have different forms. Thus, to better align the concepts we first run part-of-speech (POS) tagging on the concepts from the video subtitles and from the video frames and then we extract their lemmas. This resulted in an average of 36 entities (minimum 0, maximum 58) per video and an average of 175 labels (minimum 34, maximum 524) per video.…”
Section: Explanation Generation Methodology (mentioning)
confidence: 99%
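The citing paper footnotes the specific POS tagger and lemmatizer it used, and those tools are not identified in the excerpt above, so the sketch below is only an assumed illustration of the described alignment step, here using spaCy: lemmatize the concept terms from the subtitles and from the frame labels, then intersect the two sets.

```python
import spacy

# Assumes the small English model is installed:  python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def lemmatize(concepts):
    """POS-tag each concept term and return the set of its lowercased lemmas."""
    lemmas = set()
    for term in concepts:
        doc = nlp(term)
        lemmas.add(" ".join(token.lemma_.lower() for token in doc))
    return lemmas

# Toy example: concepts from the video subtitles vs. labels predicted on video frames.
subtitle_concepts = ["dogs", "cars", "children"]
frame_labels = ["dog", "car", "person"]

# Concepts that share a lemma in both sources are treated as aligned.
aligned = lemmatize(subtitle_concepts) & lemmatize(frame_labels)
print(aligned)  # {'dog', 'car'}
```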
“…Video summary personalization focuses on generating video summaries given a user's query [24,25,28]. Ghinea et al. [8] proposed a summarization algorithm that creates a user profile, i.e., the user sees a set of 25 concepts present in the video and indicates, through a list or a sliding window, which of these concepts and to what extent they should be included in the summary. Jin et al. [14] integrated fast-forward functionality, allowing users to skip parts of the summaries, while Chongtay et al. [3] introduced the idea of responsive news summarization, i.e., automatically creating news summaries of different lengths while providing access to the full news item.…”
Section: Personalization Of Video Summaries (mentioning)
confidence: 99%
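As a hedged illustration of the profile-based personalization surveyed above (the 25-concept vocabulary and slider interface of Ghinea et al. are not reproduced here; the concept names and weights below are hypothetical), a user profile can be encoded as one weight per concept and shots ranked by how well their semantic vectors match it.

```python
import numpy as np

# Hypothetical concept vocabulary and user-assigned preference weights (e.g. from sliders).
CONCEPTS = ["person", "car", "crowd", "indoor", "sports"]
user_profile = np.array([0.9, 0.1, 0.4, 0.0, 0.8])   # higher = user wants more of this concept

# Each row is one shot's semantic vector over the same concepts.
shot_vectors = np.array([
    [0.60, 0.05, 0.20, 0.10, 0.05],   # shot 0: mostly "person"
    [0.05, 0.70, 0.10, 0.10, 0.05],   # shot 1: mostly "car"
    [0.10, 0.05, 0.10, 0.05, 0.70],   # shot 2: mostly "sports"
])

# Relevance of a shot = dot product between its semantic vector and the user profile.
relevance = shot_vectors @ user_profile
ranking = np.argsort(relevance)[::-1]
print("shots ranked by user relevance:", ranking.tolist())
```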
“…High-level video features are used as user preferences for personalized video summarization [15][16][17][18][19][20]. IBM Research proposed a personalized video summarization system for pervasive mobile devices such as PDAs [15].…”
Section: Personalized Video Summarization (mentioning)
confidence: 99%
“…Same as their method, the proposed summarization also uses constrained optimization for selecting shots and scenes that are relevant to an individual user's preferences. Our previous work in [19] proposed a personalized video summarization methodology for summarizing a video based on users' preferences over a set of semantic concepts.…”
Section: Personalized Video Summarization (mentioning)
confidence: 99%
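The excerpt only states that shots and scenes are selected via constrained optimization under the user's preferences; the exact formulation is not given here. The sketch below is a greedy, knapsack-style stand-in rather than the paper's optimizer: pick shots by relevance per second until a summary-duration budget is exhausted.

```python
def greedy_summary(shots, budget_sec):
    """Greedy stand-in for a constrained shot-selection step.

    shots: list of (shot_id, relevance, duration_sec) tuples.
    budget_sec: maximum total duration of the summary.
    """
    # Prefer shots with the highest relevance per second of duration.
    ranked = sorted(shots, key=lambda s: s[1] / s[2], reverse=True)
    selected, used = [], 0.0
    for shot_id, relevance, duration in ranked:
        if used + duration <= budget_sec:
            selected.append(shot_id)
            used += duration
    return sorted(selected)  # keep chronological order in the final summary

shots = [(0, 0.62, 4.0), (1, 0.15, 6.0), (2, 0.58, 3.0), (3, 0.40, 5.0)]
print(greedy_summary(shots, budget_sec=10.0))   # -> [0, 2] within a 10-second budget
```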