2016
DOI: 10.1016/j.neucom.2016.03.083
Spatial and temporal scoring for egocentric video summarization

Cited by 24 publications (3 citation statements)
References 53 publications (99 reference statements)
“…Indeed, they extracted region cues representing high-level saliency in egocentric video and then applied a regression method to predict the relative importance of any new region based on these cues. Guo et al [78] proposed a method that extracts video shots exhibiting high stable salience, discrimination, and representativeness in order to generate a compact storyboard summary [79]. Lu et al, inspired by work on studying links between news articles over time, defined a random walk-based metric that captures event connectivity beyond simple object co-occurrence, to provide a better sense of story [80].…”
Section: Storytelling; citation type: mentioning
Confidence: 99%
“…Moreover, an image signature is applied for foreground object detection and then fused with motion information to summarize egocentric video in [ 40 ]. A modularity cut algorithm is employed in [ 41 ] to track objects, and this information is used for summary generation.…”
Section: Related Work; citation type: mentioning
Confidence: 99%
“…Many existing video summarisation approaches [1, 4, 8–12] focus only on satisfying the visual interestingness of viewers. Some of these approaches are based on a single feature, such as visual attention [4, 10] or colour [1]. Other approaches [8, 9, 11, 12] fuse various visual features to extract important segments for summaries.…”
Section: Introduction; citation type: mentioning
Confidence: 99%