2016 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2016.7532342

Real-time video summarization on mobile

Cited by 8 publications (6 citation statements) · References 17 publications
“…Rushes videos are summarized in an online fashion in [170], by constructing a decision tree and evaluating the cumulative frame importance over a fixed time period. On the other hand, generic proposals in this category involve creating an online dictionary [106,201] to discriminate between redundant and unique segments. Optimization-based techniques are suggested in [35] for identifying the representative segments in an iterative way.…”
Section: Online
confidence: 99%
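The online-dictionary idea mentioned in the excerpt above can be illustrated with a minimal sketch: keep a running dictionary of segment descriptors and admit a segment into the summary only when it is sufficiently far from every stored atom. The descriptor dimensionality, the cosine-distance test, and the threshold below are illustrative assumptions, not details taken from [106] or [201].

```python
import numpy as np

def summarize_online(segment_features, tau=0.5):
    """Greedy online selection: a segment is 'unique' if its descriptor is
    far (cosine distance > tau) from every atom already in the dictionary.
    Returns the indices of the selected (unique) segments."""
    dictionary = []          # list of unit-norm descriptor atoms
    selected = []
    for i, f in enumerate(segment_features):
        f = f / (np.linalg.norm(f) + 1e-8)
        if not dictionary:
            dictionary.append(f)
            selected.append(i)
            continue
        # cosine similarity to the closest atom already stored
        sims = np.dot(np.stack(dictionary), f)
        if 1.0 - sims.max() > tau:      # sufficiently novel -> keep it
            dictionary.append(f)
            selected.append(i)
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 20 synthetic 64-d segment descriptors; neighboring segments are similar
    base = rng.normal(size=(4, 64))
    feats = np.repeat(base, 5, axis=0) + 0.05 * rng.normal(size=(20, 64))
    print("kept segments:", summarize_online(feats, tau=0.3))
```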
“…Selection of a group of frames for a summary is done by setting a threshold on the distance. A real-time video summarization on mobile devices is suggested in [106]. Video is segmented with the help of motion as in [53].…”
Section: Online
confidence: 99%
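A rough sketch of the pipeline the excerpt above describes — motion-based segmentation followed by a distance threshold that decides which groups of frames enter the summary — could look like the following. The mean-absolute-difference motion proxy, the mean-frame descriptor, and both threshold values are placeholder choices, not the actual method of [106] or [53].

```python
import numpy as np

def motion_segments(frames, motion_thresh=12.0):
    """Cut the video wherever the mean absolute inter-frame difference (a
    crude motion proxy) exceeds motion_thresh. Returns (start, end) pairs."""
    diffs = [np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
             for i in range(1, len(frames))]
    cuts = [0] + [i for i, d in enumerate(diffs, start=1) if d > motion_thresh] + [len(frames)]
    return [(cuts[k], cuts[k + 1]) for k in range(len(cuts) - 1) if cuts[k + 1] > cuts[k]]

def select_groups(frames, segments, dist_thresh=20.0):
    """Keep a segment only if its mean-frame descriptor is farther than
    dist_thresh (L2) from the previously kept segment."""
    kept, last = [], None
    for (s, e) in segments:
        desc = frames[s:e].astype(float).mean(axis=0).ravel()
        if last is None or np.linalg.norm(desc - last) > dist_thresh:
            kept.append((s, e))
            last = desc
    return kept

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # synthetic 8x8 grayscale "video": three static scenes joined by jumps
    scenes = [np.full((30, 8, 8), v) + rng.normal(0, 1, (30, 8, 8)) for v in (40, 120, 200)]
    video = np.concatenate(scenes).astype(np.uint8)
    segs = motion_segments(video)
    print("segments:", segs)
    print("kept:", select_groups(video, segs))
```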
“…Video summarization. The goal of summarization techniques [10,11,12] is to generate a shorter version of the video that keeps the essential information, by either creating a static storyboard or still-image abstract, where some selected frames summarize the relevant video content [13], or a dynamic video skimming or moving-image abstract, where selected clips from the original stream are collated to compose the output video [14,15]. Molino et al.…”
Section: Related Work
confidence: 99%
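The static-storyboard branch of the taxonomy quoted above can be sketched as simple keyframe selection: cluster per-frame descriptors and keep the frame nearest each cluster center. Plain k-means over color histograms is used here purely for illustration; it is not the technique of any cited paper.

```python
import numpy as np

def storyboard(frame_hists, k=5, iters=20, seed=0):
    """Pick k keyframes by clustering per-frame color histograms (plain
    k-means) and returning the frame closest to each cluster center."""
    rng = np.random.default_rng(seed)
    X = np.asarray(frame_hists, dtype=float)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign every frame to its nearest center, then recompute centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return sorted(int(d[:, c].argmin()) for c in range(k))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # 100 synthetic 32-bin histograms drawn from 5 underlying "scenes"
    hists = np.repeat(rng.random((5, 32)), 20, axis=0) + 0.01 * rng.random((100, 32))
    print("keyframe indices:", storyboard(hists, k=5))
```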
“…Examples of these methodologies are the use of GIST difference over a given window [7]; similarity between the R-CNN hashes extracted from the fixation region [132]; the Cumulative Displacement Curves in [100]; a Support Vector Machine-Hidden Markov Model pipeline in [126]; a 3D Deep Convolutional Neural Network in [102]; and KTS by Potapov et al. [103], a kernel-based change-point detection algorithm. Recently, many authors chose to segment the video deterministically, set to a specific number of frames or time [48,54,80,86,93,99,108,109,130,131,145].…”
Section: Event Segmentation
confidence: 99%
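The two families of segmentation strategies quoted above — deterministic fixed-length segmentation, and change detection from descriptor differences over a window — can be sketched as follows. The L1 test over a generic descriptor merely stands in for the GIST-difference idea of [7]; the window size and threshold are arbitrary assumptions.

```python
import numpy as np

def fixed_length_segments(n_frames, seg_len=150):
    """Deterministic segmentation: split the video into chunks of seg_len
    frames (i.e. a fixed number of frames or seconds)."""
    starts = range(0, n_frames, seg_len)
    return [(s, min(s + seg_len, n_frames)) for s in starts]

def descriptor_boundaries(descs, window=5, thresh=0.4):
    """Boundary detection in the spirit of 'descriptor difference over a
    window': flag frame i when the mean descriptor of the next `window`
    frames differs strongly (L1) from the mean of the previous `window`."""
    descs = np.asarray(descs, dtype=float)
    bounds = []
    for i in range(window, len(descs) - window):
        before = descs[i - window:i].mean(axis=0)
        after = descs[i:i + window].mean(axis=0)
        if np.abs(after - before).sum() > thresh:
            bounds.append(i)
    return bounds

if __name__ == "__main__":
    print(fixed_length_segments(400, seg_len=150))
    rng = np.random.default_rng(3)
    # two synthetic "scenes" of 16-d descriptors with a change at frame 50
    descs = np.vstack([np.tile(rng.random(16), (50, 1)),
                       np.tile(rng.random(16), (50, 1))])
    print("boundary candidates:", descriptor_boundaries(descs, window=5, thresh=0.4))
```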
“…Alternatively, the presence of factors such as saliency, edges and colorfulness, object interactions, and the presence of landmarks, people or faces can be used to predict each frame's interestingness [47]. Similarly, [7,62,86,129] base their quality or intentionality prediction on cues for composition such as picture alignment or accelerometer data; the artistic rule-of-thirds; symmetry (on local SIFT features) and color vibrancy; head tilt; camera motion; etc. In a context-dependent setting, Okamoto et al. [93] train a model for navigation instructions using a crosswalk detector and ego-motion cues.…”
Section: Supervised Learning
confidence: 99%
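A minimal sketch of cue-based interestingness scoring in the spirit of the excerpt above: compute a few hand-crafted per-frame cues (edge density, colorfulness, contrast) and combine them linearly. The cue set, the formulas, and the hand-set weights are illustrative; in the cited work such weights would be learned from annotated data rather than fixed by hand.

```python
import numpy as np

def frame_cues(frame_rgb):
    """Hand-crafted cues of the kind listed above (all illustrative):
    edge density, colorfulness and contrast, computed with numpy only."""
    f = frame_rgb.astype(float) / 255.0
    gray = f.mean(axis=2)
    gy, gx = np.gradient(gray)
    edge_density = np.hypot(gx, gy).mean()
    # Hasler/Suesstrunk-style colorfulness on opponent color channels
    rg = f[..., 0] - f[..., 1]
    yb = 0.5 * (f[..., 0] + f[..., 1]) - f[..., 2]
    colorfulness = np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())
    contrast = gray.std()
    return np.array([edge_density, colorfulness, contrast])

def interestingness(frame_rgb, weights=(1.0, 1.0, 1.0), bias=0.0):
    """Linear score over the cues; the weights here are placeholders."""
    return float(np.dot(weights, frame_cues(frame_rgb)) + bias)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    flat = np.full((64, 64, 3), 128, dtype=np.uint8)               # dull frame
    busy = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # textured frame
    print("flat :", round(interestingness(flat), 3))
    print("busy :", round(interestingness(busy), 3))
```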