2017
DOI: 10.1007/s11042-016-4300-7

VISCOM: A robust video summarization approach using color co-occurrence matrices

Cited by 28 publications (3 citation statements)
References 48 publications
“…Note that, within this process, labels were not necessary; thus, the DNS approach may operate completely unsupervised. In [22], a unique video-summarization technique called VISCOM was introduced, based on color co-occurrence matrices computed from the video, which were then utilized to characterize each video frame. A summary was then created from the most informative frames of the original video.…”
Section: Related Work
confidence: 99%
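As a rough illustration of the color co-occurrence descriptor mentioned in the statement above, the sketch below quantizes each frame's colors and counts co-occurring color pairs at one spatial offset. This is not the VISCOM implementation: the 8-bin-per-channel quantization and the single (0, 1) offset are assumptions chosen for brevity.

# Minimal sketch of a color co-occurrence matrix for one video frame.
# Not the VISCOM implementation: the 8-bin quantization and the single
# (0, 1) offset are illustrative assumptions.
import numpy as np

def color_cooccurrence(frame_rgb, levels=8, offset=(0, 1)):
    """Co-occurrence matrix of quantized colors for one spatial offset."""
    # Quantize each RGB channel to `levels` bins and fold into one color index.
    q = (frame_rgb.astype(np.int64) * levels) // 256            # H x W x 3, values 0..levels-1
    idx = (q[..., 0] * levels + q[..., 1]) * levels + q[..., 2]
    n = levels ** 3
    dy, dx = offset                                              # assumed non-negative
    a = idx[: idx.shape[0] - dy, : idx.shape[1] - dx].ravel()    # reference pixels
    b = idx[dy:, dx:].ravel()                                    # neighbors at the offset
    m = np.zeros((n, n), dtype=np.float64)
    np.add.at(m, (a, b), 1.0)                                    # count color pairs
    return m / m.sum()                                           # normalize to a distribution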
“…To address these problems, shot boundary detection [10], [11], [12] and metadata-based approaches, such as audio signals [13], [14] and transcripts [15], [16], have been proposed to summarize lecture videos. Subudhi et al. [10] compared frame histograms to detect shot transitions and calculated an edge function based on three contrast features to estimate content and non-content frames.…”
Section: Related Work
confidence: 99%
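The histogram-comparison idea attributed to Subudhi et al. above can be sketched as follows: a shot transition is flagged whenever the distance between consecutive frame histograms exceeds a threshold. The 16-bin histograms and the 0.4 threshold are hypothetical values, and the edge-function step for content/non-content frames is not shown.

# Sketch of histogram-based shot-transition detection (illustrative only;
# the 16-bin histograms and the 0.4 threshold are assumptions).
import numpy as np

def frame_histogram(frame_rgb, bins=16):
    # Per-channel histograms concatenated into one normalized descriptor.
    h = [np.histogram(frame_rgb[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    h = np.concatenate(h).astype(np.float64)
    return h / h.sum()

def detect_transitions(frames, threshold=0.4):
    hists = [frame_histogram(f) for f in frames]
    cuts = []
    for i in range(1, len(hists)):
        d = 0.5 * np.abs(hists[i] - hists[i - 1]).sum()  # L1 distance, lies in [0, 1]
        if d > threshold:
            cuts.append(i)                               # frame i starts a new shot
    return cuts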
“…Mohanta et al. [11] utilized local and global features to detect and classify shot boundaries, and a multilayer perceptron network was then applied to label frames as no change, abrupt change, or gradual change. Cirne et al. [12] described a video frame by means of color co-occurrence matrices, and a normalized sum of squared differences was then used to detect shot boundaries. He et al. [14] exploited spoken text, pitch, and pause information in the audio signal to show how these change under various speaking conditions.…”
Section: Related Work
confidence: 99%
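For the Cirne et al. step quoted above, a shot boundary can be declared when consecutive frame descriptors (for example, flattened color co-occurrence matrices) differ by a normalized sum of squared differences above a threshold. The normalization below follows the common template-matching form; the exact formulation in the cited paper may differ, and the 0.2 threshold is a hypothetical value.

# Sketch of shot-boundary detection with a normalized sum of squared
# differences (NSSD) between consecutive frame descriptors. The
# normalization and the 0.2 threshold are assumptions, not the paper's.
import numpy as np

def nssd(a, b):
    # Normalized sum of squared differences between two descriptors.
    num = np.sum((a - b) ** 2)
    den = np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))
    return num / den if den > 0 else 0.0

def shot_boundaries(descriptors, threshold=0.2):
    # A boundary is declared wherever consecutive descriptors differ strongly.
    return [i for i in range(1, len(descriptors))
            if nssd(descriptors[i - 1].ravel(), descriptors[i].ravel()) > threshold]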