2015 | DOI: 10.1109/tmm.2015.2443558
Multi-View Video Summarization Using Bipartite Matching Constrained Optimum-Path Forest Clustering


Cited by 53 publications (51 citation statements)
References 36 publications
“…To address the challenges encountered in multi-view settings, there have been some specifically designed approaches that use random walk over spatio-temporal graphs [14] and rough sets [33] to summarize multi-view videos. A recent work in [28] uses bipartite matching constrained optimum path forest clustering to solve the problem of multi-view video summarization. An online method can also be found in [43].…”
Section: Related Work (mentioning)
confidence: 99%
“…To provide an objective comparison, we compare all the approaches using three quantitative measures: Precision, Recall, and F-measure (2 × Precision × Recall / (Precision + Recall)) [14], [28]. For all these metrics, a higher value indicates better summarization quality.…”
Section: B. Performance Measures (mentioning)
confidence: 99%
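The F-measure referenced in the citation above combines Precision and Recall into a single score. A minimal sketch follows, assuming the summary and the ground truth are given as sets of frame indices; the actual evaluation protocols in [14], [28] typically match frames by visual similarity rather than exact indices, so the matching criterion here is an assumption for illustration only.

```python
# Illustrative sketch only: Precision, Recall, and F-measure for a video
# summary, treating both the summary and the ground truth as sets of frame
# indices (an assumption; real protocols often match frames by similarity).

def precision_recall_f_measure(selected, ground_truth):
    """Return (precision, recall, f_measure) for two collections of frame indices."""
    selected, ground_truth = set(selected), set(ground_truth)
    true_positives = len(selected & ground_truth)
    precision = true_positives / len(selected) if selected else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    # F-measure = 2 * Precision * Recall / (Precision + Recall)
    return precision, recall, 2 * precision * recall / (precision + recall)


# Example: a 4-frame summary, 3 frames of which appear in a 5-frame ground truth.
print(precision_recall_f_measure([10, 55, 120, 300], [10, 55, 120, 200, 250]))
# -> (0.75, 0.6, 0.666...)
```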
“…The result of the proposed method is an optimal video summary that maintains the diversity of the original video. Kuanar et al. (2015) proposed a video summarization approach using a graph-theoretic method. The steps can be summarized as follows: shot boundary detection is performed using Bag of Visual Words and global features such as color, texture, and shape to remove redundant frames.…”
Section: Introduction (mentioning)
confidence: 99%
“…The steps can be summarized as follows: shot boundary detection is performed using Bag of Visual Words and global features such as color, texture, and shape to remove redundant frames. The video summary is then constructed using a Gaussian entropy algorithm [18]. Sigari et al. (2015) proposed a fast video summarization method using on-demand feature extraction and a fuzzy inference system.…”
Section: Introduction (mentioning)
confidence: 99%
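As an illustration of the general recipe described in the citations above (and not the actual pipeline of [18], which relies on Bag of Visual Words together with color, texture, and shape features and a Gaussian entropy criterion), here is a minimal sketch under stated assumptions: per-frame descriptors are given as vectors, shot boundaries are found by thresholding the distance between consecutive descriptors, and one representative frame is kept per shot. The feature extraction, threshold, and selection rule are all placeholder assumptions.

```python
# Minimal sketch (assumptions, not the method of [18]): threshold-based shot
# boundary detection over per-frame descriptors, followed by keeping one
# representative frame per shot as a crude summary.
import numpy as np


def detect_shot_boundaries(features, threshold):
    """features: (num_frames, dim) array of per-frame descriptors.
    Returns indices at which a new shot starts (frame 0 always starts one)."""
    distances = np.linalg.norm(np.diff(features, axis=0), axis=1)
    return [0] + [i + 1 for i, d in enumerate(distances) if d > threshold]


def pick_representatives(features, boundaries):
    """Keep one keyframe per shot: the frame closest to the shot's mean descriptor."""
    edges = list(boundaries) + [len(features)]
    keyframes = []
    for start, end in zip(edges[:-1], edges[1:]):
        shot = features[start:end]
        offset = int(np.argmin(np.linalg.norm(shot - shot.mean(axis=0), axis=1)))
        keyframes.append(start + offset)
    return keyframes


# Toy usage: random descriptors standing in for real BoVW / color / texture features.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.1, (30, 8)), rng.normal(5.0, 0.1, (30, 8))])
bounds = detect_shot_boundaries(feats, threshold=2.0)
print(bounds, pick_representatives(feats, bounds))  # expect two shots: bounds == [0, 30]
```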