2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2017.7952432
Summarization of human activity videos via low-rank approximation

Cited by 9 publications (5 citation statements) | References 20 publications
“…However, the problems of dictionary-of-representatives approaches are especially pronounced when summarizing activity videos, due to their characteristic properties: static camera, static background, heavy inter-frame visual redundancy and lack of editing cuts. This paper, which integrates and extends preliminary work [30], [31], introduces a framework for activity video summarization that attempts to overcome the above issues. Its contributions are four-fold.…”
Section: Introduction (mentioning)
confidence: 87%
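The quoted passage refers to summarizing an activity video with a small "dictionary" of representative frames obtained through a low-rank approximation of the frame data. The sketch below is not the cited paper's algorithm; it is only a generic illustration, under the assumption that each frame is described by a feature vector, of how key-frames can be chosen from the leading subspace of the frame-descriptor matrix via leverage scores of a truncated SVD. The function name and the toy data are hypothetical.

```python
import numpy as np

def select_keyframes_low_rank(frame_features, num_keyframes):
    """Illustrative sketch: pick representative frames via a rank-k approximation.

    frame_features : (num_frames, feature_dim) array, one descriptor per frame.
    num_keyframes  : number of key-frames (k) to keep.

    The truncated SVD gives the best rank-k approximation of the (centered)
    frame matrix; frames with the largest leverage scores over the leading
    left singular vectors are kept as the "dictionary of representatives".
    """
    # Center the descriptors so the dominant subspace captures variation, not the mean.
    X = frame_features - frame_features.mean(axis=0, keepdims=True)
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    leverage = np.sum(U[:, :num_keyframes] ** 2, axis=1)
    # Return the k frame indices with the highest leverage scores, in temporal order.
    return np.sort(np.argsort(leverage)[-num_keyframes:])

# Toy usage (hypothetical data): 200 frames with 64-D descriptors, keep 5 key-frames.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))
print(select_keyframes_low_rank(features, 5))
```

Leverage-score column selection is just one of several standard ways to extract representatives from a low-rank model; the published framework may differ substantially in both the frame representation and the selection criterion.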
“…During empirical evaluation of this work, the special characteristics of activity videos were exploited so as to avoid the subjectivity outlined above. Temporal video segmentation ground-truth annotation data, describing obvious temporal boundaries between consecutive activity video segments, were employed for evaluating the proposed methods as objectively as possible, similarly to [30]. Given a summary s of an input video D, the number I_s of extracted key-frames derived from actually different activity segments (hereafter called independent key-frames) is used as an indirect indication of summarization success.…”
Section: Evaluation Metric (mentioning)
confidence: 99%
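The metric quoted above (the number I_s of independent key-frames) amounts to counting how many distinct ground-truth activity segments are covered by the extracted key-frames. The following minimal sketch reproduces that count under the assumption that segment boundaries are given as sorted start-frame indices; the function name and data layout are illustrative, not part of the cited work.

```python
import bisect

def count_independent_keyframes(keyframe_indices, segment_start_frames):
    """Count key-frames that fall in distinct ground-truth activity segments (I_s).

    keyframe_indices     : frame indices of the extracted summary key-frames.
    segment_start_frames : sorted frame indices at which each activity segment
                           starts (the first segment is assumed to start at frame 0).
    """
    covered_segments = set()
    for idx in keyframe_indices:
        # bisect_right returns the id of the segment containing frame `idx`.
        covered_segments.add(bisect.bisect_right(segment_start_frames, idx))
    return len(covered_segments)

# Toy example: 3 segments starting at frames 0, 100 and 250;
# two key-frames land in the first segment and one in the third, so I_s = 2.
print(count_independent_keyframes([10, 40, 300], [0, 100, 250]))  # -> 2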
“…For example, unexpected/unplanned events might occur during live coverage (or even expected events at unknown time-instances), including, e.g., leader break-away, crashes, falls and accidents. A number of these events may be automatically detected visually, by on-the-fly activity recognition [37], [38], [39], [40], [41], [42] or activity video summarization [43], [44], [45], [46], [47], [48], [49] systems. Alternatively, they can be manually annotated by a director who oversees the coverage.…”
Section: Opportunistic Shooting Exploiting Multiple-UAV Cinematography (mentioning)
confidence: 99%
“…In the broad research area of dance summarization, algorithms focusing on extracting key frames of human actions can also be considered. More specifically, the works of [45] and [46] introduce a classification framework for retrieving representative human actions, while the work of [47] proposes a hierarchical union of sub-spaces for human activity abstraction under a semi-supervised framework. In addition, the work of [48] proposes Histograms of Grassmannian Points for classifying multidimensional time-evolving data in dynamic scenes.…”
Section: Introduction (mentioning)
confidence: 99%