2007
DOI: 10.1145/1239451.1239553
Computational time-lapse video

Cited by 8 publications (12 citation statements)
References 0 publications
“…Existing methods can be classified into two categories, i.e., dynamic representation and static representation. Dynamic representation generates a video sequence composed of a series of sub-clips extracted from one or multiple video sequences or generated from a collection of photos [11], [25], [31], whereas static representation generally generates one or multiple images from video key-frames to facilitate not only their viewing but also their transmission and storage [4], [6], [8], [20], [29], [36]. Although video summarization can reduce the cost of video browsing, there is a risk of missing details, and a possibly inaccurate summarization may also cause inconvenience in browsing.…”
Section: Video Summarization
confidence: 99%
“…The produced transition is more natural than traditional transitions such as fade-in, fade-out, wipes, and dissolves. Since both image-level and sequence-level matching for video pairs are available, we can accomplish a content-based continuous transition. The proposed content-based transition produces a virtually consistent link for the final composition.…”
Section: An Overview of the Scheme
confidence: 99%
“…Existing methods can be classified into two categories, i.e., dynamic representation and static representation. Dynamic representation generates a video sequence composed of a series of sub-clips extracted from one or multiple video sequences or generated from a collection of photos [10], [22], [30], whereas static representation generally generates one or multiple images from video key-frames to facilitate not only viewing but also transmission and storage [4], [7], [8], [17], [27], [35]. In comparison with conventional video abstraction methods, which mainly aim at reducing redundancy in video streams, our work focuses on a better transformation from movies to comics rather than on redundancy reduction.…”
Section: Related Work
confidence: 99%
“…The users are asked to freely browse the comics generated by the different paradigms. Then they are asked to make a comparison. When comparing two paradigms, each user can make a "better", "much better", or "comparable" choice.…”
Section: Comprehensive Comparison of Different Paradigms
confidence: 99%
“…The factorization allows a user to easily relight the scene, recover a portion of the scene geometry, and perform advanced image editing operations. Bennett and McMillan [13] proposed a non-uniform sampling algorithm that optimizes the summarization of the input video-rate footage to match the desired duration and visual output characteristics. In this paper, we implement a real-time exposure control system that benefits from the high computational efficiency of the GPU.…”
Section: Video Post-processing
confidence: 99%
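
The last statement describes Bennett and McMillan's non-uniform sampling only at a high level. As a rough illustration of the general idea (not the paper's actual metric or optimization), the sketch below selects k output frames from a longer clip with a simple dynamic program, spending more of the frame budget where the footage changes quickly. The cost metric, function names, and the synthetic clip are assumptions made for this example.

```python
import numpy as np

def frame_cost(frames, i, j):
    """Penalty for cutting directly from frame i to frame j: squared mean
    absolute pixel difference (an illustrative metric, not the paper's)."""
    d = np.mean(np.abs(frames[j].astype(np.float32) -
                       frames[i].astype(np.float32)))
    return float(d * d)  # squaring favors many small jumps over one big one

def nonuniform_sample(frames, k):
    """Select k frame indices (always including the first and last frame)
    minimizing the summed transition cost between consecutive selections.

    best[j, m] = minimal cost of a selection whose (m+1)-th and final pick
    is frame j; back[j, m] remembers the previous pick for backtracking.
    Runs in O(n^2 * k) cost evaluations -- fine for a sketch.
    """
    n = len(frames)
    assert 2 <= k <= n
    INF = float("inf")
    best = np.full((n, k), INF)
    back = np.full((n, k), -1, dtype=int)
    best[0, 0] = 0.0
    for j in range(1, n):
        for i in range(j):
            for m in range(1, k):
                if best[i, m - 1] == INF:
                    continue
                c = best[i, m - 1] + frame_cost(frames, i, j)
                if c < best[j, m]:
                    best[j, m] = c
                    back[j, m] = i
    # Walk the back-pointers from the last frame to recover the selection.
    picks, j, m = [], n - 1, k - 1
    while j >= 0 and m >= 0:
        picks.append(j)
        j, m = back[j, m], m - 1
    return picks[::-1]

if __name__ == "__main__":
    # Synthetic clip: slow brightness drift, then a rapid change.
    values = list(range(20)) + [20 + 20 * t for t in range(10)]
    video = [np.full((8, 8), v, dtype=np.float32) for v in values]
    print(nonuniform_sample(video, k=6))  # more picks land in the fast part
```

On the synthetic clip, the selected indices cluster in the second half of the footage, where the content changes rapidly, which is the qualitative behavior a time-lapse summarizer with a fixed output duration is after; the original paper's objective and solver differ in detail.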