2017
DOI: 10.1007/978-3-319-54190-7_10
Pano2Vid: Automatic Cinematography for Watching 360° Videos

Abstract: We introduce the novel task of Pano2Vid: automatic cinematography in panoramic 360° videos. Given a 360° video, the goal is to direct an imaginary camera to virtually capture natural-looking normal field-of-view (NFOV) video. By selecting "where to look" within the panorama at each time step, Pano2Vid aims to free both the videographer and the end viewer from the task of determining what to watch. Towards this goal, we first compile a dataset of 360° videos downloaded from the web, together with human-edite…

Cited by 81 publications (169 citation statements)
References 40 publications
“…Several methods have been recently developed for navigating 360° videos by finding NFoV of interest. These methods mainly leverage visual information such as saliency [47], [48], [49], [50], [51]. However, we tackle this problem from the perspective of audio sources.…”
Section: Video Applications (mentioning)
Confidence: 99%
“…After that, when the buffer is not empty, it is very important to predict the FoV accurately [41]–[43]. Existing FoV estimation methods can be roughly classified into three categories, i.e., data-driven approaches [44], probability-model-based approaches [35], [45], and motion-saliency-detection-based approaches [46]. Although data-driven approaches and motion-saliency-detection-based approaches achieve good performance, the viewport movement depends only on the subjective will of a user, and it can never be predicted accurately.…”
Section: B. FoV Switching and Tile Priority Model (mentioning)
Confidence: 99%
“…The Pano2Vid work by Su et al. [2016] performs a related, but different task of automatically producing a narrow field-of-view video from a 360° video. A recent follow-up work extended this to optimize for zoom as well [Su and Grauman 2017].…”
Section: Related Work (mentioning)
Confidence: 99%
“…Our method can also be easily integrated with automatically generated constraints. Automatic saliency methods for 360° video (e.g., [Su et al. 2016]) determine the most visually salient regions, which can be directly interpreted as positive constraints. Alternately, as we are estimating 3D translation direction in Section 3.2, we can use this to keep the camera pointed roughly in the forward motion direction, which provides a comfortable "first-person" viewing experience.…”
Section: Source of Constraints (mentioning)
Confidence: 99%