2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)
DOI: 10.1109/icmew.2017.8026231
A simple method to obtain visual attention data in head mounted virtual reality

Abstract: Automatic prediction of salient regions in images is a well-developed topic in the field of computer vision. Yet, virtual reality omnidirectional visual content brings new challenges to this topic, due to a different representation of visual information and the additional degrees of freedom available to viewers. Having a model of visual attention is important for continuing research in this direction. In this paper we develop such a model for head direction trajectories. The method consists of three basic steps: Fir…
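The abstract describes turning head direction trajectories into a visual attention model. A common way to do this (a minimal sketch, not necessarily the authors' exact pipeline; the function name, grid resolution, and Gaussian smoothing width are all assumptions) is to accumulate yaw/pitch samples into an equirectangular histogram and smooth it:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def attention_map(yaw, pitch, width=360, height=180, sigma=5.0):
    """Accumulate head-direction samples into an equirectangular
    attention map, then smooth with a Gaussian kernel.

    yaw   : angles in degrees, range [-180, 180)
    pitch : angles in degrees, range [-90, 90)
    """
    cols = ((np.asarray(yaw) + 180.0) / 360.0 * width).astype(int) % width
    rows = ((np.asarray(pitch) + 90.0) / 180.0 * height).astype(int) % height
    hist = np.zeros((height, width))
    np.add.at(hist, (rows, cols), 1.0)                    # per-sample accumulation
    smoothed = gaussian_filter(hist, sigma, mode="wrap")  # wrap across longitude
    return smoothed / smoothed.sum()                      # normalize to a probability map
```

Note that `mode="wrap"` also wraps the latitude axis, which is not geometrically exact near the poles; a production implementation would account for the sphere-to-plane distortion of the equirectangular projection.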

Cited by 56 publications (24 citation statements) · References 17 publications
“…The dataset includes data collected from 59 users watching five 360° videos on an HMD. In [Upenik and Ebrahimi 2017] a simple approach was proposed to treat raw experimental head direction trajectories in omnidirectional content and obtain visual attention maps. The authors collected viewport data of 32 participants for 21 360° images and proposed a new method, fused saliency maps, to transform the gathered data into saliency maps.…”
Section: Related Work (mentioning; confidence: 99%)
“…While eye movement analysis has been shown to add important value to visual attention modeling in VR [15], gaze data is not always easily accessible. Thus, head movements can be considered a valuable proxy [23][21]. Studies have been presented analyzing head movements during 360° image exploration [4].…”
Section: Introduction (mentioning; confidence: 99%)
“…In the last few years, many studies have appeared collecting and analysing the navigation patterns of users watching VR content [6, 8, 10-16]. Most studies build content-dependent saliency maps as the main outcome of their analysis; these maps encode the most probable regions of the sphere attended by viewers, based on their head or eye movements [6, 10, 17-19]. Some studies also provide additional quantitative analysis based on metrics such as average angular velocity, fixation frequency, and mean exploration angles [8, 13].…”
Section: Introduction (mentioning; confidence: 99%)
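One of the quantitative metrics mentioned above, average angular velocity, can be computed from timestamped head-direction samples by converting each sample to a unit vector on the viewing sphere and measuring the great-circle displacement between consecutive samples (a hedged sketch; the function name and input format are assumptions, not taken from any of the cited studies):

```python
import numpy as np

def mean_angular_velocity(yaw, pitch, timestamps):
    """Mean great-circle angular speed (degrees/second) over a
    sequence of head-direction samples given in degrees."""
    yaw_r, pitch_r = np.radians(yaw), np.radians(pitch)
    # unit vectors on the viewing sphere
    v = np.stack([np.cos(pitch_r) * np.cos(yaw_r),
                  np.cos(pitch_r) * np.sin(yaw_r),
                  np.sin(pitch_r)], axis=1)
    # angular displacement between consecutive samples
    dots = np.clip(np.sum(v[:-1] * v[1:], axis=1), -1.0, 1.0)
    angles = np.degrees(np.arccos(dots))
    dt = np.diff(np.asarray(timestamps, dtype=float))
    return float(np.sum(angles) / np.sum(dt))
```

Using great-circle distance rather than raw yaw/pitch differences avoids overestimating velocity near the poles, where equal yaw changes correspond to smaller actual head rotations.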