Proceedings of the ACM Symposium on Applied Perception 2016
DOI: 10.1145/2931002.2931011

User, metric, and computational evaluation of foveated rendering methods

Abstract: Figure 1: Left: Our foveated resolution method running on a commercial video game engine. Right: Our foveated resolution, ambient occlusion, tessellation, and ray-casting (respectively) methods. Areas outwith the circles are the peripheral regions rendered in lower detail.

Cited by 69 publications (32 citation statements). References 19 publications.
“…Sebastian et al. [2015] employ a similar generic model to predict their data for complex images, while Bradley et al. [2014] additionally consider local luminance adaptation to account for near eccentricity (up to 10°). The closest to our efforts is the work by Swafford et al. [2016] which extends the advanced visible difference predictor HDR-VDP2 [Mantiuk et al. 2011b] to handle arbitrary eccentricities by employing a cortex magnification factor to suppress the original CSF. The authors attempt to train their metric based on data obtained for three applications of foveated rendering, but they cannot find a single set of parameters that would fit the metric prediction to the data.…”
Section: Perceptual Background (mentioning)
confidence: 99%
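The statement above describes suppressing a foveal contrast sensitivity function (CSF) with a cortical magnification factor so that predicted sensitivity falls off with eccentricity. A minimal Python sketch of that general idea follows; the magnification constants, the base CSF shape, and the function names are illustrative assumptions, not the parameters fitted by Swafford et al. [2016] or used in HDR-VDP-2.

import numpy as np

def cortical_magnification(ecc_deg, M0=7.99, e2=3.67):
    # Assumed falloff model M(e) = M0 / (1 + e/e2); M0 and e2 are
    # placeholder values, not the constants used by the cited metric.
    return M0 / (1.0 + ecc_deg / e2)

def base_csf(freq_cpd):
    # Placeholder foveal CSF (Mannos-Sakrison band-pass shape); HDR-VDP-2
    # uses a more detailed, luminance-dependent CSF instead.
    return 2.6 * (0.0192 + 0.114 * freq_cpd) * np.exp(-(0.114 * freq_cpd) ** 1.1)

def foveated_csf(freq_cpd, ecc_deg):
    # Suppress the foveal CSF by the relative cortical magnification,
    # mirroring the mechanism summarised in the citation statement.
    scale = cortical_magnification(ecc_deg) / cortical_magnification(0.0)
    return base_csf(freq_cpd) * scale

# Example: sensitivity to a 4 cpd grating at 0, 10, and 30 degrees eccentricity.
print([foveated_csf(4.0, e) for e in (0.0, 10.0, 30.0)])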
“…Computational depth-of-field effects partially compensate for the lack of proper eye accommodation in standard displays [Mantiuk et al. 2011a; Mauderer et al. 2014], while for displays with accommodative cues, proper alignment of multi-focal images can be achieved [Mercier et al. 2017] or laser beams can be guided by pupil tracking [Jang et al. 2017]. The computation performance may be improved by reducing the level of detail [Duchowski et al. 2009; Reddy 2001], or spatial image resolution [Guenter et al. 2012; Patney et al. 2016; Stengel et al. 2016b; Swafford et al. 2016; Vaidyanathan et al. 2014] towards the periphery, which is particularly relevant for this work.…”
Section: Foveated Rendering (mentioning)
confidence: 99%
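The reduction of spatial resolution towards the periphery mentioned in this statement can be illustrated with a simple eccentricity-to-resolution mapping. The sketch below is a stand-in example only; the inner/outer breakpoints and minimum scale are assumed values, not settings taken from any of the cited foveated renderers.

import numpy as np

def eccentricity_deg(px, py, gaze_px, ppd):
    # Angular distance of each pixel from the gaze point, given a
    # pixels-per-degree conversion for the display.
    return np.hypot(px - gaze_px[0], py - gaze_px[1]) / ppd

def resolution_scale(ecc_deg, inner_deg=5.0, outer_deg=30.0, min_scale=0.25):
    # Full resolution inside inner_deg, falling linearly to min_scale at
    # outer_deg; the breakpoints are illustrative, not tuned perceptual values.
    t = np.clip((ecc_deg - inner_deg) / (outer_deg - inner_deg), 0.0, 1.0)
    return 1.0 - t * (1.0 - min_scale)

# Example: per-pixel resolution scale for a 1080p frame with gaze at the centre.
h, w, ppd = 1080, 1920, 30.0
ys, xs = np.mgrid[0:h, 0:w]
scale = resolution_scale(eccentricity_deg(xs, ys, (w / 2, h / 2), ppd))
print(scale.min(), scale.max())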
“…Similarly, conventional image quality metrics [Wang and Bovik 2006] rely on the assumption that the image resolution is uniform, thus they are not directly applicable to foveated rendering. Researchers proposed foveated image quality metrics, based on single [Floren and Bovik 2014] or multiple salient image features [Swafford et al. 2016]. These metrics appear to be applicable to virtual reality content.…”
Section: Visual Complexity and Quality of GCDs (mentioning)
confidence: 99%
“…Foveation-based content Adaptive Structural Similarity Index (FA-SSIM) [Rimac-Drlje et al. 2011] first weighs SSIM by a CSF that depends on frequency, eccentricity, and retinal velocity, then averages these weighted coefficients. Swafford et al. [2016] extends HDR-VDP2 [Mantiuk et al. 2011] with the eccentricity-dependent CSF and a cortical magnification term.…”
Section: Image and Video Error Metrics (mentioning)
confidence: 99%
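The statement above summarises FA-SSIM as a CSF-weighted pooling of local SSIM coefficients. The sketch below shows that weighting scheme in outline only; the stand-in CSF weight, the decay constants, and the input maps are assumptions for illustration and do not reproduce the calibrated spatio-velocity CSF of Rimac-Drlje et al. [2011].

import numpy as np

def csf_weight(freq_cpd, ecc_deg, velocity_dps):
    # Stand-in weight that decays with spatial frequency, eccentricity,
    # and retinal velocity; the real FA-SSIM uses a calibrated CSF model.
    spatial = np.exp(-0.114 * freq_cpd)
    return spatial * np.exp(-ecc_deg / 15.0) * np.exp(-velocity_dps / 20.0)

def fa_ssim_like(ssim_map, freq_map, ecc_map, vel_map):
    # Weight each local SSIM coefficient by the CSF, then pool with a
    # weighted average, mirroring the description in the citation.
    w = csf_weight(freq_map, ecc_map, vel_map)
    return np.sum(w * ssim_map) / np.sum(w)

# Example with random stand-in maps for a 64x64 grid of local SSIM values.
rng = np.random.default_rng(0)
shape = (64, 64)
score = fa_ssim_like(rng.uniform(0.8, 1.0, shape),
                     np.full(shape, 4.0),
                     rng.uniform(0.0, 40.0, shape),
                     np.full(shape, 2.0))
print(score)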