Proceedings of the 4th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa 2006
DOI: 10.1145/1108590.1108595
A GPU based saliency map for high-fidelity selective rendering

Abstract: The computation of high-fidelity images in real time remains one of the key challenges for computer graphics. Recent work has shown that, by understanding the human visual system, selective rendering may be used to render only those parts of a scene to which the human viewer is attending at high quality, and the rest of the scene at a much lower quality. This can result in a significant reduction in computational time, without the viewer being aware of the quality difference. Selective rendering is guided by models of the …
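The selective-rendering idea in the abstract can be illustrated with a small CPU sketch (an assumed simplification in NumPy, not the paper's GPU shader implementation): a centre-surround luminance-contrast map stands in for the saliency model, and per-pixel sample counts are then allocated in proportion to saliency. The function names `box_blur`, `saliency_map`, and `samples_per_pixel` are hypothetical, chosen for this sketch.

```python
# Hedged sketch of saliency-guided selective rendering (assumed
# simplification; the paper computes its saliency map on the GPU
# with more feature channels than luminance alone).
import numpy as np

def box_blur(img, radius):
    """Box blur (a cheap stand-in for the Gaussian pyramids used
    in Itti-style saliency models)."""
    h, w = img.shape
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

def saliency_map(luminance, fine=1, coarse=4):
    """Centre-surround contrast: fine scale minus coarse scale,
    normalised to [0, 1]."""
    s = np.abs(box_blur(luminance, fine) - box_blur(luminance, coarse))
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

def samples_per_pixel(sal, low=1, high=16):
    """Selective rendering budget: salient (attended) pixels get more
    samples; the rest of the scene is rendered at lower quality."""
    return np.rint(low + sal * (high - low)).astype(int)
```

A renderer would then trace `samples_per_pixel(saliency_map(lum))[y, x]` rays at each pixel, concentrating effort where the viewer is predicted to attend.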

Cited by 81 publications (78 citation statements)
References 27 publications (36 reference statements)
“…For instance, [29] extracts view-dependent ridge-valley feature lines to form object illustration. On the other hand, saliency detection can be generalised to identify important 3D scene parts by processing model-space motion and depth information as well as image-space colour and luminance information [25]. A further application is real-time tracking of visually attended 3D scene models, where [24] performed this based on both image features (colour and luminance) and 3D model features (depth, model size, and model motion).…”
Section: Visual Saliency
confidence: 99%
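The snippet above describes combining image-space features (colour, luminance) with model-space features (depth, size, motion) into a single importance map. A minimal sketch of such a fusion step, assuming each feature has already been rasterised to a per-pixel map (the names `normalise` and `fuse_saliency` are hypothetical, not from the cited work):

```python
# Hedged sketch: fuse image-space and model-space feature maps into one
# saliency map by normalising each channel and taking a weighted sum.
import numpy as np

def normalise(m):
    """Rescale a feature map to [0, 1]."""
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fuse_saliency(channels, weights=None):
    """channels: dict of name -> 2-D feature map (e.g. 'colour',
    'luminance', 'depth', 'motion'). Equal weights by default."""
    names = list(channels)
    if weights is None:
        weights = {k: 1.0 / len(names) for k in names}
    out = np.zeros(channels[names[0]].shape, dtype=float)
    for name in names:
        out += weights[name] * normalise(channels[name].astype(float))
    return out
```

Because each channel is normalised and the default weights sum to one, the fused map stays in [0, 1] and can feed directly into a sample-allocation step.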
“…For example, Yarbus [10] has shown that the way people look at pictures strongly depends on the task they have to achieve. Furthermore, the top-down component is subject to the habituation phenomenon [11], i.e. objects become familiar over time, and we become oblivious to them.…”
Section: Related Work
confidence: 99%
“…objects become familiar over time, and we become oblivious to them. Several models have been proposed to simulate the multiple top-down components using task-map [8], habituation [11], memory [12] as well as spatio-temporal contexts [6].…”
Section: Related Work
confidence: 99%
“…Perceptual phenomena and the selective-rendering work that uses them:
- Angular sensitivity (James, 1890; Humphreys and Bruce, 1989): used in Yee et al., 2001; Cater et al., 2003; Mastoropoulou et al., 2005a; Mastoropoulou, 2006; Chalmers et al., 2006; Longhurst et al., 2006.
- Inattentional blindness (Rock et al., 1992; Mack and Rock, 1998; Simons and Chabris, 1999): used in Cater et al., 2003; Mastoropoulou et al., 2005a; Mastoropoulou, 2006.
- Modality appropriateness hypothesis (Howard and Templeton, 1966; Welch and Warren, 1980): used in Mastoropoulou and Chalmers, 2004; Mastoropoulou, 2006; Hulusic et al., 2009, 2010a.
- Auditory driving effect (Gebhard and Mowbray, 1959; Shipley, 1964; Wada et al., 2003; Recanzone, 2003): used in Mastoropoulou and Chalmers, 2004; Mastoropoulou, 2006; Hulusic et al., 2009, 2010a.
- Temporal ventriloquism (Morein-Zamir et al., 2003; Bertelson and Aschersleben, 2003; Aschersleben and Bertelson, 2003; Burr et al., 2009): used in Hulusic et al., 2009, 2010a.
- Illusory flash induced by sound (Shams et al., 2000, 2002).…”
Section: Phenomenon
confidence: 99%