2017 IEEE International Conference on Computer Vision (ICCV) 2017
DOI: 10.1109/iccv.2017.513
Understanding Low- and High-Level Contributions to Fixation Prediction

Cited by 260 publications (220 citation statements)
References 34 publications
“…However, it should be noted that another class of model based on learning within deep neural networks (DNNs) has recently been advanced as a competitor to traditional saliency models (Vig, Dorr, & Cox, 2014). For example, DeepGaze II, the current top performer in this class, learns where people attend in scenes from training sets of fixations over object features and then predicts fixations on new scenes (Kummerer, Wallis, Gatys, & Bethge, 2017).…”
Section: Limitations and Future Directions
confidence: 99%
“…However, while these models work relatively well for impoverished stimuli, human gaze behaviour towards richer scenes can be predicted at least as well by the locations of objects 16 and perceived meaning 9 . When semantic object properties are taken into account, their weight for gaze prediction far exceeds that of low-level attributes 8,17 . A common thread of low- and high-level salience models is that they interpret salience as a property of the image and treat inter-individual differences as unpredictable 7,18 , often using them as a 'noise ceiling' for model evaluations 18 .…”
Section: Introduction
confidence: 99%
“…Rather than simply remapping the color histogram or normalizing an image for nearby luminance, automatic mechanisms are thought to depend on factors such as co-linearity, co-planarity, junctions, feature grouping, and transparency issues such as smoke and rain (Zucker et al 1988; Anderson 1997; Adelson 2000). Biologically driven models are increasingly capable of explaining visual illusions (Blakeslee and McCourt 2004; Li 2011) and predicting gaze patterns based on saliency (Borji 2018; Kummerer et al 2017), but they are data-limited to SDR images and require HDR experimentation to extend their generalizability to real-world vision.…”
Section: Fig 1 A) Two Examples of HDR Luminance in Naturalistic Scen…
confidence: 99%