2008
DOI: 10.1109/tip.2008.917218

GAFFE: A Gaze-Attentive Fixation Finding Engine

Abstract: The ability to automatically detect visually interesting regions in images has many practical applications, especially in the design of active machine vision and automatic visual surveillance systems. Analysis of the statistics of image features at observers' gaze can provide insights into the mechanisms of fixation selection in humans. Using a foveated analysis framework, we studied the statistics of four low-level local image features: luminance, contrast, and bandpass outputs of both luminance and contrast,…
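The feature analysis the abstract describes can be summarized with a short sketch. The following is a minimal illustration, not the authors' code: it builds the four low-level feature maps (luminance, local contrast, and bandpass outputs of both) and samples them at fixation locations. The Gaussian scales and the difference-of-Gaussians bandpass are illustrative assumptions, not parameters taken from the paper.

```python
# Minimal sketch (assumptions noted above, not the authors' code) of sampling
# four low-level feature maps at observers' fixation points.
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_feature_stats(image, fixations):
    """image: 2-D grayscale array; fixations: iterable of (row, col) pixels."""
    lum = image.astype(float)
    # local RMS contrast: std / mean over a Gaussian neighborhood
    mu = gaussian_filter(lum, sigma=4.0)
    var = np.clip(gaussian_filter(lum ** 2, sigma=4.0) - mu ** 2, 0.0, None)
    con = np.sqrt(var) / np.clip(mu, 1e-6, None)
    # bandpass outputs of both maps via a difference of Gaussians (illustrative)
    bp_lum = gaussian_filter(lum, 1.0) - gaussian_filter(lum, 4.0)
    bp_con = gaussian_filter(con, 1.0) - gaussian_filter(con, 4.0)
    rows, cols = zip(*fixations)
    idx = (np.asarray(rows), np.asarray(cols))
    # one row of the four feature values per fixation
    return np.column_stack([m[idx] for m in (lum, con, bp_lum, bp_con)])
```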

Cited by 159 publications (105 citation statements) · References 40 publications
“…We will consider three of these techniques: The most widely used is an algorithm developed by Levenshtein (1966), another is the widely used attentional map algorithm (AMAP; Ouerhani, von Wartburg, Hügli, & Müri, 2004; Rajashekar, Van der Linde, Bovik, & Cormack, 2008), and the third is a relatively recent vector-based algorithm developed by Jarodzka, Holmqvist, and Nyström (2010). Each of these algorithms compares two scanpaths and produces a number that indicates how similar they are to each other.…”
Section: Comparing Three Scanpath-Comparison Algorithms
confidence: 99%
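Of the three techniques named in the excerpt, the string-edit approach is the simplest to illustrate. The sketch below is a minimal version, not any cited paper's implementation: fixations are quantized to grid-cell labels and the two label sequences are compared with Levenshtein edit distance. The 5×5 grid and the length normalization are assumptions made for illustration.

```python
# Minimal sketch of string-edit (Levenshtein, 1966) scanpath comparison.
def to_string(fixations, width, height, n=5):
    """Map (x, y) fixations onto an n-by-n grid of cell labels."""
    return [int(y * n / height) * n + int(x * n / width)
            for x, y in fixations]

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def scanpath_similarity(f1, f2, width, height):
    """1.0 for identical cell sequences, approaching 0.0 as they diverge."""
    s1, s2 = to_string(f1, width, height), to_string(f2, width, height)
    return 1.0 - levenshtein(s1, s2) / max(len(s1), len(s2), 1)
```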
“…When attempting to uncover exploration strategies that unfold over time, the loss of temporal information poses a serious problem. More recent AMAP comparison algorithms (e.g., Ouerhani et al., 2004; Rajashekar et al., 2008) create attention “landscapes” by accumulating fixed-width Gaussians over fixation points. It is generally accepted that the longer a fixation time on a particular item, the deeper the visual processing of that item (Just & Carpenter, 1976).…”
Section: Attention Map (AMAP) Scanpath Comparison
confidence: 99%
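The AMAP construction quoted above is straightforward to sketch. The code below is a minimal illustration under stated assumptions (the Gaussian width and the Pearson-correlation score are illustrative), not the exact procedure of the cited papers: each fixation contributes one fixed-width Gaussian to an attention landscape, and two scanpaths are scored by correlating their landscapes.

```python
# Minimal sketch of an AMAP-style attention "landscape" comparison.
import numpy as np

def attention_map(fixations, shape, sigma=25.0):
    """Sum one fixed-width Gaussian per (row, col) fixation into a map."""
    rows, cols = np.indices(shape)
    amap = np.zeros(shape, dtype=float)
    for r, c in fixations:
        amap += np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
    return amap

def amap_similarity(f1, f2, shape):
    """Pearson correlation between the two attention landscapes."""
    a, b = attention_map(f1, shape), attention_map(f2, shape)
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
```

Weighting each Gaussian by fixation duration would be a natural extension in the spirit of Just and Carpenter (1976), although, as the excerpt notes, the accumulated map still discards the temporal order of fixations.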
“…Indeed, our decomposition model can explain all variances discovered in seminal studies of human visual search (Noton, 1971; Treisman and Gormican, 1988). Others have already implemented architectures to mimic such search behavior (Itti et al., 1998; Privitera and Stark, 2000; Rajashekar et al., 2008); however, those models merely extract straight contour orientations and can therefore explain only a small number of those findings. In contrast, the presented model explains a much larger number of findings, such as the computation of precise contour curvature, contour angle, and the aperture of an arc (Treisman and Gormican, 1988, figures 5, 10, and 11, respectively).…”
Section: Further Comparison to Other Approaches
confidence: 92%
“…In the bottom-up approach, a computational model for detecting visual attention regions is constructed from low-level features of the visual signal. In this study, we employed two bottom-up attention models: the saliency model [4] and the GAFFE model [6].…”
Section: Visual Attention Models
confidence: 99%
“…However, eye-tracking is also time-consuming and cannot be performed in real-time applications. Thus, some researchers have tried to detect attention regions and find eye fixations in a field of view using computable, automatic approaches based on low-level visual features [4], [6].…”
Section: Introduction
confidence: 99%