2015 · DOI: 10.1016/j.visres.2015.04.007
Towards the quantitative evaluation of visual attention models

Abstract: Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuri…

Cited by 53 publications (54 citation statements) · References 147 publications
“…Current eye-tracking experiments represent indicators of saliency as the probability of fixation on certain regions of an image. Metrics used in saliency benchmarks [40] weight all fixations during viewing time equally, leaving it unclear which computational procedures perform best on real image datasets. Previous psychophysical studies [16,17] revealed that fixations guided by bottom-up attention are influenced by the type of features that appear in the scene and their relative feature contrast.…”
Section: Model Evaluation (mentioning)
confidence: 99%
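The quoted concern — that benchmark metrics weight every fixation equally, regardless of when it occurred — can be illustrated with a minimal sketch of one such metric, Normalized Scanpath Saliency (NSS). The function name, array shapes, and toy data below are illustrative assumptions, not taken from the cited benchmark.

```python
import numpy as np

def nss(saliency_map, fixation_points):
    """Normalized Scanpath Saliency: mean of the z-scored saliency map
    at fixated locations. Note that every fixation contributes with the
    same weight, whether it was the first or the last during viewing."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    rows, cols = zip(*fixation_points)
    return s[rows, cols].mean()

# Toy example: a 3x3 map with a single bright spot at (1, 1)
smap = np.zeros((3, 3))
smap[1, 1] = 1.0
fixations = [(1, 1), (0, 0)]  # one on-target, one off-target fixation
print(round(nss(smap, fixations), 3))  # → 1.237
```

A time-weighted variant would replace the plain mean with, e.g., a decay over fixation order — exactly the kind of modification the quoted critique motivates.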
“…Therefore, modeling of attention should also consider the influences of task and many other top-down effects. Initial hypotheses by Li [24,25] suggested that visual saliency is computed through the lateral interactions of V1 cells. In their work, pyramidal cells and interneurons in the primary visual cortex (V1, Brodmann Area 17 or striate cortex) and their horizontal intracortical connections are seen to modulate activity in V1.…”
Section: Introduction (mentioning)
confidence: 99%
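The mechanism described in this quote — lateral interactions suppressing units with similar neighbours so that locally distinct features "pop out" — can be caricatured in a few lines. This is a deliberately crude 1-D sketch of iso-feature suppression, not a reimplementation of Li's V1 model; the inhibition strength `k` and wrap-around neighbourhood are arbitrary assumptions.

```python
import numpy as np

def lateral_inhibition(response, k=0.5):
    """Toy 1-D lateral interaction: each unit is suppressed in
    proportion to the mean activity of its two neighbours (with
    wrap-around), so a unit whose response differs from its surround
    retains relatively more activity after inhibition."""
    neighbours = (np.roll(response, 1) + np.roll(response, -1)) / 2
    return np.clip(response - k * neighbours, 0, None)

# Homogeneous texture vs. a single odd-one-out element
uniform = np.ones(7)
popout = np.ones(7)
popout[3] = 2.0
print(lateral_inhibition(uniform))  # uniformly suppressed
print(lateral_inhibition(popout))   # the odd element stands out
```

In the homogeneous case every unit is suppressed by the same amount; in the pop-out case the distinct element both receives weaker relative suppression and inhibits its neighbours more strongly, mirroring the saliency-from-contrast idea in the quote.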
“…This is why, in order to compare the saliency models within each category, some characteristics have been added to the descriptive sheets. However, as explained in [44], the diversity of models makes taxonomy and comparison in the field of visual attention particularly difficult. The purpose of this section is to give readers a global view of each model's characteristics.…”
Section: Conclusion: A Taxonomy of the Algorithms (mentioning)
confidence: 99%
“…This architecture has been the main inspiration for current saliency models [62,43], which alternatively use distinct mechanisms (accounting for different levels of processing, context, or tuning depending on the scene) while preserving the same or a similar structure for these steps. Although current state-of-the-art models closely reproduce eye-tracking fixation data [6,9], we question whether these models represent saliency. We will test this hypothesis with a novel synthetic image dataset.…”
Section: Introduction (mentioning)
confidence: 98%
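The synthetic-image approach mentioned in this last quote — stimuli where the salient location is known by construction rather than inferred from fixations — can be sketched with a simple pop-out generator. All parameters below (grid size, intensity values, function name) are hypothetical; the actual dataset used by the citing work is richer.

```python
import numpy as np

def singleton_image(size=64, n=4, target_val=255, distractor_val=128):
    """Generate a toy pop-out stimulus: an n-by-n grid of distractor
    squares with one randomly placed brighter target square. Returns
    the image and the target's grid cell, which serves as ground-truth
    saliency instead of eye-tracking fixations."""
    img = np.zeros((size, size), dtype=np.uint8)
    cell = size // n
    ti, tj = np.random.randint(n), np.random.randint(n)
    for i in range(n):
        for j in range(n):
            val = target_val if (i, j) == (ti, tj) else distractor_val
            y, x = i * cell, j * cell
            img[y + cell // 4 : y + 3 * cell // 4,
                x + cell // 4 : x + 3 * cell // 4] = val
    return img, (ti, tj)

img, target = singleton_image()
```

Because the target cell is known exactly, a model's saliency map can be scored against it directly, sidestepping the fixation-weighting ambiguities raised in the earlier quote.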