2018
DOI: 10.1007/978-3-030-01270-0_47

Saliency Benchmarking Made Easy: Separating Models, Maps and Metrics

Abstract: Dozens of new models on fixation prediction are published every year and compared on open benchmarks such as MIT300 and LSUN. However, progress in the field can be difficult to judge because models are compared using a variety of inconsistent metrics. Here we show that no single saliency map can perform well under all metrics. Instead, we propose a principled approach to solve the benchmarking problem by separating the notions of saliency models, maps and metrics. Inspired by Bayesian decision theory, we defin…
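
The separation the abstract describes can be made concrete: a probabilistic model predicts a single fixation density, and a different saliency map is then derived from that density for each metric. The sketch below is illustrative only, assuming a model that outputs a log fixation density over the image; the function name and the per-metric transformations are assumptions loosely following the paper's idea, not its exact derivations.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def metric_specific_maps(log_density, baseline_log_density, blur_sigma=10.0):
    """Hypothetical helper: derive one saliency map per metric from a
    single predicted log fixation density (illustrative transforms only)."""
    density = np.exp(log_density - log_density.max())
    density /= density.sum()

    return {
        # Rank-based metrics such as AUC depend only on the ordering of
        # saliency values, so the density itself (or any monotone
        # transform of it) can serve as the map.
        "AUC": density,
        # Shuffled AUC penalizes predicting the center bias, so the
        # density is divided by a baseline (center-bias) density first.
        "sAUC": density / np.exp(baseline_log_density),
        # Correlation-based metrics compare against an empirical fixation
        # map that is typically Gaussian-blurred, so the prediction is
        # blurred to match.
        "CC": gaussian_filter(density, sigma=blur_sigma),
        # Information gain is evaluated on (log-)probabilities directly.
        "IG": log_density,
    }
```

Even in this toy version the paper's point is visible: the four maps differ, so no single map can be optimal under all four metrics at once.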

Cited by 84 publications (79 citation statements)
References 51 publications
Citation types: 0 supporting, 79 mentioning, 0 contrasting

“…Active research is ongoing to understand the pros and cons of the saliency measures (e.g. [50]). Many of the current saliency methods compete closely with one another at the top of the existing benchmarks and performances vary in a narrow band (See Figures 3 & 4).…”
Section: Discussion and Outlook (mentioning)
confidence: 99%
“…Once a model detects the main salient regions in an image, it is necessary to validate its performance over ground-truth data. There are several metrics commonly used in this field and standardized so different models can be compared, although consistent results cannot always be obtained [34]. Depending on the application and the kind of data used for validation, some metrics can be more appropriate than others.…”
Section: Methods (mentioning)
confidence: 99%
“…There are several metrics commonly used in this field and standardized so different models can be compared, although consistent results cannot always be obtained [34]. Depending on the application and the kind of data used for validation, some metrics can be more appropriate than others. We decided to use the following three metrics for our experiment:…”
Section: Validation (mentioning)
confidence: 99%
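
The excerpt above does not name the three metrics the citing authors chose, so none are assumed here. As a generic illustration of validating a saliency map against ground-truth fixation data, below is a minimal sketch of two widely used metrics, NSS and CC; the array shapes and the boolean fixation mask are assumptions.

```python
import numpy as np

def nss(saliency_map, fixation_mask):
    """Normalized Scanpath Saliency: mean z-scored saliency value at
    fixated pixels. `fixation_mask` is a boolean array of the same
    shape as `saliency_map` marking fixation locations."""
    z = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return z[fixation_mask].mean()

def cc(saliency_map, empirical_map):
    """Pearson linear correlation between the predicted saliency map
    and an empirical (typically Gaussian-blurred) fixation map."""
    return np.corrcoef(saliency_map.ravel(), empirical_map.ravel())[0, 1]
```

NSS rewards high z-scored values exactly at fixated pixels, while CC compares against a blurred empirical map; this difference in what each metric measures is one reason the same model can rank differently across metrics, which is the inconsistency the cited benchmark paper addresses.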
“…neural and behavioral models (e.g. [19,8,9,12]). Since 2010 Matthias Bethge has been the director of the Bernstein center and since 2016 vice chair of the national Bernstein network.…”
Section: • Design and Implement Submission and Evaluation Backend • Adv… (mentioning)
confidence: 99%