RARE2012: A multi-scale rarity-based saliency detection with its comparative statistical analysis
Year: 2013
DOI: 10.1016/j.image.2013.03.009

Cited by 167 publications (125 citation statements)
References 28 publications
“…Rather than considering a uniform saliency map as input of (Le Meur & Liu, 2015)'s model, as we did in the previous section, we use a saliency map which is the average of the saliency maps computed by two well-known saliency models, namely (Harel et al, 2006) and (Riche et al, 2013). Combining the (Harel et al, 2006) and (Riche et al, 2013) models (called Top2(R+H) in Table 2) significantly increases the performance, compared to the best performing saliency model, i.e.…”
Section: Bottom-up Salience and Viewing Biases for Predicting Visual … (mentioning)
confidence: 99%
“…Combining the (Harel et al, 2006) and (Riche et al, 2013) models (called Top2(R+H) in Table 2) significantly increases the performance, compared to the best performing saliency model, i.e. (Riche et al, 2013)'s model (see (Le Meur & Liu, 2014) for more details on saliency aggregation).…”
Section: Bottom-up Salience and Viewing Biases for Predicting Visual … (mentioning)
confidence: 99%
“…To investigate this point, we select 8 state-of-the-art models (GBVS [3], Judd [14], RARE2012 [15], AWS [5], Le Meur [4], Bruce [7], Hou [8] and Itti [6]) and aggregate their saliency maps into a unique one. The following subsections present the tested aggregation methods.…”
Section: Context and Problem (mentioning)
confidence: 99%
“…Different algorithms are used to train the best way to combine saliency maps.…”
[Figure panels: (c) Itti [6], (d) Le Meur [4], (e) GBVS [3], (f) Hou [8], (g) Bruce [7], (h) Judd [14], (i) AWS [5], (j) RARE2012 [15]]
Section: Context and Problem (mentioning)
confidence: 99%
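The citing paper trains combination rules over the eight models listed above but the exact scheme is not given in these excerpts. As one illustrative possibility (an assumption, not the paper's method), the sketch below fits per-model weights by least squares against a ground-truth fixation density and fuses the maps linearly.

```python
import numpy as np

def fit_linear_weights(model_maps, fixation_map):
    """Least-squares weights for a linear combination of saliency maps
    (one plausible aggregation scheme among those the citing paper could test)."""
    X = np.stack([m.ravel() for m in model_maps], axis=1)  # pixels x models
    y = fixation_map.ravel()
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def aggregate(model_maps, weights):
    """Weighted sum of the individual saliency maps."""
    return np.tensordot(weights, np.stack(model_maps), axes=1)

# Hypothetical maps from the eight models (GBVS, Judd, RARE2012, AWS, ...)
maps = [np.random.rand(120, 160) for _ in range(8)]
ground_truth = np.random.rand(120, 160)  # placeholder fixation density map
w = fit_linear_weights(maps, ground_truth)
fused = aggregate(maps, w)
```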