2012
DOI: 10.1007/s12559-012-9146-3

Improving Visual Saliency by Adding ‘Face Feature Map’ and ‘Center Bias’

Cited by 54 publications (43 citation statements) · References 45 publications
“…This bias is currently introduced in computational modeling through ad hoc methods. In (Le Meur et al., 2006; Marat et al., 2013), the saliency map is simply multiplied by a 2D anisotropic…”
Section: Discussion (mentioning)
confidence: 99%
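To make that multiplicative centre bias concrete, here is a minimal NumPy sketch. It assumes the truncated "2D anisotropic…" refers to an anisotropic Gaussian centred on the frame, and the sigma values (expressed as fractions of the image size) are illustrative placeholders, not the parameters fitted in Le Meur et al. (2006) or Marat et al. (2013).

```python
import numpy as np

def anisotropic_center_prior(height, width, sigma_x=0.38, sigma_y=0.42):
    """Axis-aligned anisotropic Gaussian centred on the image.
    sigma_x / sigma_y are fractions of image width / height and are
    made-up values for illustration only."""
    ys, xs = np.mgrid[0:height, 0:width]
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    gx = ((xs - cx) / (sigma_x * width)) ** 2
    gy = ((ys - cy) / (sigma_y * height)) ** 2
    return np.exp(-0.5 * (gx + gy))

def apply_center_bias(saliency_map):
    """Pointwise multiplication of a saliency map by the centre prior,
    followed by renormalisation to [0, 1]."""
    prior = anisotropic_center_prior(*saliency_map.shape)
    biased = saliency_map * prior
    return biased / (biased.max() + 1e-12)
```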
“…Nevertheless, the model does not include any static cues such as color, for example. Description: The STVSM model [10] is inspired by the biology of the visual system and breaks down each frame of a video into three maps: a static saliency map emphasizes regions that differ from their context in terms of luminance, orientation, and spatial frequency. A dynamic saliency map emphasizes moving regions, with values proportional to motion amplitude.…”
Section: SMAMS: Saliency Models for Abnormal Motion Selection (2011) (mentioning)
confidence: 99%
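One way to read "values proportional to motion amplitude" is as a per-pixel motion-magnitude map. The sketch below uses OpenCV's Farneback optical flow purely as a stand-in estimator; it is not the motion processing actually used by STVSM [10].

```python
import cv2
import numpy as np

def dynamic_saliency(prev_frame, next_frame):
    """Crude dynamic saliency map: per-pixel optical-flow magnitude, so
    values grow with motion amplitude. Farneback flow is an illustrative
    stand-in, not STVSM's own motion estimator."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)       # motion amplitude per pixel
    return magnitude / (magnitude.max() + 1e-12)   # normalise to [0, 1]
```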
“…GBVS [3], NMPT [4], SSOV [5], SDSR [6], VICO [7], SMVQA [8], SMAMS [9], STVSM [10] (Fig. 10.1: chronological overview of saliency models for videos). Indeed, camera motion has a great impact on saliency estimation, and models need to be specifically designed to manage the temporal aspect.…”
mentioning
confidence: 99%
“…In a similar vein, object knowledge can be used to tune early salience top-down. In particular, when dealing with faces within the scene, a face detection step can provide a reliable cue to complement early conspicuity maps, as has been shown by Cerf et al. [21], deCroon et al. [27], and Marat et al. [65], or a useful prior for Bayesian integration with low-level cues [13]. This is indeed an important issue, since faces may drive attention in a direct fashion [20].…”
Section: Levels of Representation and Control (mentioning)
confidence: 99%
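As a rough sketch of that idea (a face map fused with low-level conspicuity), the code below uses an off-the-shelf Haar cascade and a made-up mixing weight; it illustrates the combination scheme only and is not the detector or weighting used by Cerf et al. [21], Marat et al. [65], or the paper under discussion.

```python
import cv2
import numpy as np

# Illustrative only: a Haar cascade stands in for whatever face detector a
# given model actually uses.
_face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_feature_map(frame_bgr):
    """Face map: ones inside detected face boxes, smoothed so the cue
    spreads slightly beyond the boxes."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    face_map = np.zeros(gray.shape, dtype=np.float32)
    for (x, y, w, h) in _face_detector.detectMultiScale(gray, 1.1, 5):
        face_map[y:y + h, x:x + w] = 1.0
    return cv2.GaussianBlur(face_map, (0, 0), 15)   # sigma = 15 px, arbitrary

def combine_with_saliency(bottom_up, face_map, face_weight=0.5):
    """Straightforward weighted combination of the low-level conspicuity map
    and the face map; the weight is a placeholder, not a fitted value."""
    fused = (1.0 - face_weight) * bottom_up + face_weight * face_map
    return fused / (fused.max() + 1e-12)
```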
“…In this paper we are not much concerned with the neurobiological underpinnings of computational theories but, interestingly enough, the approach of fusing object-based information with low-level salience, either through straightforward combination [21,65] or in the formal framework of Bayesian modelling [13,24,15], provides a computational account of the way the lateral intraparietal area (LIP) of posterior parietal cortex acts as a priority map to guide the allocation of covert attention and eye movements (overt attention). The LIP is a cortical area located at the interface between visual input and oculomotor output, and it is well known that LIP activity is biased by both bottom-up stimulus-driven factors and top-down cognitive influences.…”
Section: Levels of Representation and Control (mentioning)
confidence: 99%
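A schematic reading of such Bayesian integration is a pixel-wise product of a likelihood map (bottom-up saliency) and a prior map (e.g. a face or centre prior), renormalised to a distribution. The toy function below shows only that reading and is not the specific model cited as [13] in the quoted passage.

```python
import numpy as np

def bayesian_fusion(likelihood_map, prior_map):
    """Toy pixel-wise Bayesian combination: treat the bottom-up saliency map
    as a likelihood P(features | location fixated) and the object/centre map
    as a prior P(location fixated), then form the posterior up to a
    normalising constant."""
    posterior = likelihood_map * prior_map
    return posterior / (posterior.sum() + 1e-12)   # normalise to a distribution
```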