2017
DOI: 10.1109/tpami.2016.2567391
Dynamic Whitening Saliency

Abstract: General dynamic scenes involve multiple rigid and flexible objects, with relative and common motion, camera induced or not. The complexity of the motion events, together with their strong spatio-temporal correlations, makes the estimation of dynamic visual saliency a major computational challenge. In this work, we propose a computational model of saliency based on the assumption that perceptually relevant information is carried by high-order statistical structures. Through whitening, we completely remove the second-o…
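The abstract describes decorrelating feature responses by whitening, so that only higher-order statistical structure remains to drive saliency. As a rough illustration of that single operation (not the authors' full model), the following NumPy sketch applies ZCA whitening to a matrix of hypothetical feature samples; the function name and the random test data are assumptions made for this example.

```python
import numpy as np

def whiten(features, eps=1e-8):
    """ZCA-whiten a (n_samples, n_features) matrix: zero mean and
    identity covariance, so only higher-order structure remains.
    Illustrative sketch only, not the paper's exact procedure."""
    X = features - features.mean(axis=0, keepdims=True)
    cov = X.T @ X / (X.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return X @ W

# Hypothetical per-pixel feature samples drawn from a video frame
rng = np.random.default_rng(0)
samples = rng.normal(size=(1000, 3))
Z = whiten(samples)
print(np.round(np.cov(Z, rowvar=False), 3))  # approximately the identity
```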

Cited by 92 publications (63 citation statements)
References: 55 publications
“…Zhu et al. [24] proposed a robust background measure to characterize the spatial layout of the image, and then proposed an optimized framework to integrate low-level cues. Leboran et al. [25] developed a dynamic adaptive whitening saliency model, which is based on high-level statistical structures. Recently, Wang et al. [26] developed a saliency model by combining 13 existing state-of-the-art saliency models.…”
Section: Introduction
confidence: 99%
“…In that respect, presenting the same stimuli with several observations for each feature contrast and distinct cueing would reveal the absolute influence of endogenous guidance. Our study could be extended by analyzing the influence of dynamic scenes on saliency modeling [Leboran et al., 2017] [Riche and Mancas, 2016b] using synthetic videos with either a static or a dynamic camera. In that direction, it would be possible to see the interaction between low-level visual features and temporally variant features.…”
Section: Future Work
confidence: 99%
“…Integration of visual saliency into CNN-based content-based image representation is a major trend nowadays. The main idea is to generate a saliency map that represents the most salient regions of the input image, without any prior assumption, based on various criteria such as color and texture contrast [17], [18]. Recent works have shown that using a salient crop of the image as input to the CNN, instead of a center/resized crop, provides better classification results [19].…”
Section: CNN Input and Saliency Maps
confidence: 99%
“…Recent works have shown that using a salient crop of the image as input to the CNN, instead of a center/resized crop, provides better classification results [19]. In this work we have employed the static version of the adaptive whitening saliency (AWS) methodology, which has shown superior performance in predicting human attention [17]. Recently, this method was applied for keyframe extraction from video shots of LC operations [20].…”
Section: CNN Input and Saliency Maps
confidence: 99%
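The two statements above describe replacing the usual center crop of a CNN input with a crop centered on the peak of a precomputed saliency map (for example, an AWS map). The sketch below contrasts the two preprocessing choices; both helper functions and the fixed-size square crop are assumptions made for illustration, not the cited papers' exact pipeline.

```python
import numpy as np

def center_crop(img, size):
    """Plain center crop of a (H, W, C) image."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def salient_crop(img, saliency, size):
    """Crop a window centered on the saliency-map maximum.
    Assumes the image is at least `size` pixels in each dimension
    and that `saliency` has the same spatial shape as `img`."""
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    h, w = img.shape[:2]
    top = int(np.clip(y - size // 2, 0, h - size))
    left = int(np.clip(x - size // 2, 0, w - size))
    return img[top:top + size, left:left + size]
```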