2006 International Conference on Image Processing
DOI: 10.1109/icip.2006.312489

A Rarity-Based Visual Attention Map - Application to Texture Description

Abstract: This paper describes a simple, "pre-cortical" visual attention model that does not take image directions into account. We compute rarity-based saliency maps and then describe the relation between texture and visual attention. Finally, we decompose the image into several textures with different regularities. Our purpose is to compress textures in images using small repeating patterns.
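The rarity principle behind the abstract can be illustrated with a minimal sketch (our own assumption of a simple variant, not the authors' exact model): each pixel is scored by the self-information of its globally quantized intensity, so values that occur rarely over the whole image stand out.

```python
import numpy as np

def rarity_saliency(gray, n_bins=16):
    """Saliency from the global occurrence rarity of quantized intensities."""
    gray = np.asarray(gray, dtype=np.float64)
    # Quantize intensities into n_bins global bins (assumes a 0..255 range).
    bins = np.clip((gray / 256.0 * n_bins).astype(int), 0, n_bins - 1)
    # Global probability of each bin over the whole image.
    prob = np.bincount(bins.ravel(), minlength=n_bins) / bins.size
    # Self-information: the rarer the bin, the more salient the pixel.
    saliency = -np.log2(prob[bins] + 1e-12)
    saliency -= saliency.min()
    return saliency / (saliency.max() + 1e-12)

# A small bright patch on a dark background is globally rare, hence salient.
img = np.zeros((64, 64)); img[28:36, 28:36] = 255.0
print(rarity_saliency(img)[32, 32])  # close to 1.0
```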

Cited by 29 publications (29 citation statements)
References 8 publications (8 reference statements)
“…Over the past decade, many different algorithms have been proposed to compute visual saliency maps from digital imagery [17, 28–47]. These algorithms typically transform a given input image into a scalar-valued map in which local signal intensity corresponds to local image saliency [33, 45]. In an extensive comparative evaluation study [48] we recently established that the maximum value over the target support computed by Achanta's Frequency Tuned Saliency model [45] is currently the best saliency-based predictor of human visual search and detection performance in complex realistic scenarios.…”
Section: Frequency-tuned Saliency (mentioning)
confidence: 99%
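For context, the Frequency Tuned Saliency model singled out above is commonly summarized as the Euclidean distance, in Lab colour space, between each slightly blurred pixel and the mean image colour. The sketch below follows that summary; the OpenCV calls, the 5×5 kernel, and the max-over-mask usage note are our own illustrative assumptions, not code from the cited study.

```python
import cv2
import numpy as np

def frequency_tuned_saliency(bgr):
    """Scalar-valued saliency map: distance of each (blurred) pixel colour
    from the global mean colour in Lab space."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab).astype(np.float64)
    mean_lab = lab.reshape(-1, 3).mean(axis=0)   # global mean colour
    blurred = cv2.GaussianBlur(lab, (5, 5), 0)   # suppress fine texture/noise
    saliency = np.linalg.norm(blurred - mean_lab, axis=2)
    return cv2.normalize(saliency, None, 0.0, 1.0, cv2.NORM_MINMAX)

# The quoted study uses the maximum over the target support as a predictor;
# with a hypothetical boolean target mask that would read:
#   score = frequency_tuned_saliency(img)[mask].max()
```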
“…As we already stated in [9] and [14], a feature does not attract attention by itself: bright or dark, locally contrasted or not, red or blue can equally attract human attention depending on their context. In the same way, motion can be as interesting as the lack of motion depending on the context.…”
Section: A Rarity-Based Approach (mentioning)
confidence: 82%
“…All those words are actually synonyms and they all amount to searching for some unusual features in a given context, which is here a spatial one. The proposed method is an evolution of [7,8,9] and also uses the local contrast and global rarity measure. In the next section, the RARE algorithm is described and a comparison with state-of-the-art methods is proposed.…”
Section: Introduction (mentioning)
confidence: 99%
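The last quoted passage names two ingredients, local contrast and global rarity. As a loose illustration only (the published RARE algorithm is considerably more elaborate than this), the two cues could be computed and fused as follows; every function name and parameter here is our own assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(gray, size=9):
    """Local standard deviation as a simple contrast cue."""
    mean = uniform_filter(gray, size)
    mean_sq = uniform_filter(gray ** 2, size)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

def global_rarity(gray, n_bins=16):
    """Self-information of each pixel's intensity bin over the whole image
    (assumes a 0..255 intensity range)."""
    bins = np.clip((gray / 256.0 * n_bins).astype(int), 0, n_bins - 1)
    prob = np.bincount(bins.ravel(), minlength=n_bins) / bins.size
    return -np.log2(prob[bins] + 1e-12)

def contrast_and_rarity_saliency(gray):
    gray = np.asarray(gray, dtype=np.float64)
    c = local_contrast(gray)
    r = global_rarity(gray)
    # Normalise each cue and fuse multiplicatively, so that only regions that
    # are both locally contrasted and globally rare remain salient.
    c /= c.max() + 1e-12
    r /= r.max() + 1e-12
    return c * r
```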