Proceedings 15th International Conference on Pattern Recognition. ICPR-2000
DOI: 10.1109/icpr.2000.905356
Computing visual attention from scene depth

Abstract: Visual attention is the ability to rapidly detect the inter…

Cited by 101 publications (59 citation statements)
References 4 publications
“…Jost et al [10] ran similar experiments on a much larger number of test subjects and measured the quantitative improvement of the model when chromaticity channels are added to the conventional monochrome video channels. Visual attention in 3D scenes was first considered in [11] and, recently, a visual attention model for 3D was quantitatively analyzed in the presence of various synthetic and natural scenes [12]. This paper presents a more global analysis, in which the performance of a family of visual attention models in the presence of 3D color scenes is evaluated.…”
Section: Introduction
confidence: 99%
“…This is a task-dependent use of depth information and not a bottom-up integration of depth features in the computation of saliency. Ouerhani and Hügli extend the approach of Itti et al [12] by adding a conspicuity map built directly from the depth map [20]. This approach treats depth as just another channel, alongside color and the other cues.…”
Section: Saliency Computation With Depth
confidence: 99%
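The "depth as just another channel" scheme described in the statement above can be sketched as follows. This is a minimal illustration, not code from any cited paper: the function names, the rescaling used in place of Itti's map-normalization operator, and the equal weighting of channels are all assumptions made for the example.

```python
# Sketch (illustrative only): depth treated as one more conspicuity channel,
# combined with intensity and color into a single saliency map.
import numpy as np

def normalize_map(m):
    """Rescale a conspicuity map to [0, 1]; a simple stand-in for an
    Itti-style normalization operator (assumption, not the original N(.))."""
    m = m.astype(float)
    span = m.max() - m.min()
    return (m - m.min()) / span if span > 0 else np.zeros_like(m)

def saliency_with_depth(intensity, color, depth):
    """Average the normalized maps; the depth map enters on equal footing
    with the conventional channels, as the citing paper describes."""
    maps = [normalize_map(intensity), normalize_map(color), normalize_map(depth)]
    return sum(maps) / len(maps)

# Toy inputs standing in for real feature maps.
saliency = saliency_with_depth(np.random.rand(32, 32),
                               np.random.rand(32, 32),
                               np.random.rand(32, 32))
```

With equal weights, the depth channel contributes exactly as much as color or intensity; a task-dependent scheme would instead reweight or gate the channels.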
“…Maki et al [40] took particular note of the finding that regions closer in depth are more salient [41], and proposed a model for estimating human visual attention that integrates several features obtained from binocular cameras, such as motion and disparity. Ouerhani and Hugli [42] directly incorporated a depth feature taken from a range finder into the Itti saliency-map model. Jeong et al [43] proposed a model that computes saliency maps from each image taken by binocular cameras and corrects the maps with the help of disparity information in highly salient regions.…”
Section: FIT-based Computational Models
confidence: 99%