2007
DOI: 10.1016/s0079-6123(06)65005-x
Attention in hierarchical models of object recognition

Abstract: Object recognition and visual attention are tightly linked processes in human perception. Over the last three decades, many models have been suggested to explain these two processes and their interactions, and in some cases these models appear to contradict each other. We suggest a unifying framework for object recognition and attention and review the existing modeling literature in this context. Furthermore, we demonstrate a proof-of-concept implementation for sharing complex features between recognition and …

Cited by 57 publications (34 citation statements)
References 99 publications (136 reference statements)
“…In other words, an attention map could multiplicatively gate only features that are positioned at attended locations. This is consistent with recent findings in the computational modeling of object recognition, which have shown that attention reduces interference and increases performance in object recognition in multi-object visual images (Fazl, Grossberg, & Mingolla, 2009; Walther & Koch, 2007). It should be noted that the attention map might also serve to read out information from the inferior and the superior IPS during retrieval.…”
Section: Computation in a Cortical Microcircuit (supporting)
confidence: 91%
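The multiplicative gating this statement describes can be illustrated with a short sketch: every feature map is scaled by a spatial attention map, so only features at attended locations keep their full activation. This is a minimal sketch, not code from the cited models; the array shapes, the Gaussian attention profile, and the helper names gaussian_attention_map and gate_features are assumptions introduced here for illustration.

```python
import numpy as np

def gaussian_attention_map(h, w, cy, cx, sigma):
    """A 2-D Gaussian attention map centered on the attended location (cy, cx)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def gate_features(feature_maps, attention_map):
    """Multiplicatively gate every feature channel by the attention map,
    so activations away from the attended location are suppressed."""
    return feature_maps * attention_map[np.newaxis, :, :]

# Illustrative example: 32 feature channels on a 64x64 grid, attention at (20, 40).
features = np.random.rand(32, 64, 64)
attention = gaussian_attention_map(64, 64, cy=20, cx=40, sigma=6.0)
gated = gate_features(features, attention)
```

Because the gate is multiplicative rather than subtractive, unattended features are attenuated in proportion to their distance from the attended location rather than removed outright, which is one way such a map could reduce interference between objects.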
“…Specifically, IPS may represent a computational hub that integrates auditory input from the auditory parabelt (Pandya and Kuypers 1969; Divac et al. 1977; Hyvarinen 1982) and forms a relay station between the sensory and prefrontal cortex, which associates sensory signals with behavioral meaning (Petrides and Pandya 1984; Fritz et al. 2010). Similar computational operations have been attributed to the parietal cortex in saliency map models of visual feature search (Gottlieb et al. 1998; Itti and Koch 2001; Walther and Koch 2007; Geng and Mangun 2009). Overall, our results suggest that IPS plays an automatic, bottom-up role in auditory figure-ground processing, and call for a re-examination of the prevailing assumptions regarding the neural computations and circuits that mediate auditory scene analysis.…”
Section: Evoked Transition Responses (mentioning)
confidence: 64%
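The saliency-map models referenced here (e.g., Itti and Koch 2001) can be sketched in a few lines: compute center-surround contrast per feature channel, combine the normalized maps, and pick the most salient location by winner-take-all. The uniform-filter approximation of center-surround, the channel choices, and the function names below are simplifying assumptions for illustration, not the published model, which uses multi-scale filters over intensity, color, and orientation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def center_surround(channel, center_size=3, surround_size=15):
    """Crude center-surround contrast: local mean minus a wider local mean."""
    center = uniform_filter(channel, size=center_size)
    surround = uniform_filter(channel, size=surround_size)
    return np.abs(center - surround)

def saliency_map(channels):
    """Average min-max-normalized center-surround maps into one saliency map."""
    maps = []
    for ch in channels:
        m = center_surround(ch)
        rng = m.max() - m.min()
        maps.append((m - m.min()) / rng if rng > 0 else m)
    return np.mean(maps, axis=0)

def winner_take_all(sal):
    """Coordinates of the most salient location."""
    return np.unravel_index(np.argmax(sal), sal.shape)

channels = [np.random.rand(64, 64) for _ in range(3)]  # stand-ins for feature channels
print(winner_take_all(saliency_map(channels)))
```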
“…However, the search for parameters that incorporate high-level mechanisms into the model's gaze-attraction function remains an unsolved task (Walther & Koch, 2007). One approach to this goal is a detailed, model-based investigation of the contribution of low-level parameters to scan-path formation.…”
Section: Dynamics of Model Scan Path While Changing Input Window Stru… (mentioning)
confidence: 99%
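Scan-path formation from such low-level parameters is commonly modeled as repeated winner-take-all selection on a saliency map, with inhibition of return suppressing already-fixated locations so the gaze moves on. The sketch below follows that generic scheme; the ior_radius parameter and the disk-shaped suppression are illustrative choices, not taken from the cited work.

```python
import numpy as np

def scan_path(saliency, n_fixations=5, ior_radius=8):
    """Generate a scan path: repeatedly pick the winner-take-all location,
    then zero out a disk around it (inhibition of return)."""
    sal = saliency.copy()
    ys, xs = np.mgrid[0:sal.shape[0], 0:sal.shape[1]]
    path = []
    for _ in range(n_fixations):
        cy, cx = np.unravel_index(np.argmax(sal), sal.shape)
        path.append((cy, cx))
        sal[(ys - cy) ** 2 + (xs - cx) ** 2 <= ior_radius ** 2] = 0.0
    return path

print(scan_path(np.random.rand(64, 64)))
```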
“…The search for eye-movement parameters that allow us to estimate the visual task being solved during the current stage of dynamic image processing, and the evaluation of the contribution of the dominant components of visual attention, remain unsolved objectives in both experimental and modeling studies (Carrasco, 2011; Lupianez, Klein, & Bartolomeo, 2006; Navalpakkam & Itti, 2005; Walther & Koch, 2007; Wang & Theeuwes, 2012; Wolfe, Birnkrant, Kunar, & Horowitz, 2005; Zelinsky, 2005).…”
Section: Introduction (mentioning)
confidence: 99%