2011
DOI: 10.1007/978-3-642-21735-7_3

A Hierarchical Generative Model of Recurrent Object-Based Attention in the Visual Cortex

Abstract: In line with recent work exploring Deep Boltzmann Machines (DBMs) as models of cortical processing, we demonstrate the potential of DBMs as models of object-based attention, combining generative principles with attentional ones. We show: (1) how inference in DBMs can be related qualitatively to theories of attentional recurrent processing in the visual cortex; (2) that deepness and topographic receptive fields are important for realizing the attentional state; (3) how more explicit attentional suppression…
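The recurrent processing referred to in point (1) of the abstract arises because inference in a DBM passes messages both bottom-up and top-down until the hidden layers settle. Below is a minimal sketch of mean-field inference in a two-hidden-layer DBM; the layer sizes, random weights, and omitted bias terms are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes and random weights (assumptions, not the paper's model).
n_v, n_h1, n_h2 = 400, 100, 50            # e.g. a 20x20 binary image as input
W1 = rng.normal(0.0, 0.01, (n_v, n_h1))   # couples visible layer and hidden layer 1
W2 = rng.normal(0.0, 0.01, (n_h1, n_h2))  # couples hidden layers 1 and 2
# Bias terms are omitted for brevity.

def mean_field(v, n_steps=20):
    """Mean-field inference in a two-hidden-layer DBM: each hidden layer
    repeatedly combines bottom-up and top-down input until it settles."""
    h1 = np.full(n_h1, 0.5)
    h2 = np.full(n_h2, 0.5)
    for _ in range(n_steps):
        h1 = sigmoid(v @ W1 + h2 @ W2.T)  # bottom-up from v, top-down from h2
        h2 = sigmoid(h1 @ W2)             # top layer: bottom-up input only
    return h1, h2

v = rng.integers(0, 2, n_v).astype(float)  # a random stand-in "image"
h1, h2 = mean_field(v)
print(h1.shape, h2.shape)  # (100,) (50,)
```

The top-down term `h2 @ W2.T` is what makes the settling recurrent: higher-layer state feeds back into lower layers on every sweep, which is the qualitative analogue of recurrent cortical processing that the abstract invokes.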

Cited by 7 publications (5 citation statements)
References 9 publications
“…Shapes. We use the simple Shapes dataset [24] to examine the basic properties of our system. It consists of 60,000 (train) + 10,000 (test) binary images of size 20×20.…”
Section: Experiments and Results (mentioning, confidence: 99%)
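For concreteness, here is a hypothetical generator producing Shapes-style data matching the quoted dimensions (60,000 train + 10,000 test binary 20×20 images). The single shape type and its size range are assumptions; the actual shape types and generation rules of the dataset in [24] may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_square(size=20):
    """One binary image with a filled square at a random position; a
    hypothetical stand-in for a Shapes-style example."""
    img = np.zeros((size, size), dtype=np.uint8)
    s = int(rng.integers(4, 9))         # side length of the square
    r = int(rng.integers(0, size - s))  # top-left row
    c = int(rng.integers(0, size - s))  # top-left column
    img[r:r + s, c:c + s] = 1
    return img

train = np.stack([random_square() for _ in range(60_000)])
test = np.stack([random_square() for _ in range(10_000)])
print(train.shape, test.shape)  # (60000, 20, 20) (10000, 20, 20)
```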
“…Image captioning is a technique that attempts to "translate" images into text using machine learning methods [2,3]. Although research on the image captioning problem goes back only about ten years, image captioning models have undergone many major changes.…”
Section: Image Caption (mentioning, confidence: 99%)
“…Larochelle and Hinton [8] proposed Boltzmann Machines that choose where to look next to find the locations of the most informative intra-class objects, even if they are far apart in the image. Reichert et al. [14] proposed a hierarchical model showing that certain aspects of attention can be modeled by Deep Boltzmann Machines. Attention-based mechanisms were also proposed for generative models.…”
Section: Self-Attention Mechanism (mentioning, confidence: 99%)