1990
DOI: 10.1162/jocn.1990.2.3.213

Simulating Visual Attention

Abstract: Selective visual attention serializes the processing of stimulus data to make efficient use of limited processing resources in the human visual system. This paper describes a connectionist network that exhibits a variety of attentional phenomena reported by Treisman, Wolford, Duncan, and others. As demonstrated in several simulations, a hierarchical, multiscale network that uses feature arrays with strong lateral inhibitory connections provides responses in agreement with a number of prominent behaviors associ…
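The "feature arrays with strong lateral inhibitory connections" mentioned in the abstract can be illustrated with a minimal numpy sketch of competitive sharpening. This is an assumption-laden toy, not the paper's actual network: the array size, gain values, and iteration count here are invented for illustration.

```python
import numpy as np

def lateral_inhibition(feature_map, n_iters=20, excite=1.1, inhibit=0.05):
    """Iteratively sharpen a feature array: each unit is mildly self-excited
    and inhibited by the summed activity of all other units, so the strongest
    location comes to dominate (a soft winner-take-all)."""
    a = feature_map.astype(float).copy()
    for _ in range(n_iters):
        total = a.sum()
        a = excite * a - inhibit * (total - a)  # inhibition from all other units
        a = np.clip(a, 0.0, None)               # activities stay non-negative
    return a

# Toy 5x5 feature array with one strong and one weaker active location.
fmap = np.zeros((5, 5))
fmap[1, 1], fmap[3, 4] = 0.9, 0.6
print(np.round(lateral_inhibition(fmap), 3))  # only (1, 1) stays active
```

With these (arbitrary) gains, the weaker location is driven below zero and clipped out after roughly twenty iterations, leaving a single attended peak.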

Cited by 49 publications (13 citation statements). References 46 publications.
“…Recently, Sandon (1990) developed a neural network, based on a connectionist model, to simulate selective visual attention. The model is hierarchical, as it contains multiple processing layers that enable features to be extracted, focused, and then extracted again. Grossberg's (1988) framework is among the first neural-network computational frameworks to incorporate attentional considerations.…”
Section: Simulations of Visual Attention (mentioning, confidence: 99%)
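The extract-focus-extract-again structure described in this statement can be sketched as a small pipeline. This is an illustrative composition rather than Sandon's actual architecture; the filter bank, gating window, and random input image are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_features(image, sigmas=(1.0, 2.0, 4.0)):
    """Crude multiscale 'feature arrays': high-pass residuals at several scales."""
    return [image - gaussian_filter(image, s) for s in sigmas]

def focus(feature_maps, window=8):
    """Return a binary gating mask (the attentional 'spotlight') centred on the
    location with the largest summed feature energy."""
    energy = np.sum([np.abs(f) for f in feature_maps], axis=0)
    cy, cx = np.unravel_index(np.argmax(energy), energy.shape)
    mask = np.zeros_like(energy)
    mask[max(cy - window, 0):cy + window, max(cx - window, 0):cx + window] = 1.0
    return mask

img = np.random.rand(64, 64)
maps = extract_features(img)             # extract
mask = focus(maps)                       # focus
refined = extract_features(img * mask)   # extract again, only inside the focus
```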
“…However, those systems served as a source of inspiration for practical solutions on real images once machine learning techniques such as neural networks had grown mature enough. In [206], image processing operators were combined with the attentive models to make them applicable to more realistic images. He applies a Laplacian-of-Gaussian (LoG)-like operator to the feature maps to model the receptive fields and enhance the interesting events.…”
Section: Feature Detection as Part of the Pre-attentive Stage (mentioning, confidence: 99%)
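The LoG-style receptive-field modelling referred to above can be shown in a few lines of array code. This sketch uses scipy's gaussian_laplace as a stand-in for the operator actually used in [206]; the sigma value and the toy feature map are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Toy "feature map": a bright blob on a flat background stands in for a
# locally interesting event in one of the model's feature arrays.
y, x = np.mgrid[0:64, 0:64]
feature_map = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 4.0 ** 2))

# A Laplacian-of-Gaussian-like operator responds strongly where the map changes
# locally, enhancing salient events while suppressing uniform regions.
# The sign is flipped so that peaks come out positive.
saliency = -gaussian_laplace(feature_map, sigma=4.0)

print(np.unravel_index(np.argmax(saliency), saliency.shape))  # ~ (32, 32)
```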
“…In this way, scale space theory (Lindeberg, n.d.; Witkin, 1983) can be used to accelerate visual processing, generally in a coarse-to-fine approach. Several works use this multi-resolution approach (Itti et al., 1998; Sandon, 1990; 1991; Tsotsos et al., 1995) to allow vision tasks to be executed on computers. Other variants, such as the Laplacian pyramid (Burt, 1988), have also been integrated as tools for visual processing, mainly in attention tasks (Tsotsos, 1987).…”
Section: Related Work (mentioning, confidence: 99%)
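The multi-resolution and Laplacian-pyramid machinery cited here can be sketched as follows. This is a generic Burt-style construction assuming numpy/scipy, not the pipeline of any of the cited works; the level count and smoothing sigma are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(image, levels=4, sigma=1.0):
    """Coarse-to-fine multi-resolution stack: blur, then subsample by 2."""
    pyramid = [image]
    for _ in range(levels - 1):
        blurred = gaussian_filter(pyramid[-1], sigma)
        pyramid.append(blurred[::2, ::2])
    return pyramid

def laplacian_pyramid(image, levels=4, sigma=1.0):
    """Band-pass variant: each level is the difference between a Gaussian
    level and its upsampled coarser neighbour, plus a low-pass residual."""
    gauss = gaussian_pyramid(image, levels, sigma)
    laps = []
    for fine, coarse in zip(gauss[:-1], gauss[1:]):
        upsampled = zoom(coarse, 2, order=1)[:fine.shape[0], :fine.shape[1]]
        laps.append(fine - upsampled)
    laps.append(gauss[-1])  # residual low-pass level
    return laps

img = np.random.rand(64, 64)
print([lvl.shape for lvl in gaussian_pyramid(img)])  # (64,64) down to (8,8)
```

Coarse levels can then be scanned first and finer levels examined only where the coarse analysis flags something interesting, which is the acceleration the passage describes.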