2018
DOI: 10.1152/jn.00059.2018
Dissociable signatures of visual salience and behavioral relevance across attentional priority maps in human cortex

Abstract: Computational models posit that visual attention is guided by activity within spatial maps that index the image-computable salience and the behavioral relevance of objects in the scene. These spatial maps are theorized to be instantiated as activation patterns across a series of retinotopic visual regions in occipital, parietal, and frontal cortex. Whereas previous research has identified sensitivity to either the behavioral relevance or the image-computable salience of different scene elements, the simultaneo…
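The framework summarized in the abstract treats attentional priority as a combination of bottom-up, image-computable salience and top-down behavioral relevance. The sketch below is a minimal, purely illustrative rendering of that idea in NumPy, not the encoding-model analysis used in the paper; the contrast-based salience proxy, the weighting parameter w_sal, and all variable names are assumptions made for this example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # used for a crude local-contrast salience proxy


def salience_map(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Crude image-computable salience: local deviation from a smoothed background."""
    smoothed = gaussian_filter(image, sigma)
    sal = np.abs(image - smoothed)
    return sal / (sal.max() + 1e-12)


def priority_map(image: np.ndarray, relevance: np.ndarray, w_sal: float = 0.5) -> np.ndarray:
    """Weighted sum of bottom-up salience and top-down relevance (both scaled to [0, 1])."""
    return w_sal * salience_map(image) + (1.0 - w_sal) * relevance


# Toy example: a high-contrast (salient) patch and a task-relevant location elsewhere.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.1, size=(64, 64))
img[10:14, 10:14] += 1.0                          # salient item
rel = np.zeros((64, 64))
rel[40:44, 40:44] = 1.0                           # cued / behaviorally relevant location

prio = priority_map(img, rel, w_sal=0.3)
attended = np.unravel_index(np.argmax(prio), prio.shape)
print("peak of priority map at", attended)
```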

Cited by 55 publications (78 citation statements)
References: 90 publications
“…[41] The neural instantiation of a priority map (composed of the representations that guide where and when the eyes move) is focused on a network of regions that include the lateral intraparietal area (area LIP) [42,43], frontal eye fields (FEFs) [44], and superior colliculus (SC) [45,46], all of which exhibit prioritized representations of visual space and activity that is crucial for the guidance and control of eye movements [47-50]. A complementary network of regions that includes the dorsolateral prefrontal cortex (DLPFC), anterior cingulate cortex (ACC), and supplementary eye field (SEF) is thought to be involved in the cognitive control of saccades [51-54], providing additional goal-directed inputs to the FEF and SC.…”
Section: Models of Oculomotor Control (citation type: mentioning)
confidence: 99%
“…Perhaps most importantly, many studies using IEMs seek to compare channel response profiles, or basis-weighted 'image' reconstructions, across task conditions or timepoints in a trial. As described by Sprague et al. (2018), these studies employ a fixed encoding model, such that activation patterns from different conditions are transformed into the same modeled information space, using a single common estimated encoding model (and often that encoding model is estimated using data from a completely different training task, e.g., Sprague et al., 2014, 2016, 2018b).…”
Section: Differences Between Conditions Are Preserved Across Linear Transforms (citation type: mentioning)
confidence: 99%
“…As described by Sprague et al. (2018), these studies employ a fixed encoding model, such that activation patterns from different conditions are transformed into the same modeled information space, using a single common estimated encoding model (and often that encoding model is estimated using data from a completely different training task, e.g., Sprague et al., 2014, 2016, 2018b). In this case, the criticisms raised by Liu et al. (2018) and Gardner & Liu (2019) do not apply: any arbitrary linear transforms would be applied equivalently to the results from each condition; and differences between conditions would be transformed from participant- and stimulus-specific measurement space into the same model-based 'information' space.…”
Section: Differences Between Conditions Are Preserved Across Linear Transforms (citation type: mentioning)
confidence: 99%
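The statements above hinge on the "fixed encoding model" procedure they attribute to Sprague et al. (2018): channel weights are estimated once, often from an independent training task, and that single estimated model is then inverted for every condition, so any linear transform is applied identically across conditions. Below is a minimal NumPy sketch of that two-step recipe under assumed data shapes and a simple least-squares fit with pseudoinverse-based inversion; the function names, basis set, and simulated data are illustrative assumptions, not the authors' code.

```python
import numpy as np


def fit_encoding_model(train_data: np.ndarray, C_train: np.ndarray) -> np.ndarray:
    """Estimate channel weights W (n_voxels x n_channels) by least squares.

    train_data: n_trials x n_voxels response patterns from an independent training task.
    C_train:    n_trials x n_channels modeled channel responses for the training stimuli.
    """
    X, *_ = np.linalg.lstsq(C_train, train_data, rcond=None)  # n_channels x n_voxels
    return X.T


def invert_fixed_model(W: np.ndarray, data: np.ndarray) -> np.ndarray:
    """Reconstruct channel responses for new data using the SAME fixed W.

    Every condition passes through this one common (pseudo)inverse, so condition
    differences survive the shared linear transform into modeled channel space.
    """
    return data @ np.linalg.pinv(W).T  # n_trials x n_channels


# Illustrative use: one fixed model, two task conditions.
rng = np.random.default_rng(1)
n_vox, n_chan = 100, 8
C_train = rng.random((200, n_chan))                               # training-task channel responses
W_true = rng.normal(size=(n_vox, n_chan))                         # simulated "true" weights
train_data = C_train @ W_true.T + rng.normal(scale=0.1, size=(200, n_vox))

W_hat = fit_encoding_model(train_data, C_train)
cond_A = rng.normal(size=(50, n_vox))                             # stand-ins for condition data
cond_B = rng.normal(size=(50, n_vox))
chan_A = invert_fixed_model(W_hat, cond_A)
chan_B = invert_fixed_model(W_hat, cond_B)
print(chan_A.shape, chan_B.shape)                                 # both live in the same channel space
```

The key design point, consistent with the quoted statements, is that W_hat is estimated only once and reused: comparisons between chan_A and chan_B are made within a single modeled information space rather than with condition-specific models.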