2018
DOI: 10.1101/339200
Preprint

Simultaneous representation of sensory and mnemonic information in human visual cortex

Acknowledgments: We thank Aaron Jacobson at the UCSD Center for Functional Magnetic Resonance Imaging (CFMRI) for assistance with multi-band imaging protocols. We also thank Ruben van Bergen for assistance setting up an FSL/FreeSurfer retinotopy pipeline, Ahana Chakraborty for collecting the behavioral data, Vy Vo for discussions on statistical analyses, and Stephanie Nelli for feedback on the manuscript.

Cited by 2 publications (7 citation statements) | References 33 publications
“…The importance of 2-D space in VWM is consistent with the clear maplike organization of 2-D spatial position across the cortical surface, which should result in less neural competition and more distinct representations as items are spaced farther apart (Engel, Glover, & Wandell, 1997; Grill-Spector & Malach, 2004; Maunsell & Newsome, 1987; Sereno et al., 1995; Sereno, Pitzalis, & Martinez, 2001; Talbot & Marshall, 1941). This general idea is consistent with a sensory-recruitment account, which proposes that early sensory cortex supports the maintenance of sensory information in working memory (D'Esposito & Postle, 2015; Emrich, Riggall, Larocque, & Postle, 2013; Harrison & Tong, 2009; Pasternak & Greenlee, 2005; Rademaker, Chunharas, & Serences, 2018; Serences, 2016; Serences, Ester, Vogel, & Awh, 2009; Sreenivasan, Curtis, & D'Esposito, 2014). Thus, overlap or competition between representations in retinotopic maps may impose limits on how well visual information is encoded and remembered (Emrich et al., 2013; Sprague, Ester, & Serences, 2014).…”
Section: Introduction (supporting)
confidence: 82%
“…Here we demonstrate, within a single spatial working memory paradigm, that task requirements are a critical determinant of how and where WM is implemented in the brain. These data provide a partial unifying explanation for divergent prior findings that implicate different regions in visual WM (Bettencourt & Xu, 2015; Ester, Rademaker, & Sprague, 2016; Iamshchinina et al., 2021; Rademaker et al., 2019; Xu, 2018, 2020). More importantly, however, our data show that WM flexibly engages different cortical areas and coding formats, even in the context of a task that is commonly used to study a single underlying construct (i.e., visuo-spatial WM).…”
Section: Discussion (supporting)
confidence: 54%
“…This was done to ensure that any condition-specific changes in the mean BOLD responses did not contribute to differences in classification accuracy (see Methods: Analysis: Spatial Position Decoding). We then sorted the continuous angular positions into 8 non-overlapping bins and used a decoding scheme with four binary classifiers, where each binary classifier was independently trained to discriminate between spatial positions that fell into bins separated by 180° (see Figure 2A and Rademaker et al., 2019). The final decoding accuracy for each task condition reflects the average decoding accuracy across all four of these binary classifiers, where chance is 50% (see Methods, Analysis: Spatial Position Decoding).…”
Section: Results (mentioning)
confidence: 99%
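To make the binning-and-pairing logic described in that statement concrete, here is a minimal sketch of an 8-bin, four-classifier decoding scheme. It assumes a trials-by-voxels BOLD matrix and a vector of continuous angular positions; the variable names, the use of scikit-learn, and the choice of a linear SVM are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def decode_spatial_position(bold_data, angles_deg, n_bins=8, cv=5):
    """Average cross-validated accuracy over four binary classifiers.

    bold_data  : (n_trials, n_voxels) array of BOLD response patterns
    angles_deg : continuous angular position (degrees) for each trial
    """
    # Sort continuous angles into 8 non-overlapping 45-degree bins.
    bins = ((np.asarray(angles_deg) % 360) // (360 / n_bins)).astype(int)
    accuracies = []
    # One binary classifier per pair of bins separated by 180 degrees
    # (bin k vs. bin k + n_bins/2), trained only on trials in that pair.
    for k in range(n_bins // 2):
        pair = (bins == k) | (bins == k + n_bins // 2)
        X, y = bold_data[pair], (bins[pair] == k).astype(int)
        clf = LinearSVC()  # any linear classifier fits the description
        accuracies.append(cross_val_score(clf, X, y, cv=cv).mean())
    # Final decoding accuracy = mean over the four classifiers; chance = 50%.
    return float(np.mean(accuracies))
```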