2018
DOI: 10.1523/jneurosci.1620-17.2018
Deep Neural Networks for Modeling Visual Perceptual Learning

Abstract: Understanding visual perceptual learning (VPL) has become increasingly challenging as new phenomena are discovered with novel stimuli and training paradigms. Although existing models aid our knowledge of critical aspects of VPL, the connections these models draw between behavioral learning and plasticity across different brain areas are typically superficial. Most models explain VPL as readout from simple perceptual representations to decision areas and are not easily adaptable to explain new findings…

Cited by 68 publications (77 citation statements)
References 67 publications
“…In a more complicated read-out scenario with multiple read-out units that do not share feedforward weights, no interference at all is expected, which also conflicts with our empirical data. Notably, it has recently been shown that exposing a purely feedforward convolutional neural network to orientation discrimination training can sharpen tuning curves in early layers of the model (reflecting V1- and V2-like processing; Wenliang & Seitz, 2018). The convolutional neural network has, however, not been used to investigate interference.…”
Section: Discussion
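To make "sharpened tuning curves" concrete, here is a minimal numpy sketch (not the authors' model): simulate Gaussian orientation tuning for a model unit before and after training, and compare the curves' widths at half maximum. The bandwidth values are illustrative assumptions.

```python
import numpy as np

def tuning_curve(orientations, preferred, bandwidth):
    """Gaussian orientation tuning curve: response vs. stimulus orientation."""
    return np.exp(-0.5 * ((orientations - preferred) / bandwidth) ** 2)

def half_width(orientations, responses):
    """Full width at half maximum of a tuning curve, in degrees."""
    above = orientations[responses >= responses.max() / 2]
    return above.max() - above.min()

thetas = np.linspace(-90, 90, 181)                       # stimulus orientations (deg)
pre = tuning_curve(thetas, preferred=0, bandwidth=20)    # before training
post = tuning_curve(thetas, preferred=0, bandwidth=12)   # after training (sharper)

print(half_width(thetas, pre) > half_width(thetas, post))  # sharpening → True
```

A narrower half-width after training is the signature of the sharpening reported for early model layers.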
“…A method that can potentially differentiate between the two models is an inactivation experiment. In a recent study, Liu and Pack (2017) showed that inactivation of the middle temporal area (MT) in the visual dorsal pathway influenced motion perception thresholds only after a monkey was trained on a stimulus that better stimulated MT. This training, according to the reweighting model, increases the readout weight of MT neurons as the most informative neurons for the task.…”
Section: Review of Wenliang and Seitz
“…Therefore, in the architecture used by Wenliang and Seitz (2018), the DNN is not able to explain this inactivation result. However, in an extended DNN with skip connections, the inactivation experiment can be simulated and compared with the empirical findings (Liu and Pack, 2017).…”
Section: Review of Wenliang and Seitz
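The proposed simulation can be sketched in a toy numpy network. The architecture below is entirely hypothetical (not the commenters' extended model): a decision unit reads out both from an "MT-like" layer and, via a skip connection, directly from an early layer; inactivation is modeled by zeroing the MT-like responses, leaving the skip path intact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feedforward net: stimulus -> early layer -> "MT-like" layer -> decision,
# plus a skip connection from the early layer straight to the decision unit.
W_early = rng.normal(size=(8, 4))    # stimulus -> early layer weights
W_mt = rng.normal(size=(4, 8))       # early -> MT-like layer weights
w_skip = rng.normal(size=8)          # early -> decision (skip path)
w_mt = rng.normal(size=4)            # MT-like -> decision

def decision(stim, inactivate_mt=False):
    early = np.maximum(W_early @ stim, 0)              # ReLU early responses
    mt = np.zeros(4) if inactivate_mt else np.maximum(W_mt @ early, 0)
    return w_skip @ early + w_mt @ mt                  # readout sums both paths

stim = rng.normal(size=4)
intact = decision(stim)
lesioned = decision(stim, inactivate_mt=True)          # simulated inactivation
# With the skip path intact, the decision signal degrades but is not abolished,
# which is the kind of behavior one would compare against Liu and Pack (2017).
```

Reweighting-style training would then shift readout weight between `w_skip` and `w_mt`, changing how much inactivation hurts performance.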
“…Similarly, many artificial neural networks lack biological detail, resembling biological neural networks only in seemingly coarse-scale ways. Despite this, several researchers have recently argued that these networks provide useful insights into neural processing, particularly within the visual system (Kriegeskorte, 2015; Wenliang & Seitz, 2018; Yamins & DiCarlo, 2016). Readers interested in the relationships between artificial and biological neural networks may see Churchland and Sejnowski (2017), Kriegeskorte and Golan (2019), Marblestone, Wayne, and Kording (2016), and Yamins and DiCarlo (2016).…”
Section: Introduction
“…First, visual features in the models were extracted using a network with alternating convolutional and pooling layers (followed by fully connected layers), whose processing is reminiscent of processing in visual cortical areas. Previous researchers have found that processing in networks using convolution and pooling resembles biological visual processing in intriguing ways (Kriegeskorte, 2015; Wenliang & Seitz, 2018; Yamins & DiCarlo, 2016). Second, the models make explicit use of efficient data compression to learn compact representations of perceptual data items.…”
Section: Introduction
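The convolution-and-pooling motif described above can be illustrated with a minimal numpy sketch (a toy single-channel stage, not any of the cited models): a "valid" cross-correlation with a small kernel, a ReLU nonlinearity, then non-overlapping max pooling.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """2-D 'valid' cross-correlation, as computed by one convolutional unit."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling, discarding remainder rows/columns."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.arange(36.0).reshape(6, 6)      # toy 6x6 "stimulus" with a constant gradient
edge_kernel = np.array([[-1.0, 1.0]])      # crude horizontal edge detector (assumed filter)
features = max_pool(np.maximum(conv2d_valid(image, edge_kernel), 0))
print(features.shape)                       # pooled feature map: (3, 2)
```

Stacking such stages, with learned kernels, is what gives these networks their loose resemblance to the ventral visual hierarchy.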