Preprint, 2022
DOI: 10.21203/rs.3.rs-2289281/v1

Convolutional Neural Networks Trained to Identify Words Provide a Good Account of Visual Form Priming Effects

Abstract: A wide variety of orthographic coding schemes and models of visual word identification have been developed to account for masked priming data that provide a measure of orthographic similarity between letter strings. These models tend to include hand-coded orthographic representations with single unit coding for specific forms of knowledge (e.g., units coding for a letter in a given position or a letter sequence). Here we assess how well a range of these coding schemes and models account for the pattern of form…
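
To make the notion of hand-coded orthographic representations concrete, the sketch below illustrates two coding schemes of the kind the abstract alludes to: position-specific (slot) letter coding and open-bigram coding, each scored with a simple match-based prime-target similarity. The function names and the particular similarity formulas are illustrative assumptions and are not the schemes or parameters evaluated in the preprint.

```python
from itertools import combinations

def slot_coding_similarity(prime: str, target: str) -> float:
    """Position-specific (slot) coding: one unit per letter-in-position.
    Similarity = proportion of target slots whose letter also appears
    in the same slot of the prime."""
    matches = sum(1 for p, t in zip(prime, target) if p == t)
    return matches / len(target)

def open_bigrams(word: str) -> set:
    """Open-bigram coding: units for ordered letter pairs, allowing
    intervening letters (e.g. 'judge' -> {'ju', 'jd', 'jg', ...})."""
    return {a + b for a, b in combinations(word, 2)}

def open_bigram_similarity(prime: str, target: str) -> float:
    """Similarity = shared bigrams as a proportion of the target's bigrams."""
    return len(open_bigrams(prime) & open_bigrams(target)) / len(open_bigrams(target))

# Transposed-letter prime vs. substitution prime for the target 'judge':
for prime in ("jugde", "junpe"):
    print(prime,
          round(slot_coding_similarity(prime, "judge"), 2),
          round(open_bigram_similarity(prime, "judge"), 2))
```

On these toy definitions the transposed-letter prime ("jugde") and the substitution prime ("junpe") receive the same slot-coding similarity but very different open-bigram similarity, which is the kind of contrast masked form-priming studies are designed to probe.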

Cited by 2 publications (1 citation statement) · References 26 publications (36 reference statements)

“…Using deep neural networks to evaluate our oPE assumptions follows the idea of using computer vision models as research artifacts (Ma & Peters, 2020). This class of models has been used successfully to investigate object recognition architectures potentially implemented in human object recognition (Geirhos et al, 2017; Ma & Peters, 2020; Lindsay, 2021) or, more specifically, visual word and letter recognition (Hannagan, Agrawal, Cohen, & Dehaene, 2021; Testolin, Stoianov, & Zorzi, 2017; LeCun et al, 1989; Yin, Biscione, & Bowers, 2023). Here, we are only interested in the model architectures insofar as we wanted to use a model with recurrent connections (i.e., top-down connections as assumed in predictive coding accounts, e.g., see Rao & Ballard, 1999; Gagl et al, 2020) and batch-normalization (Laurent et al, 2016; Cooijmans, Ballas, Laurent, Gülçehre, & Courville, 2016; Lu, Sindhwani, & Sainath, 2016).…”
Section: Discussion (mentioning)
confidence: 99%
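
As a rough illustration of the two architectural ingredients the citing authors single out, recurrent top-down connections and batch normalization, the sketch below wires a higher-layer feature map back into a lower convolutional stage over a few unrolled time steps. The layer sizes, number of iterations, and the additive way feedback is combined are assumptions for illustration only and are not drawn from either the preprint or the citing paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentConvBlock(nn.Module):
    """Minimal sketch: a feedforward conv stage whose input is modulated by
    a top-down projection from a higher stage, unrolled over a few time steps."""
    def __init__(self, in_ch=1, mid_ch=32, top_ch=64, steps=3):
        super().__init__()
        self.steps = steps
        self.bottom_up = nn.Conv2d(in_ch, mid_ch, kernel_size=3, padding=1)
        self.bn_low = nn.BatchNorm2d(mid_ch)
        self.higher = nn.Conv2d(mid_ch, top_ch, kernel_size=3, padding=1)
        self.bn_high = nn.BatchNorm2d(top_ch)
        # Top-down (feedback) projection back to the lower stage's channel size.
        self.top_down = nn.Conv2d(top_ch, mid_ch, kernel_size=3, padding=1)

    def forward(self, x):
        feedback = None
        for _ in range(self.steps):
            low = self.bottom_up(x)
            if feedback is not None:
                low = low + feedback          # additive top-down modulation
            low = F.relu(self.bn_low(low))
            high = F.relu(self.bn_high(self.higher(low)))
            feedback = self.top_down(high)    # feedback used on the next step
        return high

# e.g. a batch of 8 single-channel 32x128 "word images"
out = RecurrentConvBlock()(torch.randn(8, 1, 32, 128))
print(out.shape)  # torch.Size([8, 64, 32, 128])
```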