2020
DOI: 10.3389/fcomm.2020.00017
Modeling Morphological Priming in German With Naive Discriminative Learning

Abstract: Both localist and connectionist models, based on experimental results obtained for English and French, assume that the degree of semantic compositionality of a morphologically complex word is reflected in how it is processed. Since priming experiments using English and French morphologically related prime-target pairs reveal stronger priming when complex words are semantically transparent (e.g., refill-fill) compared to semantically more opaque pairs (e.g., restrain-strain), localist models set up connections …

Cited by 30 publications (24 citation statements) · References 90 publications (165 reference statements)
“…The Delta rule or Rescorla-Wagner learning equations are a mathematically simple implementation that allows for relatively transparent modelling of learning. Furthermore, the Rescorla-Wagner learning equations have repeatedly proven to be effective in modelling language processing at various levels, including modelling child language acquisition ( Ramscar et al, 2010b ; Ramscar, Dye, & Klein, 2013a ), for disentangling linguistic maturation from cognitive decline over the lifespan ( Ramscar, Hendrix, Shaoul, Milin, & Baayen, 2014 ; Ramscar, Sun, Hendrix, & Baayen, 2017 ), for predicting reaction times in the visual lexical decision task ( Baayen et al, 2011 ; Baayen & Smolka, 2020 ) and self-paced reading ( Milin, Feldman, Ramscar, Hendrix, & Baayen, 2017 ), as well as for auditory comprehension ( Arnold, Tomaschek, Sering, Ramscar, & Baayen, 2017 ; Baayen, Shaoul, Willits, & Ramscar, 2016a ; Shafaei-Bajestan & Baayen, 2018 ), for predicting the performance of learning of morphology ( Divjak, Milin, Ez-zizi, Józefowski, & Adam, 2020 ; Ramscar, Dye, Popick, & O'Donnell-McCarthy, 2011 ; Ramscar & Yarlett, 2007 ), for predicting fine phonetic detail during speech production ( Tomaschek, Plag, Ernestus, & Baayen, 2019 ) and for predicting second-language learning of speech sounds ( Nixon, 2020 ).…”
Section: Introduction (mentioning; confidence: 99%)
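The Rescorla-Wagner / delta-rule update that this excerpt describes can be sketched in a few lines. The cue and outcome names ("re", "fill", FILL, echoing the refill-fill example from the abstract), the learning rate, and the toy training loop are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the Rescorla-Wagner / delta-rule weight update.
# Cue names, outcome names, and the learning rate are illustrative only.

def rw_update(weights, cues, outcomes, all_outcomes, rate=0.1):
    """One learning event: nudge cue->outcome weights toward the prediction error."""
    for o in all_outcomes:
        # Total activation of outcome o from the cues present in this event.
        activation = sum(weights.get((c, o), 0.0) for c in cues)
        # lambda = 1 if the outcome occurred in this event, else 0.
        target = 1.0 if o in outcomes else 0.0
        delta = rate * (target - activation)
        # All present cues share the same prediction-error adjustment.
        for c in cues:
            weights[(c, o)] = weights.get((c, o), 0.0) + delta
    return weights

weights = {}
# Toy events: the cues "re" and "fill" always co-occur with the outcome FILL,
# so the two cues compete and each weight settles near 0.5.
for _ in range(100):
    rw_update(weights, cues={"re", "fill"}, outcomes={"FILL"},
              all_outcomes={"FILL"})

print(round(weights[("re", "FILL")], 2))  # → 0.5
```

Because the update is driven by the summed activation of all present cues, co-occurring cues divide the predictive credit between them, which is the cue-competition property these modelling studies exploit.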
“…Speech recognition has been modeled with more than two layers (Beguš 2021) with only modest success, whereas models with only two dense layers are more successful (Arnold et al 2017; Shafaei-Bajestan et al 2020). Multilingual acquisition has also successfully been modeled with only two layers (Chuang et al 2021), as well as several other lexical processing phenomena (Baayen et al 2019b; Baayen & Smolka 2020; Chuang et al 2020c; Tomaschek et al 2019). However, Boersma et al (2020) argue that it is necessary to assume more layers to model phonetic and phonological knowledge.…”
Section: Discussion (mentioning; confidence: 99%)
“…We found that the presentation of both derived and inflected primes led to larger RT variability as compared to unrelated primes, but inflected primes indeed increased RT variability more than derived primes. While all major morphological processing accounts, i.e., affix stripping (Rastle et al, 2004; Taft & Forster, 1975), embedded word activation (Grainger & Beyersmann, 2017), and form-and-meaning approaches (Baayen et al, 2011; Baayen & Smolka, 2020; Feldman, 2000), can theoretically explain the priming effect on mean RTs, they all need some refinement in order to accommodate differences in RT variability following different types of morphologically complex primes. For this, it is useful to look at theories of lexical processing that specifically tried to capture variability in response times.…”
Section: Discussion (mentioning; confidence: 99%)
“…Alternatively, it has been suggested that morphological decomposition relies on a stem-activation mechanism, which extracts edge-aligned embedded stems from letter strings (Grainger & Beyersmann, 2017). Finally, morphological priming effects have also been explained in terms of shared semantic and orthographic properties of prime and target, hence without positing a level of morphological representation (Baayen et al, 2011; Baayen & Smolka, 2020; Feldman, 2000).…”
Section: Introduction (mentioning; confidence: 99%)