2023
DOI: 10.1353/lan.2023.a900087
A discriminative lexicon approach to word comprehension, production, and processing: Maltese plurals

Abstract: Comprehending and producing words is a natural process for human speakers. In linguistic theory, investigating this process formally and computationally is often done by focusing on forms only. By moving beyond the world of forms, we show in this study that the Discriminative Lexicon (DL) model, operating with word comprehension as a mapping of form onto meaning and word production as a mapping of meaning onto form, generates accurate predictions about what meanings listeners understand and what forms speakers produce…

Cited by 7 publications (6 citation statements) · References 64 publications
“…Therefore, it is important to note that the DLM's performance also depends on many modeling choices, such as the chosen form granularity, semantic vectors, etc. Ideal modeling choices can often vary across languages—for instance, while for English, Dutch and German, trigrams are often the unit of choice (Heitmeier et al., 2021, 2023b), previous work has found that for Vietnamese, bigrams are preferable (Pham and Baayen, 2015), while for Maltese, Kinyarwanda and Korean, form representations based on syllables perform well (Nieder et al., 2023; van de Vijver et al., 2023; Chuang et al., 2022; an in-depth discussion of the various considerations when modeling a language with the DLM can be found in Heitmeier et al., 2021). While the present study is limited to Dutch, Mandarin and English, future work should further verify the efficacy of FIL on morphologically more diverse languages.…”
Section: Discussion
confidence: 99%
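The passage above turns on the choice of form-cue granularity (bigrams for Vietnamese, trigrams for English, Dutch and German, syllables for Maltese, Kinyarwanda and Korean). As a minimal illustration of what letter n-gram cues look like, here is a sketch; the function name, the `#` boundary marker, and the example words are illustrative assumptions, not taken from the cited studies:

```python
def ngram_cues(word, n=3, boundary="#"):
    """Extract letter n-gram cues from a word, padded with boundary markers."""
    padded = boundary + word + boundary
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

# Trigram cues, the unit often chosen for English, Dutch and German:
print(ngram_cues("walks", n=3))  # ['#wa', 'wal', 'alk', 'lks', 'ks#']
# Bigram cues, reported to work better for Vietnamese:
print(ngram_cues("ba", n=2))     # ['#b', 'ba', 'a#']
```

Syllable-based cues would replace the sliding window with a (language-specific) syllabifier, which is why the ideal granularity varies across languages.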
“…The initial stage of speech production is modeled as involving a mapping in the opposite direction, starting with a high-dimensional semantic vector (known as embeddings in computational linguistics) and targeting a vector specifying which phone combinations drive articulation. The DLM has been successful in modeling a range of different morphological systems (e.g., Chuang et al., 2020, 2022; Denistia and Baayen, 2021; Heitmeier et al., 2021; Nieder et al., 2023) as well as behavioral data such as acoustic durations (Schmitz et al., 2021; Stein and Plag, 2021; Chuang et al., 2022), (primed) lexical decision reaction times (Gahl and Baayen, 2023; Heitmeier et al., 2023b), and data from patients with aphasia (Heitmeier and Baayen, 2020).…”
Section: Introduction
confidence: 99%
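The two mappings described above (form onto meaning for comprehension, meaning onto form for production) can be sketched as linear mappings estimated by least squares. The matrix sizes and random data below are made-up toy dimensions for illustration only, not the model's actual training setup:

```python
import numpy as np

# Toy cue matrix C (words x form cues) and semantic matrix S (words x embedding dims).
rng = np.random.default_rng(0)
C = rng.integers(0, 2, size=(5, 8)).astype(float)  # binary form-cue indicators
S = rng.normal(size=(5, 3))                        # semantic vectors (embeddings)

# Comprehension: map form onto meaning, F = least-squares solution of C F ~ S.
F, *_ = np.linalg.lstsq(C, S, rcond=None)
# Production: map meaning onto form, G = least-squares solution of S G ~ C.
G, *_ = np.linalg.lstsq(S, C, rcond=None)

S_hat = C @ F  # predicted meanings for the words' forms
C_hat = S @ G  # predicted form vectors that would drive articulation
```

In the full model these mappings are typically learned incrementally rather than solved in closed form, but the least-squares solution is the endstate the incremental rule converges toward.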
“…Error-driven learning is a domain of general learning theory that has been applied to many topics in cognition (Hoppe et al., 2022) and has recently been applied to language (Baayen, Chuang, & Blevins, 2018; Baayen, Chuang, Shafaei-Bajestan, & Blevins, 2019; Baayen, Hendrix, & Ramscar, 2013; Baayen, Milin, Ðurđević, Hendrix, & Marelli, 2011; Baayen, Shaoul, Willits, & Ramscar, 2016a; Chuang et al., 2021; Denistia & Baayen, 2023; Nieder, Tomaschek, Cohrs, & van de Vijver, 2021; Nieder, Chuang, van de Vijver, & Baayen, 2023; van de Vijver & Uwambayinema, 2022; van de Vijver, Uwambayinema, & Chuang, 2024), and language learning (Divjak, Milin, Ez-zizi, Józefowski, & Adam, 2021; Ellis, 2006; Harmon, Idemaru, & Kapatsinski, 2019; Nixon, 2020; Ramscar, Dye, & McCauley, 2013; Ramscar & Yarlett, 2007; Ramscar et al., 2010; Romain, Ez-zizi, Milin, & Divjak, 2022).…”
Section: Error-driven Learning In Language
confidence: 99%
“…While semantic outcomes are often discrete (Kapatsinski, 2023b), recent studies have moved away from discrete representations. Nieder et al. (2023), for example, use continuous cue-outcome representations to model Maltese inflection, while Heitmeier et al. (2023) successfully modeled trial-by-trial effects of a lexical decision experiment in the same way. These studies use one-hot encoded vectors to represent phonology, and word embeddings to represent meaning, replacing the Rescorla-Wagner equations with the similar, but computationally more powerful Widrow-Hoff delta rule (Widrow & Hoff, 1960).…”
Section: Implications For Natural Language Learning
confidence: 99%
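The Widrow-Hoff delta rule mentioned in this passage updates a weight matrix trial by trial, in proportion to the prediction error on a continuous outcome. The sketch below shows one such update with a one-hot cue vector and a continuous "embedding" outcome; the dimensions, learning rate, and example vectors are invented for illustration and do not come from the cited studies:

```python
import numpy as np

def widrow_hoff_update(W, cue, outcome, eta=0.01):
    """One Widrow-Hoff (delta rule) step: W += eta * outer(x, y - xW).

    cue: 1D cue vector (e.g., one-hot phonological cues).
    outcome: 1D continuous outcome vector (e.g., a word embedding).
    """
    pred = cue @ W                       # current prediction for this cue
    return W + eta * np.outer(cue, outcome - pred)

# Toy trial-by-trial learning with a single repeated cue-outcome pair.
n_cues, n_dims = 6, 4
W = np.zeros((n_cues, n_dims))
x = np.eye(n_cues)[2]                    # one-hot cue (cue #2 is present)
y = np.array([0.5, -0.3, 0.1, 0.9])     # continuous semantic outcome
for _ in range(500):
    W = widrow_hoff_update(W, x, y, eta=0.1)
# Across trials the prediction x @ W converges toward y.
```

With binary cues and discrete outcomes this update reduces to something very close to the Rescorla-Wagner equations; the delta rule generalizes it to continuous, real-valued outcomes such as embeddings.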
“…The DLM posits simple modality-specific mappings between numeric representations of words’ forms and numeric representations of their meanings (Baayen et al., 2018, 2019). The DLM has been successful both in modelling different morphological systems across a range of languages, such as Latin, English, German, Estonian, Korean and Maltese (Baayen et al., 2018, 2019; Chuang et al., 2023b; Chuang, Lõo et al., 2020; Heitmeier et al., 2021; Nieder et al., 2023), and in modelling a range of behavioural data (Cassani et al., 2020; Chuang, Vollmer et al., 2020; Heitmeier and Baayen, 2020; Heitmeier et al., 2021; Schmitz et al., 2021; Shafaei-Bajestan et al., 2021; Stein and Plag, 2021). It implements learning using an error-driven learning rule for continuous data (Milin, Madabushi et al., 2020; Widrow and Hoff, 1960), which is closely related to the later-developed Rescorla-Wagner rule.…”
Section: Introduction
confidence: 99%