2018
DOI: 10.1017/s1366728918000688
The need for a universal computational model of bilingual word recognition and word translation

Abstract: Dijkstra, Wahl, Buytenhuijs, van Halem, Al-jibouri, de Korte, and Rekké (2018) present in their keynote article a promising computational model of word recognition and word production in monolinguals and bilinguals, called Multilink. We agree with the authors that the model is a “basis for the development of a more general computational model of word retrieval” (Dijkstra et al., 2018). However, we argue that such a model must also be universal.

Cited by 9 publications (12 citation statements)
References 7 publications
“…First, this approach allows complete decoupling of the roles of phonology and orthography during learning so that any observed translation‐ambiguity effect can be unequivocally traced to the phonological level. Second, one may argue that the lexical system of different‐script multilinguals is organized differently from that of same‐script multilinguals because the nonoverlapping orthographies allow for functional separation between the languages (e.g., Goral; Jiang; van Heuven & Wen; but see Degani et al.). Thus, results from same‐script multilinguals may not generalize to different‐script multilinguals.…”
Section: Background Literature
“…Nevertheless, we still obtained surprisingly high correlations between simulations and empirical data for just one parameter setting across different tasks. Van Heuven and Wen (2018) point out that different model variants should be compared against each other, and contrasting Multilink without and with lateral inhibition is one important possibility (see below).…”
Section: Multilink: Theoretical Issues and Desired Extensions
“…In contrast to what Declerck et al (2018) suggest, the ‘cognate patch’ could not (yet) be replaced by lateral inhibition: identical cognates suffered disproportionally from lateral inhibition relative to non-identical cognates and control words. Only after suitable settings of the lateral inhibition parameter have been determined, can we start to simulate the masked translation priming studies mentioned by van Heuven and Wen (2018; e.g., Wen & van Heuven, 2017), as well as orthographic priming studies.…”
Section: Multilink: Theoretical Issues and Desired Extensions
“…Among the model's limitations, Mishra (2019) notes that Multilink overemphasizes lexical dimensions such as cognate status and orthographic similarity, which may be relevant for word processing in Dutch–English bilinguals, but possibly less so for speakers who use different types of orthographies and phonologies (see also Jiang, 2019). Van Heuven and Wen (2019) make similar suggestions, noting the need to evaluate Multilink against findings from studies involving different-script bilinguals, as the model focuses only on studies with stimuli from alphabetic languages. Tokowicz (2019) emphasizes that while Multilink does indeed address some shortcomings of previous models (BIA and BIA+), there are additional ways in which the model could be expanded, including a sharper focus on individual differences among speakers.…”