2021
DOI: 10.3390/info12080329
Semantic Systematicity in Connectionist Language Production

Abstract: Decades of studies trying to define the extent to which artificial neural networks can exhibit systematicity suggest that systematicity can be achieved by connectionist models, but not by default. Here we present a novel connectionist model of sentence production that employs rich situation model representations originally proposed for modeling systematicity in comprehension. The high performance of our model demonstrates that such representations are also well suited to model language production. Furthermore, …

Cited by 4 publications (4 citation statements)
References 50 publications
“…John & McClelland, 1990). Our goal thus differs from modeling efforts that explicitly attempt to map linguistic input onto situation knowledge (e.g., Calvillo, Brouwer, & Crocker, 2016). We stress this point because we use terminology such as agent, patient, and instrument in describing activities that is also used to describe linguistic constructs.…”
Section: Architecture
confidence: 99%
“…Each unit represents a concept. Localist representations are useful for expository purposes, although the learning algorithm can be used with distributed representations as well (Calvillo et al., 2016; Frank, Koppen, Noordman, & Vonk, 2003). The architecture in Figure 1 was used in all simulations reported herein, although simulations differ in their specific participants, processes, and contexts.…”
Section: Architecture
confidence: 99%
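The localist-versus-distributed distinction drawn in the citation above can be made concrete with a small sketch. This is my own illustration, not code from any of the cited models: a localist scheme dedicates one unit per concept (a one-hot vector), while a distributed scheme spreads each concept over shared units, so related concepts can overlap.

```python
import numpy as np

# Hypothetical illustration (not from the cited models): localist vs.
# distributed representations of the same thematic-role concepts.

concepts = ["agent", "patient", "instrument"]

# Localist: one dedicated unit per concept (a one-hot vector).
localist = {c: np.eye(len(concepts))[i] for i, c in enumerate(concepts)}

# Distributed: each concept is a dense vector over shared feature units;
# the dimensionality (4 here) is arbitrary for this sketch.
rng = np.random.default_rng(0)
distributed = {c: rng.normal(size=4) for c in concepts}

print(localist["agent"])      # exactly one active unit
print(distributed["agent"])   # activity spread across all units
```

The expository appeal of localist codes is visible here: each vector can be read off directly, whereas a distributed code requires comparing vectors (e.g., by cosine similarity) to see which concepts pattern together.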
“…For instance, a neural network model of language comprehension, similar to the one presented above, but employing meaning representations derived from an earlier formulation of the DFS framework (see [25]), has been used to successfully model the interaction between linguistic experience and world knowledge in comprehension [28]. Moreover, models employing such meaning representations have been shown to naturally capture inference and quantification [31], and generalize to unseen sentences and semantics, in both comprehension [31] and production [53]. Here, we have extended these results by showing how they capture phenomena such as negation, presupposition, and anaphoricity.…”
Section: DFS in Cognitive Models of Language Processing
confidence: 99%
“…There has been substantially less work on models of sentence production than sentence comprehension. It may seem straightforward to construct a production model by running a sentence comprehension model backwards, and this is indeed how two recent connectionist models of production were developed (Calvillo, Brouwer, & Crocker, 2016; Hinaut et al., 2015). However, the most successful and empirically validated sentence production models were specifically designed to simulate production.…”
Section: Sentence Production
confidence: 99%
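The idea of "running a comprehension model backwards" mentioned in this citation can be sketched in miniature. This is a toy linear illustration under my own assumptions, not the architecture of any cited model: comprehension is a learned mapping from sentence vectors to meaning vectors, and the "backwards" production model is obtained by swapping the roles of input and output during training.

```python
import numpy as np

# Toy sketch (my own assumptions, not the cited architectures):
# random vectors stand in for 10 "sentences" and their "meanings".
rng = np.random.default_rng(1)
sentences = rng.normal(size=(10, 6))   # sentence representations
meanings = rng.normal(size=(10, 4))    # situation-model representations

# Comprehension: sentence -> meaning (least-squares linear map).
W_comp, *_ = np.linalg.lstsq(sentences, meanings, rcond=None)

# Production "by running comprehension backwards": meaning -> sentence,
# i.e., the same fitting procedure with inputs and outputs reversed.
W_prod, *_ = np.linalg.lstsq(meanings, sentences, rcond=None)

print(W_comp.shape)  # map from 6-d sentences to 4-d meanings
print(W_prod.shape)  # map from 4-d meanings to 6-d sentences
```

The citation's caveat is visible even at this scale: reversing the mapping reuses the comprehension setup mechanically, but nothing in the reversed model reflects production-specific constraints such as incremental word ordering.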