2022
DOI: 10.48550/arxiv.2202.12246
Preprint

Neural reality of argument structure constructions

Abstract: In lexicalist linguistic theories, argument structure is assumed to be predictable from the meaning of verbs. As a result, the verb is the primary determinant of the meaning of a clause. In contrast, construction grammarians propose that argument structure is encoded in constructions (or form-meaning pairs) that are distinct from verbs. Decades of psycholinguistic research have produced substantial empirical evidence in favor of the construction view. Here we adapt several psycholinguistic studies to probe for…

Cited by 1 publication (3 citation statements)
References 33 publications

“…They find, through manual analysis, that while most of these resultant groupings are indeed constructional, some of them might be too simplistic. Li et al. (2022) similarly explore the extent to which language models have access to constructional information and, more specifically, argument structure constructions, using stimuli generated from templates. The authors thus adapt several psycholinguistic studies to Transformer-based language models.…”
Section: # of Cxs (mentioning)
confidence: 99%
“…In an effort to simulate language acquisition (through language exposure), Li et al. (2022) use different language sample sizes and find that the more input the models get, the more likely they are to group sentences together based on their shared constructional pattern rather than their shared verb. Notably, they show that RoBERTa (Y. Liu et al., 2019), for example, seems to generalize meaning without lexical overlap from different constructions.…”
Section: # of Cxs (mentioning)
confidence: 99%
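As a rough illustration of the probing setup these citation statements describe (template-generated stimuli, then checking whether a Transformer's sentence embeddings group by argument structure construction rather than by shared verb), the sketch below compares cosine similarities between sentence vectors. The model name (roberta-base), the toy sentences, and the mean-pooling choice are illustrative assumptions, not the stimuli or procedure used by Li et al. (2022).

```python
# Minimal sketch (not the authors' code): embed templated sentences with a
# Transformer encoder and compare pairs that share a construction against
# pairs that share only a verb. Model, sentences, and pooling are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "roberta-base"  # assumed encoder; any masked LM would do
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

# Toy template-generated stimuli: the same verb appears in two different
# argument structure constructions, and each construction hosts two verbs.
sentences = {
    "ditransitive_cut": "She cut him a slice of bread.",
    "ditransitive_throw": "She threw him the ball.",
    "caused_motion_cut": "She cut the bread into pieces.",
    "caused_motion_throw": "She threw the ball into the basket.",
}

def embed(text):
    """Mean-pool the final hidden states into a single sentence vector."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state  # (1, seq_len, hidden_dim)
    return hidden.mean(dim=1).squeeze(0)

vecs = {name: embed(s) for name, s in sentences.items()}
cos = torch.nn.functional.cosine_similarity

# If constructional information dominates, sentences that share a construction
# should be closer in embedding space than sentences that share only a verb.
same_cxn = cos(vecs["ditransitive_cut"], vecs["ditransitive_throw"], dim=0)
same_verb = cos(vecs["ditransitive_cut"], vecs["caused_motion_cut"], dim=0)
print(f"same construction, different verbs: {same_cxn.item():.3f}")
print(f"same verb, different constructions: {same_verb.item():.3f}")
```

Which similarity comes out larger is an empirical question; the citing papers report that, with enough training input, grouping by shared constructional pattern tends to win out over grouping by shared verb.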