2024
DOI: 10.1016/j.jml.2024.104510

Large-scale benchmark yields no evidence that language model surprisal explains syntactic disambiguation difficulty

Kuan-Jung Huang,
Suhas Arehalli,
Mari Kugemoto
et al.

Cited by 3 publications (3 citation statements)
References 61 publications
“…we found the inverse when classifying subtypes of aphasia (Table 4). This leads us to critically examine the view that larger-size LLMs can be superior to their smaller counterparts [47, 76–78]. It is worth reconsidering the supremacy of larger LLMs.…”
Section: Discussion
Confidence: 99%
“…This is possibly because next-word prediction within a sentence, a pre-training task shared by all the LLMs in our experiments, is not sufficient to capture the complex and subtle linguistic patterns in aphasia. LLM surprisals can complement the existing language features [47].…”
Section: Discussion
Confidence: 99%