2021
DOI: 10.48550/arxiv.2104.05433
Preprint
Multilingual Language Models Predict Human Reading Behavior

Abstract: We analyze if large language models are able to predict patterns of human reading behavior. We compare the performance of language-specific and multilingual pretrained transformer models to predict reading time measures reflecting natural human sentence processing on Dutch, English, German, and Russian texts. This results in accurate models of human reading behavior, which indicates that transformer models implicitly encode relative importance in language in a way that is comparable to human processing mechani…

Cited by 2 publications (1 citation statement)
References 29 publications (32 reference statements)
“…Eberle et al (2022) highlighted the inability of cognitive models to account for the higher level cognitive activities like semantic role matching, hence motivating the use of large language models (LLMs) for modelling the human gaze. Hollenstein et al (2021) showed the efficacy of LLMs in predicting the gaze features for multiple languages, including English, Russian, Dutch and German. Barrett et al (2018) used natural reading eye-tracking corpus for regularizing attention function in a multi-task setting.…”
Section: Gaze in Deep Learning-based Architectures
confidence: 99%