Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics 2021
DOI: 10.18653/v1/2021.cmcl-1.7
CMCL 2021 Shared Task on Eye-Tracking Prediction

Abstract: Eye-tracking data from reading represent an important resource for both linguistics and natural language processing. The ability to accurately model gaze features is crucial to advance our understanding of language processing. This paper describes the Shared Task on Eye-Tracking Data Prediction, jointly organized with the eleventh edition of the Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2021). The goal of the task is to predict 5 different token-level eye-tracking metrics from the Zurich Cognitive Language Processing Corpus (ZuCo).

Cited by 22 publications (16 citation statements). References 27 publications.
“…Additionally, adding tasks such as eye movement and ERP prediction would be beneficial for various research communities. For example, the prediction of eye movement patterns has gained interest also in the NLP community (Hollenstein et al., 2021a). The main goal of this work is to create a platform for discussion and future research on a common benchmark task for reading task classification based on eye movement and brain activity data.…”
Section: Discussion
“…The ZuCo dataset is publicly available and has recently been used in a variety of applications, including leveraging EEG and eye-tracking data to improve NLP tasks (Barrett et al., 2018; Mathias et al., 2020; McGuire and Tomuro, 2021), evaluating the cognitive plausibility of computational language models (Hollenstein et al., 2019b; Hollenstein and Beinborn, 2021), investigating the neural dynamics of reading (Pfeiffer et al., 2020), and developing models of human reading (Bautista and Naval, 2020; Bestgen, 2021). Recently, ZuCo has also been leveraged for an ML competition on eye-tracking prediction (Hollenstein et al., 2021a). This shows that the ZuCo dataset has been used successfully for a wide range of ML tasks.…”
Section: Introduction
“…On the one hand, recent work in cognitive psychology, the field that aims to answer questions about how humans think, has employed natural language processing (NLP) models to investigate aspects of human language comprehension. For instance, language models and statistical parsers were used to explain language processing difficulty (Sarti et al., 2021; Rathi, 2021; Meister et al., 2022) and incrementality (Merkx & Frank, 2021; Stanojević et al., 2021), syntactic agreement processes (Ryu & Lewis, 2021), brain representations of abstract and concrete concepts (Anderson et al., 2017; Ramakrishnan & Deniz, 2021), and the prediction of gaze behaviour (Hollenstein et al., 2021), among others. In clinical psychology, Transformer-based language models are being used to formulate cognitive models that better explain human emotions (Guo & Choi, 2021) and comprehension deficits in aphasia subjects (Guo & Choi, 2021), and even to improve suicide prevention systems (MacAvaney et al., 2021).…”
Section: Cognitive Models for NLP Tasks
“…The task was to predict five eye-tracking features, averaged across all participants and scaled to the range between 0 and 100, for each word of a series of sentences: (1) the total number of fixations (nFix), (2) the duration of the first fixation (FFD), (3) the sum of all fixation durations, including regressions (TRT), (4) the sum of the durations of all fixations prior to progressing to the right, including regressions to previous words (GPT), and (5) the proportion of participants that fixated the word (fixProp). These dependent variables (DVs) are described in detail in Hollenstein et al. (2021). The submissions were evaluated using the mean absolute error (MAE) metric, and the systems were ranked according to the average MAE across all five DVs, the lowest being the best.…”
Section: Data and Task
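
To make the ranking criterion described above concrete, here is a minimal Python sketch of how the average-MAE score could be computed. The function names and the dictionary-of-lists data layout are illustrative assumptions, not the organizers' actual evaluation script; only the five DV names, the 0-100 scaling, and the averaging rule come from the task description quoted above.

```python
import numpy as np

DVS = ("nFix", "FFD", "TRT", "GPT", "fixProp")  # the five dependent variables

def mean_absolute_error(y_true, y_pred):
    """MAE between reference and predicted token-level feature values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred)))

def shared_task_score(gold, pred, dvs=DVS):
    """Average MAE over the five eye-tracking DVs; lower is better.

    `gold` and `pred` are assumed to be dicts mapping each DV name to a
    list of word-level values, aligned one-to-one across the two dicts.
    """
    per_dv = {dv: mean_absolute_error(gold[dv], pred[dv]) for dv in dvs}
    return sum(per_dv.values()) / len(per_dv), per_dv

# Toy example with made-up values in the 0-100 range used by the task.
gold = {dv: [12.0, 3.5, 80.0] for dv in DVS}
pred = {dv: [10.0, 4.0, 75.0] for dv in DVS}
avg_mae, per_dv_mae = shared_task_score(gold, pred)
print(f"average MAE = {avg_mae:.3f}")  # ranking criterion: lowest wins
```

The per-DV breakdown is returned alongside the average because the features live on different scales (e.g. fixation counts vs. durations), so a single aggregate number can hide which metric a system predicts poorly.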