2022
DOI: 10.3758/s13428-022-01842-3

From eye movements to scanpath networks: A method for studying individual differences in expository text reading

Abstract: Eye movements have been examined as an index of attention and comprehension during reading in the literature for over 30 years. Although eye-movement measurements are acknowledged as reliable indicators of readers’ comprehension skill, few studies have analyzed eye-movement patterns using network science. In this study, we offer a new approach to analyze eye-movement data. Specifically, we recorded visual scanpaths when participants were reading expository science text, and used these to construct scanpath net…
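The construction the abstract describes can be pictured with a short sketch: treat each fixated region (for example, a word-level area of interest) as a node, and each saccade between consecutive fixations as a directed edge. The following is a minimal illustration of that general idea using networkx, not the paper's exact procedure; the AOI sequence and the graph conventions are assumptions.

```python
# Minimal sketch of building a scanpath network from a fixation sequence.
# Assumes fixations have already been mapped to word-level AOIs; the node
# and edge conventions here are illustrative, not the paper's exact method.
import networkx as nx

def scanpath_network(aoi_sequence):
    """Build a directed graph whose nodes are AOIs and whose weighted
    edges count saccade transitions between consecutive fixations."""
    g = nx.DiGraph()
    for src, dst in zip(aoi_sequence, aoi_sequence[1:]):
        if g.has_edge(src, dst):
            g[src][dst]["weight"] += 1
        else:
            g.add_edge(src, dst, weight=1)
    return g

# Toy example: a reader fixating words 1, 2, 3, regressing to 2, then on to 4.
net = scanpath_network([1, 2, 3, 2, 3, 4])
print(net.edges(data=True))
# Graph-level measures (density, clustering, etc.) can then serve as
# individual-difference indices of reading patterns.
print(nx.density(net))
```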

Cited by 12 publications (15 citation statements)
References 95 publications
“…Two fMRI datasets about sentence reading were used in this study: the Mars subset of the Reading Brain project (48–50), in which the sentences are connected to make a coherent story, and the dataset from Pereira et al. (1), in which the stimuli are dominated by unconnected relationships (see Materials and Methods for details). We refer to the two datasets as the “Reading-brain2019 dataset” and the “Pereira2018 dataset.” To identify the contributions of NSP to model-brain alignment, we trained two types of models using the BERT architecture: the MLM model that performed only the MLM task and the MLM_NSP model that performed both the MLM (for word prediction) and NSP (for sentence coherence prediction/evaluation) tasks.…”
Section: Results (mentioning)
confidence: 99%
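For readers unfamiliar with the two training objectives named in this statement, the sketch below shows how the two model variants could be set up with the HuggingFace transformers library; the cited study's actual training pipeline, data handling, and hyperparameters are not reproduced here.

```python
# Hedged sketch of the two BERT variants described above, using the
# HuggingFace transformers library. Only the architecture/objective
# distinction is shown; training data and hyperparameters are omitted.
from transformers import BertConfig, BertForMaskedLM, BertForPreTraining

config = BertConfig()  # default BERT-base architecture

# MLM model: trained on masked word prediction only.
mlm_model = BertForMaskedLM(config)

# MLM_NSP model: trained on masked word prediction plus next-sentence
# prediction (sentence coherence prediction/evaluation).
mlm_nsp_model = BertForPreTraining(config)
```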
“…At the individual difference level, our study showed that model-brain alignment computed with the MLM_NSP and MLM models was negatively correlated with reading time, suggesting that greater alignment between brains and models may be associated with faster reading. Reading time is one of the critical components for assessing reading skills (67–69) and has been used to differentiate skilled and less skilled readers during discourse comprehension (50). Skilled readers, compared with less skilled readers, may be more efficient in selecting and organizing key contents to construct and integrate the mental representation, thus giving rise to quicker reading times (50, 70).…”
Section: Discussion (mentioning)
confidence: 99%
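The reported relationship is a simple per-participant correlation; a toy check of its shape, with entirely synthetic placeholder data, might look like this:

```python
# Toy illustration of a negative correlation between model-brain
# alignment and reading time. All data here are synthetic placeholders;
# only the shape of the analysis is shown.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
reading_time = rng.normal(60.0, 10.0, size=30)             # seconds per passage (hypothetical)
alignment = -0.05 * reading_time + rng.normal(0, 0.2, 30)  # constructed to correlate negatively

r, p = pearsonr(alignment, reading_time)
print(f"r = {r:.2f}, p = {p:.3f}")  # expect r < 0 for this constructed data
```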
“…According to the "eye-mind" hypothesis [31], eye movement is related to a user's attention being directed by the interface. By defining areas of interest on certain parts of the system interface [32], combined with sequences of fixations, researchers can analyze the paths of the eye movements and derive meaningful information [33][34][35][36]. Therefore, this study employed an eye tracker to collect eye movement data and used it to find users' search tactics and strategies with metadata.…”
Section: Research Background (mentioning)
confidence: 99%
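The AOI-plus-fixation-sequence analysis this quote describes can be sketched in a few lines; the rectangular AOI bounds and gaze points below are hypothetical:

```python
# Sketch of mapping raw fixation coordinates onto rectangular AOIs and
# extracting the visited-AOI sequence. AOI bounds and fixations are
# hypothetical examples, not data from the cited study.

def fixations_to_aoi_sequence(fixations, aois):
    """fixations: list of (x, y) gaze points.
    aois: dict mapping AOI name -> (x_min, y_min, x_max, y_max)."""
    sequence = []
    for x, y in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                sequence.append(name)
                break  # each fixation belongs to at most one AOI
    return sequence

aois = {"search_box": (0, 0, 800, 50), "results": (0, 60, 800, 600)}
fixes = [(400, 25), (120, 300), (640, 450), (390, 30)]
print(fixations_to_aoi_sequence(fixes, aois))
# -> ['search_box', 'results', 'results', 'search_box']
```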
“…In the case of silent reading, gazing at one spot for a long time occurs for words and characters that are difficult to read (difficult-to-read characters) [1,2]. Therefore, gaze may be used to automatically detect difficult-to-read characters.…”
Section: Introduction (mentioning)
confidence: 99%
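The detection idea in this quote reduces to thresholding per-word dwell time; a minimal sketch, with an assumed threshold and made-up dwell values:

```python
# Sketch of the detection idea: flag words whose total fixation (dwell)
# time exceeds a threshold as potentially difficult to read. The
# threshold and the dwell values are illustrative assumptions.

def flag_difficult_words(dwell_ms, threshold_ms=400):
    """dwell_ms: dict mapping word -> summed fixation duration in ms."""
    return [w for w, d in dwell_ms.items() if d > threshold_ms]

dwell = {"the": 120, "perspicacious": 650, "reader": 180, "sesquipedalian": 900}
print(flag_difficult_words(dwell))  # -> ['perspicacious', 'sesquipedalian']
```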