Proceedings of the Nineteenth Conference on Computational Natural Language Learning 2015
DOI: 10.18653/v1/k15-1038
Reading behavior predicts syntactic categories

Abstract: It is well-known that readers are less likely to fixate their gaze on closed class syntactic categories such as prepositions and pronouns. This paper investigates to what extent the syntactic category of a word in context can be predicted from gaze features obtained using eye-tracking equipment. If syntax can be reliably predicted from eye movements of readers, it can speed up linguistic annotation substantially, since reading is considerably faster than doing linguistic annotation by hand. Our results show th…



Cited by 33 publications (31 citation statements)
References 8 publications
“…I convert the fMRI data and the eye-tracking data to vectors of aggregate statistics following the suggestions in Barrett and Søgaard (2015) and Barrett et al (2016). Table 1 presents the nearest neighbors (out of the 9 randomly selected words) in the gold data, as well as the two word embeddings.…”
Section: Sprachspiel Theory
confidence: 99%
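The excerpt above describes converting token-level gaze measurements into type-level vectors of aggregate statistics and then comparing words by nearest neighbors. A minimal sketch of that idea, with entirely hypothetical gaze records and feature names (the actual features and aggregation in Barrett and Søgaard (2015) may differ):

```python
from collections import defaultdict
import numpy as np

# Hypothetical token-level gaze records: (word, first_fixation_ms, total_fixation_ms).
records = [
    ("the", 110, 140), ("the", 95, 120), ("reader", 210, 380),
    ("reader", 190, 340), ("of", 80, 90), ("syntax", 230, 420),
]

# Aggregate token-level measurements into one type-level vector per word type.
by_word = defaultdict(list)
for word, first_fix, total_fix in records:
    by_word[word].append((first_fix, total_fix))

type_vectors = {w: np.mean(vals, axis=0) for w, vals in by_word.items()}

def nearest_neighbor(word):
    """Return the other word whose aggregate gaze vector is closest in cosine similarity."""
    v = type_vectors[word]
    best, best_sim = None, -1.0
    for other, u in type_vectors.items():
        if other == word:
            continue
        sim = float(np.dot(v, u) / (np.linalg.norm(v) * np.linalg.norm(u)))
        if sim > best_sim:
            best, best_sim = other, sim
    return best
```

With these toy numbers, function words with short fixations cluster together and content words with long fixations cluster together, which is the intuition behind comparing gold labels against gaze-derived neighbors.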
“…The data comes from (Barrett and Søgaard, 2015) and is publicly available 1 . In this experiment 10 native English speakers read 250 syntactically annotated sentences in English (min.…”
Section: Eye Tracking Data
confidence: 99%
“…Recently, Barrett and Søgaard (2015) presented evidence that gaze features can be used to discriminate between most pairs of parts of speech (POS). Their study uses all the coarse-grained POS labels proposed by Petrov et al (2011).…”
Section: Introduction
confidence: 99%
“…Eye trackers and gaze features collected from them have recently been applied to natural language processing (NLP) tasks, such as part-of-speech tagging (Duffy et al., 1988; Nilsson and Nivre, 2009; Barrett and Søgaard, 2015a), named-entity recognition (Tokunaga et al., 2017) or readability (González-Garduño and Søgaard, 2018). Eye-movement data has also been used for parsing.…”
Section: Introduction
confidence: 99%
“…However, even if in the near future every user has an eye tracker on top of their screen, a scenario which is far from guaranteed and raises privacy concerns (Liebling and Preibusch, 2014), many running NLP applications that process data from various Internet sources will not expect to have any human being reading massive amounts of data. Other studies (Barrett and Søgaard, 2015a) instead derive gaze features from the training set: in particular, they collect type-level gaze features from the vocabulary in the training set, which are then aggregated to create a lookup table and used as a sort of precomputed gaze input when a given word in the test set matches an entry; otherwise, a token has an unknown gaze feature value. In this manner, the influence of gaze features on unseen data depends on the vocabulary encountered during training.…”
Section: Introduction
confidence: 99%
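The lookup-table scheme described above — precomputed type-level gaze features for training-set vocabulary, with an "unknown" value for unseen tokens — can be sketched as follows. The table contents, feature dimensionality, and fallback value here are placeholders, not the published setup:

```python
import numpy as np

# Hypothetical type-level lookup table built from the training vocabulary:
# each word type maps to its averaged gaze feature vector.
gaze_table = {
    "the": np.array([102.5, 130.0]),
    "reader": np.array([200.0, 360.0]),
}

# Placeholder value for out-of-vocabulary tokens at test time.
UNK_GAZE = np.zeros(2)

def gaze_features(token):
    """Look up precomputed type-level gaze features; tokens unseen during
    training fall back to the 'unknown' value, so gaze can only influence
    predictions for vocabulary covered by the training set."""
    return gaze_table.get(token.lower(), UNK_GAZE)
```

This makes the excerpt's point concrete: no eye tracker is needed at test time, but the benefit of gaze features is limited to the vocabulary observed during training.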