Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1160

Towards Making a Dependency Parser See

Abstract: We explore whether it is possible to leverage eye-tracking data in an RNN dependency parser (for English) when such information is only available during training, i.e. no aggregated or token-level gaze features are used at inference time. To do so, we train a multitask learning model that parses sentences as sequence labeling and leverages gaze features as auxiliary tasks. Our method also learns to train from disjoint datasets, i.e. it can be used to test whether already collected gaze features are useful to i…
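For illustration, here is a minimal sketch (not the authors' released code) of the multitask setup the abstract describes: a shared BiLSTM encoder feeding two heads, a main head that predicts parsing labels per token (parsing cast as sequence labeling) and an auxiliary head that regresses a token-level gaze feature. Layer sizes, the label inventory, and the batch schedule are illustrative assumptions; at inference time only the parse head is used, so no gaze features are needed.

```python
# Hypothetical sketch of a multitask sequence-labeling parser with an
# auxiliary gaze-prediction head. All dimensions are assumptions.
import torch
import torch.nn as nn

class MultitaskGazeParser(nn.Module):
    def __init__(self, vocab_size, n_parse_labels,
                 emb_dim=100, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Main task: one parsing label per token (sequence labeling).
        self.parse_head = nn.Linear(2 * hidden_dim, n_parse_labels)
        # Auxiliary task: one scalar gaze value per token
        # (e.g. a fixation-duration measure).
        self.gaze_head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))
        return self.parse_head(h), self.gaze_head(h).squeeze(-1)

model = MultitaskGazeParser(vocab_size=10_000, n_parse_labels=500)
parse_loss = nn.CrossEntropyLoss()
gaze_loss = nn.MSELoss()
opt = torch.optim.Adam(model.parameters())

def train_step(batch, task):
    # The treebank and the gaze corpus may be disjoint: a treebank
    # batch carries only parse labels, an eye-tracking batch only gaze
    # values, so each step backpropagates through the shared encoder
    # via exactly one head.
    opt.zero_grad()
    parse_logits, gaze_pred = model(batch["tokens"])
    if task == "parse":
        loss = parse_loss(parse_logits.flatten(0, 1),
                          batch["parse_labels"].flatten())
    else:
        loss = gaze_loss(gaze_pred, batch["gaze"])
    loss.backward()
    opt.step()
    return loss.item()

# Example: one parse step and one gaze step on random toy batches.
parse_batch = {"tokens": torch.randint(0, 10_000, (8, 12)),
               "parse_labels": torch.randint(0, 500, (8, 12))}
gaze_batch = {"tokens": torch.randint(0, 10_000, (8, 12)),
              "gaze": torch.rand(8, 12)}
train_step(parse_batch, "parse")
train_step(gaze_batch, "gaze")
```

Because only the shared encoder is influenced by the gaze loss, gaze supervision can shape the parser during training while leaving the test-time interface (tokens in, parse labels out) unchanged.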

Cited by 12 publications (8 citation statements) · References 24 publications
“…The second research area to which our work contributes is augmenting NLP models with gaze data. In this area, gaze during reading has been used for tasks such as syntactic annotation (Barrett and Søgaard, 2015a,b; Barrett et al., 2016; Strzyz et al., 2019), text compression (Klerke et al., 2016), text readability (González-Garduño and Søgaard, 2017), Named Entity Recognition (Hollenstein and Zhang, 2019), and sentiment classification (Mishra et al., 2016, 2017, 2018). Work on the first four tasks used task-independent eye-tracking corpora, primarily the Dundee corpus (Kennedy et al., 2003) and GECO (Cop et al., 2017).…”
Section: Related Work (mentioning)
confidence: 99%
“…Recently, an array of studies has investigated how external cognitive signals, and thus the injection of human bias, can enhance the capacity of artificial neural networks (ANNs) to understand natural language (Hollenstein et al., 2019a; Strzyz et al., 2019; Schwartz et al., 2019; Gauthier and Levy, 2019), and vice versa, how language processing in ANNs might enhance our understanding of human language processing (Hollenstein et al., 2019b). Others scrutinized whether machine attention deviates from human attention when disentangling linguistic or visual scenes (Barrett et al., 2018; Das et al., 2016).…”
Section: Related Work (mentioning)
confidence: 99%
“…Some utilized gaze features as word embeddings to inform ANNs about which syntactic linguistic features humans deem decisive in their language processing. In so doing, they have successfully refined state-of-the-art Named Entity Recognition (NER) systems (Hollenstein and Zhang, 2019), POS taggers (Barrett et al., 2016) and dependency parsers (Strzyz et al., 2019). Others have drawn attention to the enhancement of semantic disentanglement, and improved tasks such as sarcasm detection (Mishra et al., 2017) or sentiment analysis (Mishra et al., 2016) through leveraging human gaze.…”
Section: Related Work (mentioning)
confidence: 99%
“…Cognitive neuroscience, from the perspective of language processing, studies the biological and cognitive processes that underlie language processing in the human brain, while natural language processing (NLP) teaches machines to read, analyze, translate and generate human language sequences (Muttenthaler et al., 2020). The commonality of language processing shared by these two areas forms the basis of cognitively-inspired NLP, which uses cognitive language processing signals generated by human brains to enhance or probe neural models on a variety of NLP tasks, such as sentiment analysis (Mishra et al., 2017; Barrett et al., 2018), named entity recognition (NER) (Hollenstein and Zhang, 2019), dependency parsing (Strzyz et al., 2019), and relation extraction (Hollenstein et al., 2019a). Despite the success of cognitively-inspired NLP on some tasks, issues remain in how cognitive features are used. First, to integrate cognitive processing signals into neural models for NLP tasks, most previous studies have simply concatenated word embeddings directly with cognitive features from eye-tracking or EEG, ignoring the large differences between these two types of representations (a minimal sketch of this concatenation appears below).…”
Section: Introduction (mentioning)
confidence: 99%
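To make the criticized baseline concrete, here is a minimal sketch, under assumed dimensions, of the direct-concatenation approach that statement describes: token-level cognitive features are simply appended to each word embedding, with no attempt to reconcile the two representations' scales or distributions. All tensor names and sizes are hypothetical.

```python
# Hypothetical sketch of the direct-concatenation baseline: cognitive
# features (eye-tracking or EEG) appended to word embeddings.
import torch

batch, seq_len = 32, 20
word_emb = torch.randn(batch, seq_len, 300)   # e.g. pretrained word vectors
gaze_feats = torch.randn(batch, seq_len, 5)   # e.g. fixation counts/durations
# Concatenate along the feature axis; the combined vector
# (300 + 5 = 305 dims) feeds the downstream model unchanged,
# despite the two representations having very different scales.
inputs = torch.cat([word_emb, gaze_feats], dim=-1)  # shape (32, 20, 305)
```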