Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2022
DOI: 10.18653/v1/2022.acl-long.296
Do Transformer Models Show Similar Attention Patterns to Task-Specific Human Gaze?

Cited by 7 publications (4 citation statements)
References 0 publications
“…task difficulty and enjoyment) is an open question. As an alternative to actively annotating rationales, techniques like eye-tracking might allow for passive rationale collection (Eberle et al., 2022); for example, constructing a heatmap of relevant text snippets based on the annotator's gaze while performing a task.…”
Section: Effect on Annotation Time
confidence: 99%
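The passive-rationale idea in this statement reduces to aggregating fixation time per token. Below is a minimal sketch of that step; the (token index, duration) fixation format, the gaze_heatmap helper, and the normalization are illustrative assumptions, not the pipeline of Eberle et al. (2022).

```python
# Sketch: turn eye-tracking fixations into a token-level heatmap that can
# serve as a passively collected rationale. Fixation format is assumed to
# be (token_index, duration_ms) pairs, one per recorded fixation.
from collections import defaultdict

def gaze_heatmap(tokens, fixations):
    """Aggregate fixation durations per token and normalize to sum to 1."""
    duration = defaultdict(float)
    for token_idx, ms in fixations:
        duration[token_idx] += ms
    total = sum(duration.values()) or 1.0
    return [duration[i] / total for i in range(len(tokens))]

tokens = ["The", "verdict", "was", "overturned", "on", "appeal"]
fixations = [(1, 220.0), (3, 310.0), (3, 180.0), (5, 140.0)]
print(gaze_heatmap(tokens, fixations))
# Tokens with high relative fixation time mark the annotator's rationale.
```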
“…But the irrelevant, redundant, and noisy information captured by attention makes it hard to find meaningful patterns. Alternatively, accumulated attention, which quantifies how information flows across tokens, can be used for interpretation (Abnar and Zuidema, 2020; Eberle et al., 2022). However, in causal LMs information flows in one direction, which causes an over-smoothing problem when the model is deep.…”
Section: Related Work
confidence: 99%
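The accumulated attention referenced here is attention rollout (Abnar and Zuidema, 2020): per-layer attention matrices are mixed with the identity to model residual connections, re-normalized, and multiplied across layers. A minimal sketch follows; the input shape (one row-stochastic matrix per layer, already averaged over heads) and the 0.5 residual weight follow the rollout paper, while the toy data is an assumption.

```python
# Sketch of attention rollout: compose per-layer attention to estimate
# token-to-token information flow across the whole model.
import numpy as np

def attention_rollout(layer_attentions):
    seq_len = layer_attentions[0].shape[0]
    rollout = np.eye(seq_len)
    for attn in layer_attentions:
        attn = 0.5 * attn + 0.5 * np.eye(seq_len)       # account for residual stream
        attn = attn / attn.sum(axis=-1, keepdims=True)  # keep rows summing to 1
        rollout = attn @ rollout                        # compose flow layer by layer
    return rollout  # rollout[i, j]: estimated flow from token j into token i

rng = np.random.default_rng(0)
layers = [rng.dirichlet(np.ones(6), size=6) for _ in range(4)]  # toy 4-layer model
print(attention_rollout(layers).round(3))
```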
“…In recent years, various attempts have been made to investigate the correlation of human attention with the machine attention of a pre-trained large language model (Eberle et al., 2022; Sood et al., 2020a; Bensemann et al., 2022). Eberle et al. (2022) highlighted the inability of cognitive models to account for higher-level cognitive activities like semantic role matching, hence motivating the use of large language models (LLMs) for modelling human gaze. Hollenstein et al. (2021) showed the efficacy of LLMs in predicting gaze features for multiple languages, including English, Russian, Dutch and German.…”
Section: Gaze in Deep Learning-Based Architectures
confidence: 99%
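A common way to quantify the human-machine attention correlation these works study is a per-sentence rank correlation between relative fixation durations and model importance scores. The sketch below uses Spearman's rho from SciPy; the toy vectors and the assumption that gaze and model tokens are already aligned are illustrative, not the exact protocol of any cited paper.

```python
# Sketch: rank correlation between token-level human gaze (relative
# fixation duration) and token-level model importance (e.g. attention
# or rollout scores). Token alignment is assumed to be done upstream.
from scipy.stats import spearmanr

human_gaze = [0.05, 0.30, 0.10, 0.35, 0.05, 0.15]  # relative fixation per token
model_attn = [0.08, 0.25, 0.12, 0.30, 0.07, 0.18]  # model importance per token

rho, p_value = spearmanr(human_gaze, model_attn)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```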