Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications 2019
DOI: 10.1145/3314111.3319917

Visually analyzing eye movements on natural language texts and source code snippets

Abstract: In this paper, we analyze eye movement data of 26 participants using a quantitative and qualitative approach to investigate how people read natural language text in comparison to source code. In particular, we use the radial transition graph visualization to explore strategies of participants during these reading tasks and extract common patterns amongst participants. We illustrate via examples how visualization can play a role at uncovering behavior of people while reading natural language text versus source …

Cited by 15 publications (9 citation statements)
References 28 publications
“…We find that attention-based methods do carry predictive power, and in particular that follow-up attention performs best among all methods for both Spearman rank and top-3 overlap (Figure 6). We note that a purely position-based approach, inspired by findings of a linear order when reading source code (Blascheck & Sharif, 2019), performs better than the copy-cat method despite being completely agnostic to the actual content of the code. Ablation study of follow-up attention.…”
Section: Developers vs. Neural Models
confidence: 85%
“…An eye tracking dataset with 216 participants was collected by Bednarik et al (2020); however, they only consider two short snippets (11-22 LoC) of code, since they do not support scrolling. Blascheck & Sharif (2019) and Busjahn et al (2015) instead studied the reading order in C++ and Java code comprehension tasks, focusing on six small programs that could fit into a single screen, whereas we consider longer snippets and a much larger dataset of 45 unique tasks. Sharafi et al (2022) have recently studied code navigation strategies on Java code with eye tracking involving 36 participants, focusing on the bug fixing process, whose specific goal might elicit a different kind of reasoning compared to our sense-making tasks.…”
Section: B3 Eye-Tracking Studies
confidence: 99%
“…T. Blascheck et al. collected the state-of-the-art visualization techniques used in eye tracking [12]. According to this review and some other reports by the same author, a handy technique to visualize scanpaths is a circular heat map [16,17]. It is a graph-based representation in which AOIs are drawn as nodes in a circle and saccades are depicted as arcs between them.…”
Section: Related Work
confidence: 99%
“…Therefore, we collect data on analytic activities and insights from two analysts applying two different visualization techniques to inspect the same dataset. We re-analyzed the data from a study comparing how novices and non-novices read natural language text (NT) and source code (SC) (21 participants × 2 conditions ∈ {NT, SC} × 4 stimuli per condition = 168 AOI visit sequences) [Blascheck and Sharif 2019]. The AOIs correspond to each line of text or source code.…”
Section: Analysis Procedures
confidence: 99%