2020
DOI: 10.1109/mprv.2020.2967736
How Far Are We From Quantifying Visual Attention in Mobile HCI?

Abstract: With an ever-increasing number of mobile devices competing for our attention, quantifying when, how often, or for how long users visually attend to their devices has emerged as a core challenge in mobile human-computer interaction. Encouraged by recent advances in automatic eye contact detection using machine learning and device-integrated cameras, we provide a fundamental investigation into the feasibility of quantifying visual attention during everyday mobile interactions. We identify core challenges and sou…

Cited by 6 publications (6 citation statements)
References 22 publications
“…Additionally, in the absence of the instructor's expression, the attendees can give different expressions irrespective of whether they are engaged or not. To address these limitations, several studies explore gaze-based visual attention [7,8,25,29] to determine the attentiveness of the attendees. In [8], Bâce et al. quantified visual attention by checking whether the attendee was looking at the frontal screen.…”
Section: Dedicated Device-based Approach (mentioning)
confidence: 99%
“…On the other side, Table 1 shows the average responses for the questions focused on the evaluation of the platform and the understandability of the student, as rated by the instructor and well-accustomed participants 7 , respectively.…”
Section: In-the-wild Evaluation (mentioning)
confidence: 99%
“…In recent years, more and more researchers have begun to study the problem of visual LAG recognition. Bâce et al. studied the problem of students' LAG recognition in a small round-table environment, in which an omnidirectional camera is placed on the conference table [9]. Later, they studied a method for identifying LAG in an environment with multiple remote cameras [4].…”
Section: Related Work (mentioning)
confidence: 99%
“…reading, walking, detection of fatigue, cognitive load) enable applications in areas such as quantified self for the mind [16,19]. And with the increasing ubiquity of the technology, new opportunities arise for applications that track social behaviours and interactions between groups of people in real-world settings [3,23]. The workshop is a continuation of the first three EyeWear workshops at UbiComp (2016 in Heidelberg, 2018 in Singapore, and 2019 in London).…”
Section: Introduction (mentioning)
confidence: 99%