Proceedings of the 2019 ACM Southeast Conference
DOI: 10.1145/3299815.3314424

Modeling Students' Attention in the Classroom using Eyetrackers

Cited by 21 publications (10 citation statements)
References 12 publications
“…In existing research, video-based methods extract learners' eye gaze and head pose data as indicators of attention. Veliyath et al [15] and Daniel et al [24] used eye gaze data to recognize concentration; they extracted gaze position, task location, gaze duration, gaze rate, gaze count, and other variables as effective evaluation indicators of concentration during the learning process. In video-based methods, head pose can also serve as an indicator of learners' concentration.…”
Section: Concentration Recognition Based on Interaction and Vision Data
confidence: 99%
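
As a concrete illustration of the gaze-derived indicators listed in the statement above (gaze position, gaze duration, gaze rate, gaze count), the following Python sketch computes them from raw eye-tracker samples given as (timestamp, x, y) tuples. The dispersion-threshold fixation detector and its default thresholds are illustrative assumptions, not the feature pipeline used by Veliyath et al. [15] or Daniel et al. [24].

# Hedged sketch: gaze-feature extraction from raw eye-tracker samples.
# Samples are (timestamp_seconds, x_px, y_px) tuples; the dispersion and
# minimum-duration thresholds are illustrative defaults, not values taken
# from the cited papers.

def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """Simple dispersion-threshold (I-DT style) fixation detection."""
    fixations = []
    window = []
    for t, x, y in samples:
        window.append((t, x, y))
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion > max_dispersion:
            # Window is no longer compact: emit a fixation if it lasted long enough.
            if len(window) > 1 and window[-2][0] - window[0][0] >= min_duration:
                fixations.append((window[0][0], window[-2][0],
                                  sum(xs[:-1]) / (len(xs) - 1),
                                  sum(ys[:-1]) / (len(ys) - 1)))
            window = [window[-1]]
    if len(window) > 1 and window[-1][0] - window[0][0] >= min_duration:
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        fixations.append((window[0][0], window[-1][0],
                          sum(xs) / len(xs), sum(ys) / len(ys)))
    return fixations

def gaze_features(samples):
    """Aggregate indicator-style features: count, rate, duration, position."""
    fixations = detect_fixations(samples)
    total_time = samples[-1][0] - samples[0][0] if len(samples) > 1 else 0.0
    durations = [end - start for start, end, _, _ in fixations]
    return {
        "gaze_count": len(fixations),
        "gaze_rate": len(fixations) / total_time if total_time else 0.0,
        "mean_gaze_duration": sum(durations) / len(durations) if durations else 0.0,
        "mean_gaze_x": sum(f[2] for f in fixations) / len(fixations) if fixations else None,
        "mean_gaze_y": sum(f[3] for f in fixations) / len(fixations) if fixations else None,
    }

In practice, the dispersion and minimum-duration thresholds would be tuned to the eye tracker's sampling rate and the viewing distance.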
“…Concentration is identified from the emotional dimension by extracting features such as facial expressions [11], text [12], and posture [13]. Behavioral aspects of concentration are evaluated through data collected from learners' clickstream [14], eye gaze [15], and other behaviors. The above approaches generally fall into two categories: computer interaction data and computer vision data.…”
Section: Introduction
confidence: 99%
“…Some prior studies have proposed interaction techniques centering on the context of participants' attention [16,39,51], as it has been pointed out that people often have difficulty maintaining their attention during video-based communication [29,30]. These techniques benefit from the significant effort that has been devoted to estimating participants' attentiveness based on visual cues, such as face movement [42], body postures [54], and gaze [7,24,45]. They then use the estimation results to enhance learners' performance, for instance in the case of video-based learning, as it is widely acknowledged that learners' attention and engagement are strongly related to their learning performance [4,16].…”
Section: Attention-Related Interaction Techniques for Video-Based Learning
confidence: 99%
“…In addition, the accuracy of the machine learning-based sensing module in the second experiment can be improved using the latest techniques [24,42,45,54]. In this study, we used a naïve approach based on head pose to investigate the effect of the proposed approach with false-positive detection.…”
Section: Limitations and Future Work
confidence: 99%
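
For context on what a "naïve approach based on head pose" might look like, the sketch below shows one plausible reading: thresholds on yaw and pitch angles, with a majority vote over recent frames to suppress false positives from brief glances away. The class name, angle limits, and window size are hypothetical and not taken from the cited study.

# Hedged sketch of a naive head-pose attentiveness check, in the spirit of the
# "naïve approach based on head pose" quoted above. The angle thresholds and
# smoothing window are illustrative assumptions, not values from the cited study.

from collections import deque

class HeadPoseAttentionEstimator:
    def __init__(self, yaw_limit_deg=25.0, pitch_limit_deg=20.0, window_size=15):
        self.yaw_limit = yaw_limit_deg
        self.pitch_limit = pitch_limit_deg
        self.recent = deque(maxlen=window_size)  # rolling per-frame decisions

    def update(self, yaw_deg, pitch_deg):
        """Feed one frame's head pose; return the smoothed attentive label."""
        facing_forward = abs(yaw_deg) <= self.yaw_limit and abs(pitch_deg) <= self.pitch_limit
        self.recent.append(facing_forward)
        # Majority vote over the window smooths out momentary head turns.
        return sum(self.recent) > len(self.recent) / 2

# Example: feed per-frame (yaw, pitch) estimates from any head-pose tracker.
estimator = HeadPoseAttentionEstimator()
for yaw, pitch in [(3.0, -2.0), (40.0, 5.0), (2.0, 1.0)]:
    print(estimator.update(yaw, pitch))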
“…Frasson, 2011a, 2011b; Abujayyab et al, 2017), and eyetracker (e.g. Veliyath et al, 2019), these types of research cannot be widely used for attention detection within asynchronous e-learning environments, as they need special equipment. The second category focused on (ii) analyzing the data generated from the interaction between the learner and the learning system (e.g., de Vicente, 2003; Qu.…”
Section: Introduction
confidence: 99%