Proceedings of Computer Graphics International 2018
DOI: 10.1145/3208159.3208180
Real-Time Eye-Gaze Based Interaction for Human Intention Prediction and Emotion Analysis

Cited by 15 publications (3 citation statements)
References 19 publications
“…They can provide information about user behavior and states. Eye gaze is a functional interaction to generate patterns for identifying the cognitive information of the users (He et al, 2018).…”
Section: Related Work (citation type: mentioning, confidence: 99%)
“…Gaze has been used successfully in the past as an interface for machines, particularly in human-computer interfaces [17], [18] and social robotics to monitor human attention, emotion and engagement [19]- [22] as well as robotic laparoscopic surgery [23]. When it comes to patients with movement disabilities, there is work on the use of gaze patterns in rehabilitation [24], for the control of 2 degrees of freedom in upper limb exoskeletons, where the patient uses gaze to direct the robot on a 2D surface.…”
Section: Introduction (citation type: mentioning, confidence: 99%)
“…Eye gaze based systems have been examined for interface control [1] and evaluation [2,3] since the 1980s, shortly after computer technology advanced to the point of making the creation of such systems possible. Gaze-based interaction has many applications in a range of fields such as engineering and human-computer interaction and offers many opportunities as well as challenges for research [4,5]. Users will generally look at what they wish to interact with, and even reliably do so before the target is reached using conventional mouse movement [6].…”
Section: Introduction (citation type: mentioning, confidence: 99%)