Assistive robotic systems endeavour to support those with movement disabilities, enabling them to move again and regain functionality. A main issue with these systems is the complexity of their low-level control, and how to translate this into simpler, higher-level commands that are easy and intuitive for a human user to interact with. We have created a multimodal system, consisting of different sensing, decision-making and actuating modalities, leading to intuitive, human-in-the-loop assistive robotics. The system takes its cue from the user's gaze to decode their intentions and implement low-level motion actions to achieve high-level tasks. As a result, the user simply has to look at the objects of interest for the robotic system to assist them in reaching for those objects, grasping them, and using them to interact with other objects. We present our method for 3D gaze estimation, and a grammars-based implementation of sequences of action with the robotic system. The 3D gaze estimation is evaluated with 8 subjects, showing an overall accuracy of 4.68 ± 0.14 cm. The full system is tested with 5 subjects, showing successful implementation of 100% of reach-to-gaze-point actions, full implementation of pick-and-place tasks in 96% of cases, and of pick-and-pour tasks in 76% of cases. Finally, we present a discussion of our results and of the future work needed to improve the system.
Fig. 3: Geometric representation of the gaze angle and the camera frame, used for the 3D gaze point estimation.
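The figure caption above points at the underlying geometry: a gaze direction expressed as angles in the camera frame, intersected with the observed scene to yield a 3D gaze point. Below is a minimal ray-marching sketch of that idea, not the authors' implementation; the angle conventions, eye position, and the `depth_fn` interface are all assumptions for illustration.

```python
import numpy as np

def gaze_ray(eye_pos, yaw, pitch):
    """Unit direction of the gaze ray in the camera frame.

    yaw/pitch are gaze angles (radians) relative to the camera's
    optical axis (+z); eye_pos is the 3D eye position in metres.
    """
    d = np.array([
        np.sin(yaw) * np.cos(pitch),   # x: left/right
        -np.sin(pitch),                # y: up/down (camera y points down)
        np.cos(yaw) * np.cos(pitch),   # z: forward
    ])
    return eye_pos, d / np.linalg.norm(d)

def gaze_point_3d(eye_pos, yaw, pitch, depth_fn, t_max=3.0, step=0.01):
    """March along the gaze ray until it meets the scene surface.

    depth_fn(p) -> scene depth (m) at the pixel that 3D point p
    projects to, or None outside the camera frustum (assumed API).
    """
    origin, direction = gaze_ray(eye_pos, yaw, pitch)
    t = step
    while t < t_max:
        p = origin + t * direction
        z = depth_fn(p)
        if z is not None and p[2] >= z:   # ray has reached the surface
            return p
        t += step
    return None  # no intersection within t_max
```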
The methodology of eye tracking has been gradually making its way into various fields of science, assisted by the diminishing cost of the associated technology. In an international collaboration to open up the prospect of eye-movement research for programming educators, we present a case study on program comprehension and preliminary analyses together with some useful tools. The main contributions of this paper are (1) an introduction to eye tracking to study programmers; (2) an approach that can help elucidate how novices learn to read and understand programs and to identify improvements to teaching and tools; (3) a consideration of data-analysis methods and challenges, along with tools to address them; and (4) some larger computing-education questions that can be addressed (or revisited) in the context of eye tracking.
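One data-analysis step that such studies typically depend on is segmenting raw gaze samples into fixations. A common approach (not necessarily the one used in this work) is dispersion-threshold identification (I-DT); a minimal sketch follows, with the sample format and thresholds chosen purely for illustration.

```python
def idt_fixations(samples, max_dispersion=1.0, min_duration=0.1):
    """Dispersion-threshold (I-DT) fixation detection.

    samples: time-ordered list of (t, x, y) gaze samples,
             t in seconds, x/y in degrees of visual angle.
    Returns (t_start, t_end, cx, cy) for each detected fixation.
    """
    fixations, i, n = [], 0, len(samples)
    while i < n:
        # grow a window covering the minimum fixation duration
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= n:
            break
        xs = [s[1] for s in samples[i:j]]
        ys = [s[2] for s in samples[i:j]]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
            # extend the window while dispersion stays under threshold
            while j < n:
                xs.append(samples[j][1]); ys.append(samples[j][2])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    xs.pop(); ys.pop()
                    break
                j += 1
            cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
            fixations.append((samples[i][0], samples[j - 1][0], cx, cy))
            i = j
        else:
            i += 1
    return fixations
```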
A large dataset that contains the eye movements of N=216 programmers of different experience levels captured during two code comprehension tasks is presented. Data are grouped in terms of programming expertise (from none to high) and other demographic descriptors. Data were collected through an international collaborative effort that involved eleven research teams across eight countries on four continents. The same eye-tracking apparatus and software were used for the data collection. The Eye Movements in Programming (EMIP) dataset is freely available for download. The varied metadata in the EMIP dataset provides fertile ground for the analysis of gaze behavior and may be used to gain novel insights about code comprehension.
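A natural first analysis is to slice the data by the expertise groups described above. A minimal pandas sketch is below; the file name and column labels are hypothetical placeholders, since the actual layout is defined by the released EMIP files.

```python
import pandas as pd

# Hypothetical file and column names; consult the EMIP documentation
# for the actual layout of the released metadata and gaze files.
meta = pd.read_csv("emip_metadata.csv")

# Compare a simple gaze metric across expertise groups
summary = (
    meta.groupby("expertise")["fixation_duration_ms"]
        .agg(["count", "mean", "std"])
        .sort_values("mean")
)
print(summary)
```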
Existing wheelchair control interfaces, such as sip-and-puff or screen-based gaze-controlled cursors, make safe, independent navigation challenging for the severely disabled, as users continuously need to interact with an interface during navigation. This puts a significant cognitive load on users and prevents them from interacting with the environment in other ways during navigation. We have combined eye-tracking/gaze-contingent intention decoding with computer-vision context-aware algorithms and autonomous navigation drawn from self-driving vehicles to allow paralysed users to drive by eye, simply by decoding natural gaze about where the user wants to go: A.Eye Drive. Our "Zero UI" driving platform allows users to look at and interact visually with an object or destination of interest in their visual scene, and the wheelchair autonomously takes the user to the intended destination, while continuously updating the computed path around static and dynamic obstacles. This intention-decoding technology empowers the end-user by promising more independence through their own agency.
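The pipeline described (decode a gazed-at destination, then navigate autonomously while replanning around obstacles) reduces to a simple control loop. Below is a schematic sketch with hypothetical component interfaces, not the A.Eye Drive code.

```python
import time

def drive_by_eye(gaze, scene, planner, base, replan_hz=5.0):
    """Skeleton of a gaze-driven navigation loop.

    Assumed interfaces, for illustration only:
      gaze.target()      -> dwelled-on 3D destination, or None
      scene.obstacles()  -> current static + dynamic obstacle map
      planner.plan(a, b, obstacles) -> path as a list of waypoints
      base.pose() / base.follow(path) / base.stop() -> wheelchair base
    """
    goal = None
    while True:
        new_goal = gaze.target()
        if new_goal is not None:
            goal = new_goal                # user looked at a new destination
        if goal is not None:
            path = planner.plan(base.pose(), goal, scene.obstacles())
            if path:
                base.follow(path)          # path continuously updated
            else:
                base.stop()                # blocked: wait for a clear route
        time.sleep(1.0 / replan_hz)
```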
Dashboards are an important field of investigation as they are the visual component of management information systems. Our study aimed to find out the effects of changes in the types and numbers of line graphs that can be displayed simultaneously on a single screen. Two laboratory experiments were conducted using an eye tracker to find out how subjects' perception of line graphs on dashboards changes with an increase in the number of graphs, changes in their sizes, and an increase in the overall area taken up by the graphs on the screen. We show that if the graphs take up the same area, the subjects perceive line graphs displayed simultaneously in a similar manner. If the subjects are shown an increasing number of graphs of the same size, they take longer to respond to tasks and have a higher fixation count per stimulus. The study revealed no correlation between a graph's slope (trend) and subjects' perception.
Orlov, P., Ermolova, T., Laptev, V., Mitrofanov, A. and Ivanov, V. The Eye-tracking Study of the Line Charts in Dashboards Design.
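Metrics such as fixation count per stimulus are straightforward to reproduce from exported fixation data once each graph's screen region is known. A minimal sketch, assuming fixations are available as (x, y) centres in pixels and each graph occupies an axis-aligned rectangle; both assumptions are illustrative, not the study's tooling.

```python
def fixations_per_graph(fixations, graph_rects):
    """Count fixations landing inside each graph's screen rectangle.

    fixations:   iterable of (x, y) fixation centres in pixels
    graph_rects: {graph_id: (x0, y0, x1, y1)} axis-aligned AOIs
    """
    counts = {gid: 0 for gid in graph_rects}
    for x, y in fixations:
        for gid, (x0, y0, x1, y1) in graph_rects.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[gid] += 1
                break  # AOIs assumed non-overlapping
    return counts
```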