2016 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2016.7487502
3D gaze cursor: Continuous calibration and end-point grasp control of robotic actuators

Abstract: Eye movements are closely related to motor actions and can therefore be used to infer motor intentions. In some cases, eye movements are also the only means of communication and interaction with the environment for paralysed patients with severe motor deficiencies. Despite this, eye-tracking technology still has very limited use as a human-robot control interface, and its applicability is largely restricted to simple 2D tasks that operate on screen-based interfaces and do not suffice…

Cited by 21 publications (22 citation statements). References 16 publications.
“…Here, instead we solve the Midas Touch Problem using a binocular eye-tracker to detect wink-based commands, thereby eliminating dwell times or long blinking. We previously demonstrated that binocular eye-tracking control enables real-time closed-loop control that outperforms invasive (and non-invasive) brain-machine interfaces in terms of cost and read-out data rates [12] for continuous 3D end-point control of robot actuators [25] and/or for free-gaze navigation of wheelchairs [26].…”
Section: Discussion
confidence: 99%
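The quoted passage does not spell out how wink commands are detected, but the core idea is that a wink (one eye closed, the other open) is easy to distinguish from a blink (both eyes closed), so it can trigger commands without dwell times. The sketch below is our own minimal illustration of that idea, not the cited authors' implementation; the frame-count threshold, the assumed 120 Hz sample rate, and the per-eye openness inputs are all assumptions.

```python
# Hypothetical sketch of binocular wink detection (not the authors' code).
# A wink = exactly one eye closed for a sustained run of frames; a blink
# (both eyes closed) is ignored, which is what sidesteps the Midas Touch
# problem of dwell- or blink-triggered commands firing unintentionally.

WINK_FRAMES = 15  # ~125 ms at an assumed 120 Hz eye-tracker sample rate


class WinkDetector:
    def __init__(self, min_frames: int = WINK_FRAMES):
        self.min_frames = min_frames
        self.run_length = 0
        self.run_state = None  # "LEFT", "RIGHT", or None

    def update(self, left_open: bool, right_open: bool):
        """Feed one frame of per-eye openness; return a command string or None."""
        if left_open and not right_open:
            state = "RIGHT"          # right eye closed -> right wink
        elif right_open and not left_open:
            state = "LEFT"           # left eye closed -> left wink
        else:
            state = None             # both open, or both closed (a blink)

        if state is not None and state == self.run_state:
            self.run_length += 1
        else:
            self.run_state, self.run_length = state, 1

        if state is not None and self.run_length == self.min_frames:
            return f"{state}_WINK"   # fire exactly once per sustained wink
        return None
```

Feeding `update()` one sample per tracker frame yields at most one command per sustained wink, so brief asymmetries in a normal blink fall below the debounce threshold and are discarded.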
“…These results sit at one end of the spectrum of solutions for controlling an augmentative device, which runs from substitution all the way to direct augmentation via higher-level control, either brain-machine interfacing or cognitive interfaces such as eye-gaze decoding. We previously showed that the end-point of visual attention (where one looks) can control the spatial end-point of a robotic actuator with centimetre-level precision (Tostado et al., 2016; Maimon-Mor et al., 2017; Shafti et al., 2019b). This direct control modality is more effective from a user perspective than voice or neuromuscular signals as a natural control interface (Noronha et al., 2017).…”
Section: Discussion
confidence: 99%
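Using the 3D gaze end-point as a robot target implies some calibrated map from eye-tracker coordinates into the robot workspace. The paper's title refers to continuous calibration, whose details are not given in this excerpt; as a stand-in, the sketch below fits a simple affine map by least squares over calibration pairs. The function names and the assumption that vergence already yields a 3D fixation point are ours.

```python
import numpy as np

# Illustrative sketch (our assumption, not the paper's calibration method):
# map 3D gaze fixation points into robot end-effector coordinates with an
# affine transform fitted by least squares over calibration correspondences.


def fit_gaze_to_robot(gaze_pts: np.ndarray, robot_pts: np.ndarray) -> np.ndarray:
    """Fit a 3x4 affine map A such that robot ~= A @ [gaze; 1].

    gaze_pts, robot_pts: (N, 3) arrays of corresponding calibration points.
    """
    n = gaze_pts.shape[0]
    G = np.hstack([gaze_pts, np.ones((n, 1))])         # (N, 4) homogeneous
    A, *_ = np.linalg.lstsq(G, robot_pts, rcond=None)  # solves for (4, 3)
    return A.T                                         # (3, 4)


def gaze_to_robot(A: np.ndarray, gaze_pt: np.ndarray) -> np.ndarray:
    """Map one 3D gaze fixation point into the robot workspace."""
    return A @ np.append(gaze_pt, 1.0)
```

In a continuous-calibration setting, one could refit (or recursively update) `A` whenever fresh correspondences arrive, so drift in headset placement degrades accuracy only briefly; the centimetre-level precision quoted above would bound the acceptable residual of such a fit.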
“…In our case, we demonstrated that our FastOrient algorithm works out of the box: it does not require a complex training pipeline and is sufficient to allow natural wrist and grasp interaction. With the technology that we have demonstrated, we should be able to combine gaze-based 3D positioning of the arm for grasping, as demonstrated in [4]–[6], wink-based detection of grasping intention [7], and now automatic processing of the grasp orientation based on the findings of this paper, without requiring the user to fine-tune the orientation of the hand, resulting in a full, intuitive assistive system. We have demonstrated FastOrient across 5 different realistic surfaces and on 26 different objects pertaining to standard activities of daily living.…”
Section: Discussion
confidence: 99%
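The FastOrient method itself is not described in this excerpt, so the following is only a generic illustration of the kind of computation "automatic processing of the grasp orientation" could involve: taking the principal axis of an object's image mask via PCA and grasping across it. Every name and design choice here is hypothetical.

```python
import numpy as np

# Hypothetical illustration of deriving a wrist/grasp orientation from a
# binary object mask (NOT the FastOrient algorithm): use the PCA principal
# axis of the object pixels and orient the grasp perpendicular to it.


def grasp_angle_from_mask(mask: np.ndarray) -> float:
    """Return a wrist rotation (radians) for a binary object mask (H, W)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                 # centre the pixel cloud
    cov = pts.T @ pts / len(pts)            # 2x2 covariance of the shape
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]  # long axis of the object
    axis_angle = np.arctan2(major[1], major[0])
    return axis_angle + np.pi / 2           # grasp across the long axis
```

A training-free geometric rule of this sort is consistent with the quoted claim of working out of the box, since it needs no learned model, only a segmented view of the object.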
“…Multi-modal systems relying on unaffected abilities, e.g. the use of gaze-based robotic end-point control for reaching assistance [4]–[6], or of wearable robotics controlled through eye winks and voice [7]. In all these cases, a suitable eventual grasp is only possible with the correct orientation of the hand, be it the human hand using an orthotic (Figure 1, left) or a tele-operated robotic hand performing the grasp (Figure 1, right).…”
Section: Introduction
confidence: 99%