2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2018.8594045

Free-View, 3D Gaze-Guided, Assistive Robotic System for Activities of Daily Living

Abstract: Patients suffering from quadriplegia have limited body motion, which prevents them from performing daily activities. We have developed an assistive robotic system with an intuitive free-view gaze interface. The user's point of regard is estimated in 3D space while allowing free head movement and is combined with object recognition and trajectory planning. This framework allows the user to interact with objects using fixations. Two operational modes have been implemented to cater for different eventualities. The…
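As an illustration of the fixation-based selection the abstract describes, the sketch below shows one way a 3D point of regard could be matched against recognized objects and turned into a selection after a sustained fixation. This is a minimal sketch under assumed interfaces, not the authors' implementation: `TrackedObject`, the `estimate_por_3d` hook, the dwell threshold, and the polling rate are all hypothetical.

```python
import time
from dataclasses import dataclass

@dataclass
class TrackedObject:
    name: str
    center: tuple   # (x, y, z) in the robot frame; hypothetical layout
    radius: float   # acceptance radius around the object's centre

def contains(obj, por):
    """True if the 3D point of regard falls within the object's sphere."""
    return sum((p - c) ** 2 for p, c in zip(por, obj.center)) <= obj.radius ** 2

def dwell_select(objects, estimate_por_3d, dwell_s=1.0, poll_hz=30.0):
    """Return the first object fixated continuously for `dwell_s` seconds."""
    target, since = None, 0.0
    while True:
        por = estimate_por_3d()  # assumed hook: returns an (x, y, z) gaze point
        hit = next((o for o in objects if contains(o, por)), None)
        if hit is not target:                      # gaze moved: restart the timer
            target, since = hit, time.monotonic()
        elif target is not None and time.monotonic() - since >= dwell_s:
            return target                          # sustained fixation = selection
        time.sleep(1.0 / poll_hz)
```

On this reading, a selected object would then be handed to the object-recognition and trajectory-planning side of the pipeline, and the two operational modes the abstract mentions could map to different actions triggered on selection.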

Cited by 25 publications (23 citation statements).
References 17 publications (22 reference statements).
“…A high observational latency will degrade the fluency of a human-robot system and increase the operator's cognitive burden, effort, and frustration levels. A user interface that requires operators to intentionally gaze at specific objects or regions for a fixed period of time may be less natural and have lower fluency than a user interface that leverages natural eye gaze behaviors (Li et al., 2017; Wang et al., 2018).…”
Section: Discussion
confidence: 99%
“…As described in the above example, we envision that our model could be used to recognize subjects' intended action primitives through their natural eye gaze movements while the robot handles the planning and control details necessary for implementation. In contrast to some state-of-the-art approaches to commanding robot movements (Li and Zhang, 2017; Wang et al., 2018; Shafti et al., 2019; Zeng et al., 2020), subjects would not be forced to unnaturally, intentionally fixate their gaze at target objects in order to trigger pre-programmed actions. Of course, much work is necessary to implement the proposed shared autonomy control scheme, and this is the subject of future work.…”
Section: Discussion
confidence: 99%
“…In the context of assistive robotics, a number of other studies have also considered gaze information (at times combined with multimodal interfaces, such as BCI and haptic feedback) to operate robotic limbs and wheelchairs (Schettino and Demiris, 2019; Zeng et al., 2020). Often in these cases, the gaze is used to implicitly but actively point the system to the object the impaired user wants the robot to interact with (Wang et al., 2018; Shafti et al., 2019).…”
Section: Related Work
confidence: 99%
“…The specific nature of our setup makes it difficult to compare our system to other approaches presented in the literature, since often either natural eye-hand coordination (Haji Fathaliyan et al., 2018; Wang et al., 2020) or no eye-hand coordination at all (Huang and Mutlu, 2016) is used for intention recognition (see section 2). In teleoperation, especially in the context of assistive technologies, the user is often required to actively fixate the object of interest for a certain amount of time in order to trigger an associated action (Wang et al., 2018; Cio et al., 2019; Shafti et al., 2019). We compare here our system to such approaches, to verify the advantage of a probabilistic framework over a deterministic, sensory-driven one.…”
Section: Comparison To Active Fixation-based Approaches
confidence: 99%
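For contrast with the dwell-based triggers discussed in these statements, a probabilistic framework of the kind the last excerpt refers to might accumulate evidence from natural gaze samples rather than waiting for a fixed-duration fixation. The sketch below is purely illustrative, not the cited system's actual model: the Gaussian likelihood over gaze-to-object distance, the `sigma` value, and the assumption that `objects` follow the hypothetical `TrackedObject` layout from the earlier sketch are all mine.

```python
import math

def update_intent(prior, por, objects, sigma=0.05):
    """One Bayesian step: posterior P(obj | gaze) is proportional to
    P(gaze | obj) * P(obj).

    `prior` is a list of probabilities aligned with `objects`, and `por`
    is the current 3D point of regard. The Gaussian likelihood over the
    squared gaze-to-object distance is an assumed model.
    """
    posterior = []
    for obj, p in zip(objects, prior):
        d2 = sum((a - b) ** 2 for a, b in zip(por, obj.center))
        posterior.append(p * math.exp(-d2 / (2.0 * sigma ** 2)))
    z = sum(posterior)
    return [q / z for q in posterior] if z > 0 else prior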