Noninvasive augmented-reality (AR) brain-computer interfaces (BCIs) that
use steady-state visually evoked potentials (SSVEPs) typically adopt a
fully autonomous goal-selection framework to control a robot, where
automation is used to compensate for the low information transfer rate
of the BCI. This scheme improves task performance but users may prefer
direct control (DC) of robot motion. To provide users with a balance of
autonomous assistance and manual control, we developed a shared control
(SC) system for continuous control of robot translation using an SSVEP
AR-BCI, which we tested in a 3D reaching task. The SC system used the
BCI input and robot sensor data to continuously predict which object the
user wanted to reach, generated an assistance signal, and regulated the
level of assistance based on prediction confidence.
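As a concrete illustration of this arbitration loop, the following minimal Python sketch blends the user's decoded velocity with an autonomous velocity towards the most probable object. It is a sketch only: the softmax-over-alignment predictor, the blending law, and all names (`predict_goal`, `shared_control_step`, `beta`) are our hypothetical choices, not the predictor, assistance signal, or confidence measure evaluated in the study.

```python
import numpy as np

def predict_goal(hand_pos, user_vel, goal_positions, beta=5.0):
    """Hypothetical intent model: score each candidate object by how well
    the user's commanded velocity aligns with the direction towards it,
    then softmax the scores into a probability per goal."""
    v = user_vel / (np.linalg.norm(user_vel) + 1e-9)
    scores = np.empty(len(goal_positions))
    for i, g in enumerate(goal_positions):
        d = g - hand_pos
        scores[i] = beta * np.dot(v, d / (np.linalg.norm(d) + 1e-9))
    p = np.exp(scores - scores.max())
    return p / p.sum()

def shared_control_step(hand_pos, user_vel, goal_positions, speed=0.05):
    """One control cycle: predict the intended goal, generate an assistance
    velocity towards it, and blend it with the user's command in proportion
    to prediction confidence."""
    probs = predict_goal(hand_pos, user_vel, goal_positions)
    k = int(np.argmax(probs))
    confidence = float(probs[k])  # high only when one goal clearly dominates
    d = goal_positions[k] - hand_pos
    assist_vel = speed * d / (np.linalg.norm(d) + 1e-9)
    # Confidence regulates the assistance level: ambiguous input leaves the
    # user mostly in direct control; a clear prediction lets assistance dominate.
    return (1.0 - confidence) * user_vel + confidence * assist_vel
```

Any probabilistic intent model would slot into the same blending law; the design choice illustrated here is only that the assistance weight grows with the predictor's confidence.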
Eighteen healthy participants took part in our study, each completing 24
reaching trials using DC and SC. Compared with DC, SC significantly
improved all three outcome measures (paired two-tailed t-tests,
Holm-corrected α<0.05): it increased the mean task success rate
(p<0.0001, µ=36.1%, 95% CI [25.3%, 46.9%]), shortened the normalised
reaching trajectory length (p<0.0001, µ=-26.8%, 95% CI [-36.0%, -17.7%]),
and reduced participant workload measured with the NASA Task Load Index
(p<0.02, µ=-11.6, 95% CI [-21.1, -2.0]).
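For readers unfamiliar with the correction, the Holm step-down procedure at α=0.05 can be reproduced in a few lines; the p-values below are invented for illustration (chosen to be consistent with the bounds reported above) and are not the study's exact values.

```python
# Holm-Bonferroni step-down at alpha = 0.05 over three hypothetical p-values.
alpha = 0.05
p_values = {"success rate": 1e-5, "trajectory length": 5e-5, "workload": 0.015}

# Sort hypotheses by ascending p-value; compare the i-th smallest against
# alpha / (m - i). Stop rejecting at the first non-significant comparison.
m = len(p_values)
for i, (name, p) in enumerate(sorted(p_values.items(), key=lambda kv: kv[1])):
    threshold = alpha / (m - i)
    if p <= threshold:
        print(f"{name}: p={p:.2g} <= {threshold:.3g} -> reject H0")
    else:
        print(f"{name}: p={p:.2g} > {threshold:.3g} -> stop; fail to reject")
        break
```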
Therefore, users of SC can control the robot effectively while
experiencing increased agency. Our system can personalise assistive
technology by letting users select their preferred level of autonomous
assistance.