Intention decoding is an indispensable procedure in hands-free human-computer interaction (HCI). Conventional eye-tracking systems that rely on a single-modality fixation-duration criterion may issue commands that ignore users' real intentions. In the current study, an eye-brain hybrid brain-computer interface (BCI) interaction system was introduced for intention detection through the fusion of multi-modal eye-tracking and ERP (a measurement derived from EEG) features. Eye-tracking and EEG data were recorded from 64 healthy participants as they performed a 40-min customized free-search task for a fixed target icon among 25 icons. The corresponding eye-tracking fixation durations and ERPs were extracted. Five previously validated LDA-based classifiers (RLDA, SWLDA, BLDA, SKLDA, and STDA) and the widely used CNN method were adopted to verify the efficacy of feature fusion in both offline and pseudo-online analyses, and the optimal approach was evaluated by modulating the training-set size and the system response duration. Our study demonstrated that the input of multi-modal eye-tracking and ERP features achieved superior intention-detection performance in single-trial classification of the active search task. Compared with the single-modality ERP feature, this new strategy also induced consistent accuracy across different classifiers. Moreover, in comparison with the other classification methods, SKLDA exhibited superior performance when fusing features, both in the offline test (ACC = 0.8783, AUC = 0.9004) and in online simulation across different sample amounts and duration lengths. In sum, the current study revealed a novel and effective approach for intention classification using an eye-brain hybrid BCI, further supporting the real-life application of hands-free HCI in a more precise and stable manner.
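To make the fusion strategy concrete, the following is a minimal sketch, not the authors' exact pipeline, of concatenating an eye-tracking fixation-duration feature with flattened ERP amplitude features and classifying them with a shrinkage-regularized LDA (the same family as the SKLDA evaluated above). All variable names, data shapes, and the random data are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 50  # assumed dimensions

# Hypothetical ERP features: channel-by-time amplitudes, flattened per trial.
erp = rng.standard_normal((n_trials, n_channels * n_times))
# Hypothetical eye-tracking feature: one fixation duration (s) per trial.
fixation = rng.uniform(0.1, 1.0, size=(n_trials, 1))
y = rng.integers(0, 2, size=n_trials)  # target vs. non-target labels

# Feature fusion: simple concatenation of the two modalities.
X = np.hstack([erp, fixation])

# Shrinkage LDA is suited to this high-dimensional, small-sample regime.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

With real data, the ERP features would come from epochs time-locked to fixation onsets, and the fixation durations from the concurrent eye tracker; the concatenation step is the core of the multi-modal fusion.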
Objective: With the increasing amount of information presented on current human-computer interfaces, eye-controlled highlighting has been proposed as a new display technique to optimise users' task performance. However, it is unknown to what extent an eye-controlled highlighting display facilitates visual search performance. The current study examined the facilitative effect of the eye-controlled highlighting display technique on visual search with respect to two major attributes of visual stimuli: stimulus type and the visual similarity between targets and distractors. Method: In Experiment 1, we used digits and Chinese words as materials to explore the generalisability of the facilitative effect of eye-controlled highlighting. In Experiment 2, we used Chinese words to examine the effect of target-distractor similarity on the facilitation provided by the eye-controlled highlighting display. Results: The eye-controlled highlighting display improved visual search performance when words were used as search targets and when target-distractor similarity was high. No facilitative effect was found when digits were used as search targets or when target-distractor similarity was low. Conclusions: The effectiveness of eye-controlled highlighting on a visual search task was influenced by both stimulus type and target-distractor similarity. These findings provide guidelines for modern interface design with eye-based displays implemented.
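The core mechanism of an eye-controlled highlighting display can be sketched very simply: the interface element under the user's current gaze point is rendered as highlighted. The sketch below is an illustrative assumption about how such a display might be wired up; the Element type, bounding boxes, and the gaze coordinates are all hypothetical, and a real system would stream gaze samples from an eye tracker.

```python
from dataclasses import dataclass

@dataclass
class Element:
    """A hypothetical on-screen item with an axis-aligned bounding box."""
    label: str
    x: float
    y: float
    w: float
    h: float

    def contains(self, gx: float, gy: float) -> bool:
        return self.x <= gx <= self.x + self.w and self.y <= gy <= self.y + self.h

def highlight_under_gaze(elements, gx, gy):
    """Mark as highlighted whichever element the gaze point falls inside."""
    return [(e.label, e.contains(gx, gy)) for e in elements]

# Usage: a two-item display with the gaze resting on the second element.
items = [Element("word_A", 0, 0, 100, 40), Element("word_B", 0, 50, 100, 40)]
print(highlight_under_gaze(items, 30, 60))  # word_B is highlighted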
Visual search is ubiquitous in daily life and has attracted substantial research interest over the past decades. Although accumulating evidence has suggested complex neurocognitive processes underlying visual search, the neural communication across brain regions remains poorly understood. The present work aimed to fill this gap by investigating functional networks of the fixation-related potential (FRP) during a visual search task. Multi-frequency electroencephalogram (EEG) networks were constructed from 70 university students (male/female = 35/35) using FRPs time-locked to target and non-target fixation onsets, which were determined by concurrent eye-tracking data. Then graph theoretical analysis (GTA) and a data-driven classification framework were employed to quantitatively reveal the divergent reorganization between target and non-target FRPs. We found distinct network architectures between target and non-target conditions, mainly in the delta and theta bands. More importantly, we achieved a classification accuracy of 92.74% for target versus non-target discrimination using both global and nodal network features. In line with the GTA results, we found that network integration significantly differed between target and non-target FRPs, while the nodal features contributing most to classification performance resided primarily in the occipital and parietal-temporal areas. Interestingly, we revealed that females exhibited significantly higher local efficiency in the delta band when focusing on the search task. In summary, these results provide some of the first quantitative insights into the underlying brain interaction patterns during the visual search process.
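The analysis pipeline described above can be illustrated with a short sketch, run here on synthetic data rather than the study's EEG recordings: build a binary graph from a (hypothetical) FRP connectivity matrix, extract a global efficiency feature plus nodal degrees, and classify target versus non-target epochs. The threshold value, the feature set, and the SVM classifier are all assumptions for illustration, not the paper's exact framework.

```python
import numpy as np
import networkx as nx
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_epochs, n_nodes = 100, 32  # e.g., 32 EEG channels as network nodes

def graph_features(conn, threshold=0.5):
    """Global efficiency plus nodal degrees of the thresholded network."""
    A = (conn > threshold).astype(int)
    np.fill_diagonal(A, 0)           # no self-connections
    G = nx.from_numpy_array(A)
    degrees = np.array([d for _, d in G.degree()])
    return np.concatenate([[nx.global_efficiency(G)], degrees])

# Hypothetical symmetric connectivity matrices, one per FRP epoch.
X = []
for _ in range(n_epochs):
    C = rng.uniform(0, 1, (n_nodes, n_nodes))
    C = (C + C.T) / 2                # enforce symmetry
    X.append(graph_features(C))
X = np.array(X)
y = rng.integers(0, 2, size=n_epochs)  # target vs. non-target labels

print("CV accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())
```

In the study itself, the connectivity matrices would be estimated per frequency band from FRP epochs, and the informative nodes would then be localized (here, reportedly in occipital and parietal-temporal areas).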