This paper describes the OpenViBE software platform, which enables the design, testing and use of Brain-Computer Interfaces (BCIs). BCIs are communication systems that enable users to send commands to computers by means of brain activity alone. BCIs are gaining interest in the Virtual Reality (VR) community, as they have emerged as promising interaction devices for Virtual Environments (VEs). The key features of the platform are 1) high modularity, 2) embedded tools for visualization and feedback based on VR and 3D displays, 3) BCI design made accessible to non-programmers through visual programming, and 4) a range of tools tailored to different types of users. These features are illustrated with two entertaining VR applications based on a BCI. In the first, users move a virtual ball by imagining hand movements; in the second, they control a virtual spaceship using real or imagined foot movements. Online experiments with these applications, together with an evaluation of the platform's computational performance, showed its suitability for designing VR applications controlled with a BCI. OpenViBE is free software distributed under an open-source license.
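As a hedged illustration of the kind of signal-processing chain such a platform assembles visually, the sketch below builds a minimal motor-imagery classifier (common spatial patterns followed by linear discriminant analysis) in Python. It is not OpenViBE code; the use of MNE and scikit-learn, the synthetic data, and all shapes and parameters are assumptions made for the example.

```python
# Minimal sketch of a motor-imagery BCI pipeline (CSP + LDA), illustrative
# of the processing chain an OpenViBE-style scenario might assemble.
# Data are synthetic stand-ins; all shapes/parameters are assumptions.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
# Stand-in for band-pass-filtered EEG epochs:
# 100 trials x 16 channels x 256 samples, two classes (left/right hand imagery).
X = rng.standard_normal((100, 16, 256))
y = rng.integers(0, 2, size=100)

clf = Pipeline([
    ("csp", CSP(n_components=4)),           # spatial filtering + log-power features
    ("lda", LinearDiscriminantAnalysis()),  # classification
])
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

With real EEG, the random arrays would be replaced by epoched, band-pass-filtered trials; the pipeline structure itself is the standard CSP+LDA pattern for imagined hand movements.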
A brain-computer interface (BCI) is a communication system that allows a user to control a computer or other device by means of brain activity. The BCI described in this paper is based on the P300 speller paradigm introduced by Farwell and Donchin. An unsupervised algorithm is proposed to enhance P300 evoked potentials by estimating spatial filters; the raw EEG signals are then projected into the estimated signal subspace. Data recorded from three subjects were used to evaluate the proposed method. The results, obtained with a Bayesian linear discriminant analysis classifier, show that the proposed method is efficient and accurate.
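The sketch below shows the general idea of evoked-potential enhancement by spatial filtering: estimate filters that maximize the averaged (evoked) response relative to residual noise, then project each epoch onto the leading filters. This is a generic illustration, not the paper's exact unsupervised algorithm, and every function name, shape and parameter is an assumption for the example.

```python
# Generic sketch of spatial-filter enhancement of evoked potentials (P300):
# maximize evoked-response power relative to residual noise via a generalized
# eigendecomposition, then project epochs into the estimated signal subspace.
import numpy as np
from scipy.linalg import eigh

def enhance_evoked(epochs, n_filters=3):
    """epochs: (n_trials, n_channels, n_samples) array of target epochs."""
    avg = epochs.mean(axis=0)                      # evoked response estimate
    S = avg @ avg.T                                # "signal" covariance
    residual = epochs - avg                        # per-trial deviation from evoked
    N = np.mean([e @ e.T for e in residual], axis=0)  # "noise" covariance
    # Solve S w = lambda N w; eigenvalues come back in ascending order.
    _, W = eigh(S, N)
    W = W[:, ::-1][:, :n_filters]                  # keep the leading filters
    return np.einsum("ck,tcs->tks", W, epochs)     # project each epoch

rng = np.random.default_rng(0)
enhanced = enhance_evoked(rng.standard_normal((50, 32, 200)))
print(enhanced.shape)  # (50, 3, 200): trials x filtered components x samples
```

In a pipeline like the paper's, the projected components would then be fed to a classifier such as Bayesian LDA to decide which stimulus elicited a P300.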
Recent studies have shown how embodiment induced by multisensory bodily interactions between individuals can positively change social attitudes (closeness, empathy, racial biases). Here we use a simple neuroscience-inspired procedure to beam human subjects into one of two distinct robots and demonstrate how this can readily increase acceptability of, and social closeness to, that robot. Participants wore a head-mounted display that tracked their head movements and displayed the 3D visual scene taken from the eyes of a robot, which was positioned in front of a mirror and piloted by the subjects' head movements. As a result, participants saw themselves as a robot. When the participant's and the robot's head movements were correlated, participants felt incorporated into the robot, with a sense of agency. Critically, the robot they embodied was judged more likeable and socially closer. Remarkably, we found that the beaming experience with correlated head movements, and the corresponding sensations of embodiment and social proximity, was independent of the robot's humanoid appearance. These findings not only reveal the ease of body-swapping, via visuomotor synchrony, into robots that bear no clear human resemblance; they may also pave the way to making our future robotic helpers socially acceptable.
In this paper we present efforts to characterize the three-dimensional (3-D) movements of the right hand and the face of a French female speaker during the audiovisual production of cued speech. The 3-D trajectories of 50 hand and 63 facial flesh points during the production of 238 utterances were analyzed. These utterances were carefully designed to cover all possible diphones of the French language. Linear and nonlinear statistical models of the articulations and postures of the hand and the face were developed using separate and joint corpora. Automatic recognition of hand and face postures at targets was performed to verify a posteriori that the key hand movements and postures imposed by cued speech had been realized correctly by the subject. The recognition results were further exploited to study the phonetic structure of cued speech, notably the phasing relations between hand gestures and sound production. The hand and face gestural scores are studied with reference to the acoustic segmentation. Finally, we describe a first implementation of a concatenative audiovisual text-to-cued-speech synthesis system that exploits this unique and extensive dataset of cued speech in action.
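As a hedged sketch of what a linear statistical model of such flesh-point data can look like, the snippet below fits a PCA to synthetic face-posture frames and reconstructs a posture from its low-dimensional code. The data are random stand-ins and all dimensions (e.g., 63 points times 3 coordinates) are assumptions made for the example; the paper's actual models are not reproduced here.

```python
# Hedged sketch: a linear (PCA) statistical model of flesh-point postures,
# in the spirit of the linear models mentioned in the abstract.
# Frames are synthetic; dimensions are assumptions (63 facial points in 3-D).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
frames = rng.standard_normal((1000, 63 * 3))   # 1000 frames of flattened 3-D points
pca = PCA(n_components=8).fit(frames)
print("variance captured:", pca.explained_variance_ratio_.sum())

# A posture can be encoded into, and reconstructed from, a compact
# articulatory code, which is the basic operation behind such models.
code = pca.transform(frames[:1])
reconstruction = pca.inverse_transform(code)
```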
Background: Two experiments investigated the effect of features of human behaviour on the quality of interaction with an Embodied Conversational Agent (ECA). Methods: In Experiment 1, visual prominence cues (head nod, eyebrow raise) of the ECA were manipulated to explore the hypothesis that likeability of an ECA increases as a function of interpersonal mimicry. In the context of an error detection task, the ECA either mimicked or did not mimic a head nod or brow raise that humans produced to give emphasis to a word when correcting the ECA's vocabulary. In Experiment 2, the effect of the presence versus absence of facial expressions on comprehension accuracy of two computer-driven ECA monologues was investigated. Results: In Experiment 1, evidence for a positive relationship between ECA mimicry and lifelikeness was obtained; however, a mimicking agent did not elicit more human gestures. In Experiment 2, expressiveness was associated with greater comprehension and higher ratings of humour and engagement. Conclusion: Influences of mimicry can be explained by visual and motor simulation and by bidirectional links between similarity and liking. Cue redundancy and minimized cognitive load are potential explanations for expressiveness aiding comprehension. Electronic supplementary material: The online version of this article (doi:10.1186/s40469-016-0008-2) contains supplementary material, which is available to authorized users.