Abstract. Does using a brain-computer interface (BCI) influence the social interaction between people playing a cooperative game? This study attempts to answer this question by measuring the amount of speech, utterances, instrumental gestures and empathic gestures during a cooperative game in which two participants had to reach a certain goal, and by questioning the participants about their experience afterwards. The results showed that social interaction changed when using a BCI compared to using a mouse: there was a higher number of utterances and empathic gestures. This indicates that the participants reacted more to the higher difficulty of the BCI selection method. Participants also reported that they felt they cooperated better when using the mouse.
Abstract. In human-computer interaction, it is important to offer users the right modalities for particular tasks and situations. Unless the user has a modality suited to the task, neither task performance nor user experience can be optimised. The aim of this study is to assess the appropriateness of a steady-state visually evoked potential based brain-computer interface (BCI) for selection tasks in a computer game. In an experiment, participants evaluated BCI control and a comparable automatic speech recogniser (ASR) control in terms of workload, usability and engagement. The results showed that although the BCI was a satisfactory modality for completing selection tasks, its use in our game was not engaging for the player. In our particular setup, ASR control appeared to be a better alternative to BCI control.
Providing multiple modalities to users is known to improve the overall performance of an interface: the weakness of one modality can be compensated by the strength of another. Moreover, users can choose the modality that best suits their abilities. In this paper we explored whether this holds for direct control of a computer game that can be played using a brain-computer interface (BCI) and an automatic speech recogniser (ASR). Participants played the game in unimodal mode (i.e. ASR-only and BCI-only) and in multimodal mode, where they could switch between the two modalities. The majority of the participants switched modality during the multimodal game, but most of the time they stayed in ASR control. Multimodality therefore did not provide a significant performance improvement over unimodal control in our particular setup. We also investigated the factors that influence modality switching and found that performance and performance-related factors were the most prominent.
The beginning of the 21st century is an exciting time for museums in terms of new, engaging and interactive exhibits. Current technological developments offer museums ideal opportunities to meet the rising expectations of their visitors, many of whom belong to a younger generation growing up in the digital age. With a multitude of devices, objects and people incorporated into an ever-growing network of interconnected systems, new patterns, forms of interaction and social relations will emerge. In order to engage visitors, museums are adopting new technologies that offer many possibilities, but also come with their own challenges and limitations. Museums should start looking at the unification of many such technologies in order to capture visitor attention, encourage visitor interaction and facilitate social activities, since the many digital input and output capabilities of these technologies represent hidden potential. However, unless they are specifically designed for it, many of these capabilities remain hidden and the technologies remain oblivious to each other's features. Making them aware of each other's capabilities opens the channels for new synergies and engaging experiences for museum visitors. This paper proposes a framework that uniquely identifies a community of people, artefacts and devices within the museum environment and provides the means to discover, and make use of, the technological properties of each element, treating them as an interacting ecosystem of complex adaptive systems and networks in physical spaces.