2016
DOI: 10.1155/2016/4909685

Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI

Abstract: Brain-computer interfaces represent a range of acknowledged technologies that translate brain activity into computer commands. The aim of our research is to develop and evaluate a BCI control application for certain assistive technologies that can be used for remote telepresence or remote driving. The communication channel to the target device is based on the steady-state visual evoked potentials. In order to test the control application, a mobile robotic car (MRC) was introduced and a four-class BCI graphical…
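The abstract describes a four-class SSVEP paradigm: the user attends to one of four flickering stimuli, and the attended flicker frequency is detected in the EEG. A common way to perform this detection is canonical correlation analysis (CCA) between an EEG window and sinusoidal reference signals at each candidate frequency. The sketch below illustrates that general technique only; it is not the authors' implementation, and the stimulus frequencies, sampling rate, harmonic count, and channel count are hypothetical placeholders.

```python
# Minimal CCA-based SSVEP classifier sketch (hypothetical parameters).
# Correlates a multichannel EEG window against sine/cosine references
# at each candidate stimulus frequency and picks the best match.
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 256                         # sampling rate in Hz (assumed)
FREQS = [6.0, 7.5, 8.57, 10.0]   # four stimulus frequencies (hypothetical)
N_HARMONICS = 2                  # fundamental plus one harmonic

def reference_signals(freq, n_samples, fs=FS, n_harmonics=N_HARMONICS):
    """Sine/cosine templates at freq and its harmonics, shape (n_samples, 2*n_harmonics)."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def classify_window(eeg):
    """eeg: array of shape (n_samples, n_channels). Returns (best class index, correlations)."""
    cca = CCA(n_components=1)
    corrs = []
    for f in FREQS:
        refs = reference_signals(f, eeg.shape[0])
        x_c, y_c = cca.fit(eeg, refs).transform(eeg, refs)
        corrs.append(abs(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1]))
    return int(np.argmax(corrs)), corrs

# Usage with random data standing in for a 2-second, 8-channel EEG window:
window = np.random.randn(2 * FS, 8)
cls, corrs = classify_window(window)
```

The class with the highest canonical correlation is taken as the attended stimulus; in a four-class interface like the one described, each class would then map to one control command.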


Cited by 47 publications (21 citation statements)
References 40 publications
“…LEDs were used to generate the stimuli. Similarly, the BCI system proposed by Stawicki et al (2016) provides live video feedback and allows the user to control both the camera and the MRC. In driver mode, the available commands are go forward, turn left, turn right, and switch to camera mode; in camera mode, the commands are look left, look right, look up, and switch back to driver mode.…”
Section: Unmanned Ground Vehicles
confidence: 99%
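The quoted description maps the same four SSVEP classes to different commands depending on the active mode. A compact way to express that behavior is a mode-keyed command table; the sketch below is a hedged reconstruction of the described interface, with mode and command names chosen for illustration rather than taken from the paper.

```python
# Sketch of the two-mode command mapping described above.
# The four SSVEP classes (0-3) select different commands depending on
# whether the interface is in driver mode or camera mode.
# Mode and command names are illustrative, not from the paper.
COMMANDS = {
    "driver": ["go_forward", "turn_left", "turn_right", "switch_to_camera"],
    "camera": ["look_left", "look_right", "look_up", "switch_to_driver"],
}

class ControlInterface:
    def __init__(self):
        self.mode = "driver"

    def handle_class(self, ssvep_class: int) -> str:
        """Translate a detected SSVEP class into a command, switching modes as needed."""
        command = COMMANDS[self.mode][ssvep_class]
        if command == "switch_to_camera":
            self.mode = "camera"
        elif command == "switch_to_driver":
            self.mode = "driver"
        return command
```

This design keeps the number of simultaneously displayed stimuli at four while doubling the effective command set, which is the apparent rationale for the driver/camera mode split.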
“…It reflects that, among the 40 reviewed articles, only Liu et al (2018) fails to give concise information on this aspect. The study with the largest number of experimental subjects was Stawicki et al (2016), with 61 participants, and it reported one of the best results (an accuracy of 93.03%). The studies that tested the smallest number of subjects were Gonzalez-Mendoza et al (2015) and Yang et al (2017), with 2 participants each.…”
Section: Analysis Of Technical Specifications Of Previous Work Methods
confidence: 99%
“…When the subject looks at a given command on the screen, the associated stimulus frequency is detected in the EEG signal and the command is triggered. Stawicki et al [48] follow the same approach, using a screen, but render the commands in an interface built on the robot's first-person view, generated by a camera mounted on the mobile robot itself. A slightly more sophisticated approach consists of introducing an avatar to represent the possible actions [13].…”
Section: State Of The Art
confidence: 99%
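The statement above outlines the general trigger loop: a command fires only when its stimulus frequency is detected in the EEG. One common safeguard in such loops, sketched below under assumed parameters, is to require the winning class to exceed a correlation threshold over several consecutive windows before dispatching, which suppresses spurious detections. The threshold, window count, and helper names here are assumptions, not the paper's values.

```python
# Sketch of a thresholded trigger loop (assumed parameters):
# a command is issued only after the same class wins N consecutive
# windows with a correlation above THRESHOLD.
THRESHOLD = 0.4        # minimum correlation to accept a detection (assumed)
N_CONSECUTIVE = 3      # windows the same class must win in a row (assumed)

def trigger_loop(windows, classify_window, dispatch):
    """windows: iterable of EEG windows; classify_window returns (class, correlations);
    dispatch(cls) sends the selected command to the robot (placeholder callback)."""
    streak_class, streak_len = None, 0
    for eeg in windows:
        cls, corrs = classify_window(eeg)
        if corrs[cls] >= THRESHOLD:
            streak_len = streak_len + 1 if cls == streak_class else 1
            streak_class = cls
        else:
            streak_class, streak_len = None, 0
        if streak_len >= N_CONSECUTIVE:
            dispatch(streak_class)
            streak_class, streak_len = None, 0
```

Requiring agreement across consecutive windows trades a slower command rate for fewer false triggers, a relevant concern when the commands drive a physical robot.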