Abstract: This paper presents an interface for navigating a mobile robot that moves at a fixed speed in a planar workspace, with noisy binary inputs obtained asynchronously at low bit rates from a human user through an electroencephalograph (EEG). The approach is to construct an ordered symbolic language for smooth planar curves and to use these curves as desired paths for a mobile robot. The underlying problem is then to design a communication protocol by which the user can, with vanishing error probability, s…
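As a rough illustration of the idea sketched in the abstract, the snippet below is my own minimal sketch (not the paper's actual protocol) of a unicycle robot moving at fixed speed whose turn direction is selected by a stream of binary symbols, so that symbol strings index smooth, constant-speed planar curves. All parameter names and values are assumptions.

```python
import math

def follow_symbols(symbols, v=1.0, kappa=0.5, dt=1.0):
    """Integrate the pose (x, y, theta) of a fixed-speed unicycle.

    Each binary symbol selects one constant-curvature arc segment:
    1 -> arc left (positive turn rate), 0 -> arc right (negative turn rate).
    Speed v, curvature magnitude kappa, and step duration dt are illustrative.
    """
    x, y, th = 0.0, 0.0, 0.0
    for s in symbols:
        w = v * kappa * (1.0 if s == 1 else -1.0)  # signed turn rate
        # Closed-form integration over one constant-curvature arc of duration dt.
        x += (v / w) * (math.sin(th + w * dt) - math.sin(th))
        y += (v / w) * (math.cos(th) - math.cos(th + w * dt))
        th += w * dt
    return x, y, th
```

For example, with the defaults, the symbol string `[1, 1, 0, 0]` traces two left arcs followed by two right arcs, returning the heading to zero while displacing the robot.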
“…Many previous studies have focused on electroencephalography (EEG) and, to a lesser extent, functional magnetic resonance imaging (fMRI). Using these traditional neuroimaging tools, various proof-of-concept BCIs have been built to control the navigation of humanoid (i.e., human-like) robots [3–9], wheeled robots [10–12], flying robots [13,14], robotic wheelchairs [15], and assistive exoskeletons [16]. More recently, functional near-infrared spectroscopy (fNIRS) has emerged as a strong candidate for next-generation BCIs, as fNIRS measures a hemodynamic response similar to that of fMRI [17,18] but with miniaturized sensors that can be used in field settings and even outdoors [19,20].…”
Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found from the virtual-robot BCI session (control of a virtual robot) to the physical-robot BCI session (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI sessions. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training.
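The four-class mapping described above can be sketched as a simple label-to-command table. The pairing below is one plausible assignment (hands to turns, feet to forward/backward); the study's exact mapping may differ, and all names here are illustrative.

```python
# Hypothetical routing of predicted motor-imagery classes to high-level
# navigation commands, as a stand-in for the study's four-class mapping.
IMAGERY_TO_COMMAND = {
    "left_hand": "turn_left",
    "right_hand": "turn_right",
    "left_foot": "move_forward",
    "right_foot": "move_backward",
}

def decode_command(label):
    """Route a classifier's predicted imagery label to a robot command."""
    if label not in IMAGERY_TO_COMMAND:
        raise ValueError(f"unknown imagery class: {label!r}")
    return IMAGERY_TO_COMMAND[label]
```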
“…Research using neural interfaces follows several other unmanned aircraft that were successfully flown and controlled by a pilot using an electroencephalogram (EEG) as input and live onboard video as visual feedback [38,39]. All autonomous research would be done using the onboard avionics package in autopilot mode, which would control the aircraft. It is important to note that there would be a pilot capable of remotely taking over control if necessary, via a toggle switch on the transmitter.…”
This paper describes the development of a large 35%-scale unmanned aerobatic platform named the UIUC Aero Testbed, which is primarily intended to perform aerodynamics research across the full flight regime. The giant-scale aircraft, with a 105-in (2.7-m) wingspan and a weight of 37 lb (17 kg), was constructed from a commercially available radio control model aircraft with extensive modifications and upgrades, including a 12-kW electric motor system that provides a thrust-to-weight ratio in excess of 2-to-1. It is equipped with an avionics suite containing a high-frequency, high-resolution six degree-of-freedom (6-DOF) inertial measurement unit (IMU) that allows the system to collect aircraft state data. This information can be used to generate high-fidelity aerodynamic data for validating high angle-of-attack flight-dynamic models. Collaboration on this project also gave the Aero Testbed the capability to fly fully and semi-autonomously in order to conduct autonomous flight research. A literature review of aerobatic unmanned aircraft used for research is presented first. Then the background and motivations for developing this platform are discussed, followed by a description of the planning and development involved. Finally, initial test flight results are presented, including flight path trajectory plots of several aerobatic maneuvers.
Nomenclature
AVI = avionics integration
ARF = almost ready to fly
COTS = commercial off the shelf
CG = center of gravity
DOF = degree of freedom
EEG = electroencephalogram
IMU = inertial measurement unit
RC = radio control
“…Many teams of scientists have carried out research on the application of BCI technology. For example, they have applied BCI technology to assistive exoskeletons [19], flying robots [20,21], humanoid robots for navigation control [22–29], robotic wheelchairs [20,30,31], and wheeled robots [32–34].…”
A home-auxiliary robot system based on characteristics of the electrooculogram (EOG) and tongue signals is developed in the current study, which can provide daily-life assistance to people with physical mobility disabilities. It relies on five simple actions of the head itself (blinking twice in a row, tongue extension, upward tongue rolling, and left and right eye movements) to complete the motions of a mouse on the system screen (moving up/down/left/right and double-clicking). In this paper, brain network and BP neural network algorithms are used to identify these five types of actions. The results show that, across all subjects, the average recognition rates of eye blinks, tongue extension, and upward tongue rolling were 90.17%, 88.00%, and 89.83%, respectively, and that after training the subjects could complete the five types of movements in sequence within 12 seconds. This indicates that people with physical disabilities can use the system to complete everyday self-help tasks quickly and accurately, bringing great convenience to their lives.