2019 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)
DOI: 10.1109/civemsa45640.2019.9071600

A Novel Auditory-tactile P300-based BCI Paradigm

Cited by 4 publications (4 citation statements)
References 11 publications
“…According to the r²-values and ERP waveforms observed in the current study, the ERP component that appears at approximately 500 ms at Pz, or the late positive potential (LPP), is important for classification during the AV and V conditions. However, P300 enhancement was not observed, in contrast to the findings of some previous multimodal P300-based BCI studies [24, 23, 8]. Nonetheless, other researchers have reported decreases in P300 [23].…”
Section: Discussion (contrasting)
confidence: 92%
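The excerpt above uses r²-type relevance values to argue that the late positive potential around 500 ms at Pz is what drives classification. As a minimal illustration only (not code from the cited paper), the sketch below computes signed r² values per channel and time sample from target versus non-target epochs; the array shapes, variable names, and synthetic data are assumptions for demonstration.

import numpy as np

def signed_r_squared(epochs, labels):
    """epochs: (n_trials, n_channels, n_samples); labels: 1 = target, 0 = non-target."""
    tgt, ntg = epochs[labels == 1], epochs[labels == 0]
    n1, n2 = len(tgt), len(ntg)
    mean_diff = tgt.mean(axis=0) - ntg.mean(axis=0)
    pooled_std = np.concatenate([tgt, ntg]).std(axis=0)
    # Point-biserial correlation, then squared and given the sign of the mean difference.
    r = (np.sqrt(n1 * n2) / (n1 + n2)) * mean_diff / (pooled_std + 1e-12)
    return np.sign(r) * r ** 2

# Synthetic example: a relevance peak near 500 ms at Pz would correspond to the LPP described above.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 8, 128))   # 200 trials, 8 channels, 128 time samples
labels = rng.integers(0, 2, 200)
r2 = signed_r_squared(epochs, labels)         # shape (8, 128): one value per channel and sample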
“…However, in these studies, BCIs used visual stimuli that appeared at different locations on the monitor (i.e., non-RSMP), while visual stimuli were presented only at the center of the monitor in the current study. This difference in presentation may cause differences between target and nontarget ERP waveforms and the enhancement of the P300 at Fz when using auditory-tactile stimuli [8]. In our study, P300 amplitude was not enhanced by audiovisual stimuli.…”
Section: Discussion (contrasting)
confidence: 48%
“…Despite its simplicity, LDA often produced robust and acceptable classification results for previous EVE-BCIs. Depending on various parameter settings, other DA variants such as Bayesian Linear Discriminant Analysis (BLDA) [1, 4, 36, 75], Shrinkage LDA [30, 31, 38, 55, 86], Regularized Discriminant Analysis (RDA) [24, 33, 42, 43, 51, 62, 99], and stepwise Linear Discriminant Analysis (SWLDA) [74, 88, 91, 92, 98] were also used. In terms of popularity, DA was followed by another robust supervised learning model, the linear Support Vector Machine (linear SVM), which was implemented in [32, 33, 73, 82, 84, 89, 95].…”
Section: Results (mentioning)
confidence: 99%
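To make the classifier comparison in the excerpt above concrete, here is a minimal, hypothetical sketch (not taken from any of the cited studies) that fits two of the named models, shrinkage LDA and a linear SVM, to flattened ERP feature vectors with scikit-learn; the data, shapes, and hyperparameters are illustrative assumptions.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 8 * 64))   # 300 epochs, 8 channels x 64 samples, flattened
y = rng.integers(0, 2, 300)              # target / non-target labels

# Ledoit-Wolf shrinkage regularizes the covariance estimate, which helps when the
# number of features approaches the number of epochs (typical for ERP data).
shrink_lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
svm = LinearSVC(C=0.01, max_iter=10000)

print("shrinkage LDA accuracy:", cross_val_score(shrink_lda, X, y, cv=5).mean())
print("linear SVM accuracy:   ", cross_val_score(svm, X, y, cv=5).mean())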
“…This cross-modal system proved that the auditory stimuli could compensate for the visual stimuli when users grew tired of staring at the screen for long periods of time. Jiang et al. (2019) proposed an auditory-tactile P300 speller. Although the performance of this vision-independent BCI system was worse than that of systems relying on visual stimuli, it is valuable for patients with severe visual dysfunction.…”
Section: Methods (mentioning)
confidence: 99%