2015
DOI: 10.1109/jproc.2015.2404941

Signal Processing Approaches to Minimize or Suppress Calibration Time in Oscillatory Activity-Based Brain–Computer Interfaces

Abstract: One of the major limitations of Brain–Computer Interfaces (BCI) is their long calibration time, which limits their use in practice by patients and healthy users alike. Such long calibration times are due to the large between-user variability and thus to the need to collect numerous training electroencephalography (EEG) trials for the machine learning algorithms used in BCI design. In this paper, we first survey existing approaches to reduce or suppress calibration time, these approaches being notably bas…

Cited by 228 publications
(181 citation statements)
References 72 publications
“…It is therefore essential to estimate the generalization of selected models. Methods such as regularization, shrinkage, or cross-validation may prevent overfitting to the data [81-84]. Overfitting means that PR models memorize the training data.…”
Section: Signal Processing and Decoding
confidence: 99%
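The regularization and cross-validation ideas in the excerpt above can be sketched briefly. This is a minimal illustration on synthetic data (the dataset, dimensions, and classifier choice are assumptions, not from the paper): a shrinkage-regularized LDA classifier evaluated with cross-validation, the small-sample regime being typical of BCI calibration.

```python
# Sketch: cross-validated accuracy of a shrinkage-regularized LDA classifier
# on synthetic "EEG feature" data. All numbers here are illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# 60 trials x 32 features, two balanced classes: few trials relative to dimensions
X = rng.normal(size=(60, 32))
y = np.repeat([0, 1], 30)
X[y == 1] += 0.5  # shift the second class mean slightly

# solver="lsqr" with shrinkage="auto" applies Ledoit-Wolf shrinkage
# to the class covariance estimate, regularizing the small-sample fit
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(scores.mean())
```

Cross-validation here estimates generalization rather than training accuracy, which is exactly the distinction the excerpt draws against memorization.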
“…Many signal processing and machine learning approaches have been proposed to reduce the BCI calibration effort [29], [30]. They may be grouped into five categories [29]: 1) Regularization, which is a very effective machine learning approach for constructing robust models [41], especially when the training data size is small.…”
Section: Introduction
confidence: 99%
“…They may be grouped into five categories [29]: 1) Regularization, which is a very effective machine learning approach for constructing robust models [41], especially when the training data size is small. A popular regularization approach in BCI calibration is shrinkage [27], which gives a regularized estimate of the covariance matrices.…”
Section: Introduction
confidence: 99%
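The shrinkage estimate mentioned in the excerpt above blends the empirical covariance with a scaled identity matrix. A minimal sketch on synthetic data (trial and channel counts are illustrative assumptions) shows why this matters in the few-trials, many-channels regime: the empirical covariance is rank-deficient, while the Ledoit-Wolf shrunk estimate is full rank.

```python
# Sketch: empirical vs. Ledoit-Wolf shrinkage covariance estimates,
# Sigma_lw = (1 - alpha) * Sigma_emp + alpha * mu * I, with alpha chosen
# analytically. Sample sizes are illustrative, not from the paper.
import numpy as np
from sklearn.covariance import EmpiricalCovariance, LedoitWolf

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 64))  # 20 trials, 64 channels: n << p

emp = EmpiricalCovariance().fit(X).covariance_
lw = LedoitWolf().fit(X)

# Empirical estimate is singular (rank at most n - 1 after centering);
# the shrunk estimate is full rank and hence invertible for classifiers.
print(np.linalg.matrix_rank(emp), np.linalg.matrix_rank(lw.covariance_))
print(lw.shrinkage_)  # the data-driven shrinkage intensity alpha in (0, 1]
```

Invertibility of the regularized covariance is what makes shrinkage effective for calibration with few trials, since many BCI classifiers require the inverse covariance.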
“…Both cases, however, rely upon the natural distribution of the data as grouped by their original participant. Some attempts have been made to group datasets by known variants such as gender [8], and others use the information extracted from the trained models [9]; but little has been done with regard to instance selection for each model.…”
Section: Related Work on Transfer Learning in BCI
confidence: 99%
“…This, like most BCI ensembles [11], used naive partitioning in which the instances were divided by their associated labels, whether by source domain or by stimuli. This proves useful for weighting classifiers within the ensembles, allowing information regarding the appropriateness of each model and the test domain to be extracted [9]. It was demonstrated in [11] that overlapping these naive divisions can actually increase accuracy, suggesting that having the same training data duplicated amongst the classifiers can benefit the overall performance.…”
Section: Ensembles
confidence: 99%
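The overlapping-partition idea in the excerpt above can be sketched as follows. This is a toy illustration under stated assumptions (synthetic data, logistic-regression members, half-overlapping windows — none of these choices come from [11]): ensemble members are trained on overlapping subsets of the trials and combined by majority vote.

```python
# Sketch: an ensemble whose members train on overlapping data windows,
# rather than disjoint partitions. Classifier and overlap scheme are
# illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 16))
y = (X[:, 0] + 0.3 * rng.normal(size=120) > 0).astype(int)  # noisy labels

# Three windows over the trials; adjacent members share half their data
windows = [slice(0, 60), slice(30, 90), slice(60, 120)]
members = [LogisticRegression().fit(X[w], y[w]) for w in windows]

# Majority vote across the three members
votes = np.stack([m.predict(X) for m in members])
pred = (votes.mean(axis=0) > 0.5).astype(int)
print((pred == y).mean())
```

Because adjacent members share half their training trials, their errors are correlated but not identical, which is the mechanism by which overlapping divisions can outperform disjoint ones.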