2016
DOI: 10.1109/thms.2016.2573827

Optimized Bi-Objective EEG Channel Selection and Cross-Subject Generalization With Brain–Computer Interfaces

Cited by 80 publications (45 citation statements)
References 39 publications
“…Another advantage of using EAs as the search algorithm within the wrapper procedure is that they are well suited to solving MOOPs, allowing the search for solutions that, besides having a high training accuracy, meet other objectives too, such as a small number of selected features, a high generalization capability, or even anatomical and functional relevance of EEG channels [53]. Although there are currently several well-known implementations of the MOEA concept, such as MOEA/D [54], PAES [55], SPEA2 [56] or NSGA-II [44], since the aim of this paper is not to analyze the characteristics of a specific MOEA but rather to apply a MOEA as the search algorithm for the proposed wrapper method, NSGA-II has been used because it is quite well known and widely used.…”
Section: Proposed Multi-objective Evolutionary Wrapper Methods
confidence: 99%
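The bi-objective wrapper described in the excerpt above (maximize cross-validated accuracy, minimize the number of selected EEG channels, with NSGA-II as the search algorithm) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it relies on DEAP's NSGA-II selection, a linear discriminant classifier, and a synthetic feature matrix as stand-ins, and the population size, generation count, and genetic operators are arbitrary choices.

```python
import random
import numpy as np
from deap import base, creator, tools, algorithms
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for per-trial channel features: (trials, channels) and class labels.
rng = np.random.default_rng(0)
n_trials, n_channels = 120, 22
X = rng.normal(size=(n_trials, n_channels))
y = rng.integers(0, 2, size=n_trials)

# Objective 1: maximize cross-validated accuracy; objective 2: minimize channel count.
creator.create("FitnessChSel", base.Fitness, weights=(1.0, -1.0))
creator.create("Individual", list, fitness=creator.FitnessChSel)

def evaluate(mask):
    """Wrapper evaluation of a binary channel mask."""
    idx = [i for i, bit in enumerate(mask) if bit]
    if not idx:                      # empty channel set: worst possible fitness
        return 0.0, n_channels
    acc = cross_val_score(LinearDiscriminantAnalysis(), X[:, idx], y, cv=5).mean()
    return acc, len(idx)

toolbox = base.Toolbox()
toolbox.register("bit", random.randint, 0, 1)
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.bit, n_channels)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", evaluate)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutFlipBit, indpb=1.0 / n_channels)
toolbox.register("select", tools.selNSGA2)   # NSGA-II non-dominated selection

pop = toolbox.population(n=40)
pop = algorithms.eaMuPlusLambda(pop, toolbox, mu=40, lambda_=40,
                                cxpb=0.7, mutpb=0.3, ngen=20, verbose=False)[0]

# Report the first (non-dominated) front: accuracy vs. number of channels trade-offs.
pareto = tools.sortNondominated(pop, len(pop), first_front_only=True)[0]
for ind in pareto:
    print("channels:", sum(ind), "CV accuracy: %.3f" % ind.fitness.values[0])
```

Each individual is a binary mask over the 22 channels, so the Pareto front directly exposes the trade-off between accuracy and channel count that the citing paper's wrapper exploits.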
“…Most previous BCI studies used relatively small to moderate training sample sizes (~8-30 subjects) and used all subjects' training data to train one model (the training set usually comprised 80% of each user's EEG data, with the remainder held out for testing). The accuracy of this trained model was then reported on each user's test data separately [28], [41], [45], [64], [66], [71], [72]. The likely motivation in these studies for training a single model on all users' training data while testing it on each individual separately is that the classification performance of most classifiers usually increases with more training data.…”
Section: Introduction
confidence: 99%
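The pooled-training protocol described in the excerpt above (split each subject's trials 80/20, train one model on the combined 80% portions, report accuracy per subject on the held-out 20%) can be sketched as follows. This is a hedged example using scikit-learn with synthetic per-subject data and an LDA classifier; the subject names, feature dimensions, and classifier choice are assumptions for illustration, not details taken from the cited studies.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# Synthetic stand-in: per-subject feature matrices (trials x features) and binary labels.
rng = np.random.default_rng(1)
subjects = {f"S{i:02d}": (rng.normal(size=(100, 16)), rng.integers(0, 2, size=100))
            for i in range(1, 10)}

train_X, train_y, test_sets = [], [], {}
for name, (X, y) in subjects.items():
    # 80% of each subject's trials go into the shared training pool,
    # the remaining 20% are held out as that subject's test set.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0, stratify=y)
    train_X.append(X_tr)
    train_y.append(y_tr)
    test_sets[name] = (X_te, y_te)

# One model trained on the pooled data from all subjects ...
clf = LinearDiscriminantAnalysis().fit(np.vstack(train_X), np.concatenate(train_y))

# ... but accuracy reported separately for each subject's held-out trials.
for name, (X_te, y_te) in test_sets.items():
    print(name, "accuracy: %.3f" % clf.score(X_te, y_te))
```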
“…Several studies [1], [7], [28], [32], [40], [42], [44], [45], [50], [51], [62], [66], [71], [72], [77] have investigated the classification of MI-EEG to develop a BCI system that can provide feedback during MI training and eventually enhance the lives of patients with disabilities and paralysis.…”
Section: Introduction
confidence: 99%
“…Therefore, it is vital to opt for an effective solution that selects an optimal subset of channels rather than using all channels for processing and classification. Some researchers have applied feature selection algorithms after the channel selection algorithms to further improve system performance [22]. Channel subsets are selected based on criteria that usually incorporate channel location, dependency, and redundancy [23].…”
Section: Introduction
confidence: 99%
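The excerpt above does not name a specific criterion, so the following is only an illustrative sketch of one common way to balance class dependency against inter-channel redundancy: an mRMR-style greedy ranking. Mutual information with the class label stands in for relevance, pairwise mutual information between discretized channel features stands in for redundancy, and the data, bin count, and subset size are all arbitrary assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

# Synthetic stand-in: one scalar feature per channel (e.g., band power) per trial.
rng = np.random.default_rng(2)
n_trials, n_channels = 200, 22
X = rng.normal(size=(n_trials, n_channels))
y = rng.integers(0, 2, size=n_trials)

def discretize(col, bins=8):
    # Histogram-based discretization so mutual_info_score can compare two channels.
    return np.digitize(col, np.histogram_bin_edges(col, bins=bins)[1:-1])

relevance = mutual_info_classif(X, y, random_state=0)   # dependency on the class label
disc = np.column_stack([discretize(X[:, c]) for c in range(n_channels)])

selected, remaining = [], list(range(n_channels))
for _ in range(8):                                      # greedily pick 8 channels
    def score(c):
        # mRMR-style criterion: class relevance minus mean redundancy
        # with the channels already selected.
        if not selected:
            return relevance[c]
        red = np.mean([mutual_info_score(disc[:, c], disc[:, s]) for s in selected])
        return relevance[c] - red
    best = max(remaining, key=score)
    selected.append(best)
    remaining.remove(best)

print("selected channel indices:", selected)
```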