2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2011.5946458
Sparse common spatial patterns in brain computer interface applications

Cited by 33 publications (21 citation statements)
References 8 publications
“…Regularization of the covariance matrix (Lu et al., 2009; Kang et al., 2009; Lu et al., 2010; Lotte and Guan, 2011) is also a common approach to increase robustness, especially in small-sample settings. Other authors (Lal et al., 2004; Arvaneh et al., 2011; Goksu et al., 2011) propose to improve the CSP solution by performing channel selection or enforcing sparsity on the spatial filters. The idea of computing CSP in a region of interest was used in (Grosse-Wentrup et al., 2007, 2009; Sannelli et al., 2011).…”
Section: Robust Estimation (mentioning)
confidence: 99%
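The covariance regularization mentioned in the quote can be made concrete with a short sketch. The Python snippet below is purely illustrative and not taken from any of the cited papers: it assumes two-class, band-pass filtered EEG trials stored as NumPy arrays of shape (n_trials, n_channels, n_samples), and the function names (regularized_cov, csp_filters) and the shrinkage value are hypothetical choices.

```python
import numpy as np
from scipy.linalg import eigh


def regularized_cov(trials, shrinkage=0.1):
    """Average trial covariance, shrunk toward a scaled identity.

    trials: array of shape (n_trials, n_channels, n_samples) holding
    band-pass filtered EEG; the shrinkage value is purely illustrative.
    """
    n_trials, n_channels, _ = trials.shape
    cov = np.zeros((n_channels, n_channels))
    for x in trials:
        c = x @ x.T
        cov += c / np.trace(c)               # normalize out per-trial power
    cov /= n_trials
    target = np.trace(cov) / n_channels * np.eye(n_channels)
    return (1.0 - shrinkage) * cov + shrinkage * target


def csp_filters(trials_a, trials_b, shrinkage=0.1):
    """CSP as the generalized eigenvalue problem C_a w = lambda (C_a + C_b) w."""
    c_a = regularized_cov(trials_a, shrinkage)
    c_b = regularized_cov(trials_b, shrinkage)
    eigvals, eigvecs = eigh(c_a, c_a + c_b)  # eigenvalues returned in ascending order
    order = np.argsort(eigvals)[::-1]        # class-A-discriminative filters first
    return eigvecs[:, order], eigvals[order]
```

Features would then typically be log-variances of a few projected signals, e.g. np.log(np.var(filters[:, :3].T @ trial, axis=1)); the cited works differ in how the regularization and the number of retained filters are chosen.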
“…We can decrease the number of channels to the desired cardinality level by recursively applying this algorithm to the smaller matrices. Each cardinality reduction involves solving a traditional CSP; therefore this method is faster than other ℓ0-norm-based greedy search algorithms such as BE or FS [9].…”
Section: Recursive Weight Elimination (mentioning)
confidence: 99%
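As a rough sketch of the recursive weight elimination (RWE) idea described in the quote, and under the assumption that channels are pruned by the magnitude of their weight in the leading spatial filter (the exact criterion in [10] may differ), one greedy loop could look as follows; it reuses the hypothetical csp_filters helper from the earlier sketch.

```python
import numpy as np
# assumes the csp_filters helper from the CSP sketch above is in scope


def recursive_weight_elimination(trials_a, trials_b, n_keep):
    """Greedy channel reduction in the spirit of RWE: repeatedly solve a
    standard CSP on the remaining channels and drop the channel whose
    coefficient in the leading spatial filter is smallest in magnitude."""
    channels = list(range(trials_a.shape[1]))
    while len(channels) > n_keep:
        sub_a = trials_a[:, channels, :]
        sub_b = trials_b[:, channels, :]
        filters, _ = csp_filters(sub_a, sub_b)      # one ordinary CSP per reduction step
        leading = filters[:, 0]                     # filter paired with the largest eigenvalue
        weakest = int(np.argmin(np.abs(leading)))   # least-contributing remaining channel
        channels.pop(weakest)
    return channels                                 # indices of the retained channels
```

Because each outer iteration performs exactly one ordinary CSP solve, the cost grows linearly with the number of removed channels, which is the speed advantage over BE and FS that the quote refers to.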
“…Recently, quasi-ℓ0-norm-based methods were used to obtain a sparse solution, which resulted in improved classification accuracy. Since ℓ0-norm minimization is non-convex, combinatorial and NP-hard, they implemented greedy solutions such as forward selection (FS), backward elimination (BE) [9] and recursive weight elimination (RWE) [10] to decrease the computational complexity. It has been shown that BE was better than RWE and FS (less myopic) in terms of classification error and sparseness level, but it comes with very high complexity, making it difficult to use in rapid prototyping scenarios.…”
Section: Introduction (mentioning)
confidence: 99%
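For contrast with RWE, a backward elimination (BE) loop in the style referred to above could be sketched as follows. This is an assumed formulation (the scoring rule and stopping criterion in [9] may differ) and it again relies on the hypothetical csp_filters helper.

```python
import numpy as np
# assumes the csp_filters helper from the CSP sketch above is in scope


def backward_elimination(trials_a, trials_b, n_keep):
    """Greedy backward elimination over channels: tentatively drop each
    remaining channel, re-solve CSP on the reduced set, and permanently
    remove the channel whose absence preserves the most discriminability."""
    channels = list(range(trials_a.shape[1]))
    while len(channels) > n_keep:
        best_score, best_idx = -np.inf, None
        for i in range(len(channels)):
            kept = channels[:i] + channels[i + 1:]
            _, eigvals = csp_filters(trials_a[:, kept, :], trials_b[:, kept, :])
            # generalized eigenvalues lie in [0, 1]; values near either end are discriminative
            score = max(eigvals[0], 1.0 - eigvals[-1])
            if score > best_score:
                best_score, best_idx = score, i
        channels.pop(best_idx)
    return channels
```

Each outer step here re-solves CSP once per remaining channel, rather than once in total as in RWE, which is consistent with the quote's remark about the high complexity of BE.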
“…These studies have reported a slight decrease or no change in classification accuracy while decreasing the number of channels significantly. Recently, in [8] a quasi-ℓ0-norm-based criterion was used to obtain a sparse solution, which resulted in improved classification accuracy. Since ℓ0-norm minimization is non-convex, combinatorial and NP-hard, they implemented greedy solutions such as Forward Selection (FS) and Backward Elimination (BE) to decrease the computational complexity.…”
Section: Introduction (mentioning)
confidence: 99%
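A forward selection (FS) counterpart, again only an assumed sketch built on the hypothetical csp_filters helper rather than the exact procedure in [8], would grow the channel set greedily instead of shrinking it:

```python
import numpy as np
# assumes the csp_filters helper from the CSP sketch above is in scope


def forward_selection(trials_a, trials_b, n_keep):
    """Greedy forward selection: starting from an empty set, repeatedly add
    the channel that yields the best CSP discriminability on the candidate
    set until n_keep channels have been chosen."""
    selected, remaining = [], list(range(trials_a.shape[1]))
    while len(selected) < n_keep:
        best_score, best_ch = -np.inf, None
        for ch in remaining:
            cand = selected + [ch]
            _, eigvals = csp_filters(trials_a[:, cand, :], trials_b[:, cand, :])
            score = max(eigvals[0], 1.0 - eigvals[-1])   # eigenvalues lie in [0, 1]
            if score > best_score:
                best_score, best_ch = score, ch
        selected.append(best_ch)
        remaining.remove(best_ch)
    return selected
```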