2010 Annual International Conference of the IEEE Engineering in Medicine and Biology 2010
DOI: 10.1109/iembs.2010.5627642

A comparison of different dimensionality reduction and feature selection methods for single trial ERP detection

Abstract: Dimensionality reduction and feature selection are important aspects of electroencephalography (EEG) based event-related potential (ERP) detection systems such as brain-computer interfaces. In our study, a predefined sequence of letters was presented to subjects in a Rapid Serial Visual Presentation (RSVP) paradigm. EEG data were collected and analyzed offline. A linear discriminant analysis (LDA) classifier was designed as the ERP detector for its simplicity. Different dimensionality reduction …

Cited by 10 publications (2 citation statements)
References 3 publications
“…This yielded a high dimensional feature space with d = n channels × 358 coefficients per trial. To facilitate classification by LDA, this high dimensionality was reduced by PCA to ten dimensions per trial (Lan et al, 2010). We used all 10 principal components for classification since only components with the highest variance were not ensured to be informative for classification (Lugger et al, 1998).…”
Section: Methods
confidence: 99%
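The pipeline this citation statement describes (a high-dimensional per-trial feature vector reduced by PCA to ten dimensions, then classified with LDA) can be sketched with scikit-learn. The data below are random stand-ins for the EEG coefficients, not the cited study's recordings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Synthetic stand-in for per-trial EEG features (e.g., channel x time
# coefficients flattened to 358 values per trial).
X = rng.normal(size=(200, 358))
y = rng.integers(0, 2, size=200)
X[y == 1, :5] += 1.0  # make the two classes weakly separable

# Reduce each trial to ten principal components, then classify with LDA.
X10 = PCA(n_components=10).fit_transform(X)
lda = LinearDiscriminantAnalysis().fit(X10, y)
print(X10.shape)  # (200, 10)
```

As the statement notes, all ten components are kept because the highest-variance directions are not guaranteed to be the most class-discriminative ones.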
“…From the works addressed above, one can infer that data-based features for setting up ML/DL models are of key importance. In this context, two main steps can be identified: (a) feature extraction, which can be done using specific layers of consolidated CNN architectures, such as VGG16, widely known for its capabilities in artificial deep vision tasks [46][47][48], benefiting from ImageNet [49] weights; and (b) dimensionality reduction, achievable through techniques such as principal component analysis (PCA), which is also capable of retaining the relevant variance of the data to preserve its intrinsic characteristics [50][51][52].…”
Section: Related Work
confidence: 99%
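The dimensionality-reduction step this statement mentions, PCA configured to retain a chosen fraction of the data's variance, can be illustrated as follows. The feature matrix here is random, standing in for deep features such as VGG16 activations:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Stand-in for extracted deep features: 100 samples, 512 dimensions,
# with per-dimension scales so the variance is unevenly distributed.
features = rng.normal(size=(100, 512)) * rng.uniform(0.1, 3.0, size=512)

# A float n_components asks scikit-learn's PCA to keep the smallest
# number of components whose cumulative explained variance is >= 95%.
pca = PCA(n_components=0.95).fit(features)
reduced = pca.transform(features)
print(reduced.shape[1])  # number of components actually kept
```

This is the "retain the relevant variance" behavior the quote refers to: the output dimensionality is chosen by the variance threshold rather than fixed in advance.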