2019
DOI: 10.1016/j.bspc.2019.01.006
Transfer learning in imagined speech EEG-based BCIs

Cited by 55 publications (30 citation statements); references 21 publications.
“…Specifically, TL is applied for improving the performance of the classifier on a new subject (target subject) using the knowledge learnt from a set of different subjects (source subjects). Similar to García-Salinas et al ( 2019 ), the two TL paradigms come under the class of multi-task transfer learning (Evgeniou and Pontil, 2004 ). A deep CNN architecture, similar to the one proposed in Schirrmeister et al ( 2017 ), is used in this work.…”
Section: Feature Extraction and Classification
Confidence: 94%
“…Transfer learning (TL) is used in García-Salinas et al ( 2019 ) and Cooney et al ( 2019 ) for improving the performance of the classifier. TL is a machine learning approach in which the performance of a classifier in the target domain is improved by incorporating the knowledge learnt from a different domain (Pan and Yang, 2009 ; He and Wu, 2017 ; García-Salinas et al, 2019 ). Specifically in García-Salinas et al ( 2019 ), feature representation transfer is used for representing a new imagined word using the codewords learnt using a set of four other imagined words.…”
Section: Feature Extraction and Classification
Confidence: 99%
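The feature-representation transfer described above can be sketched as learning a codebook from EEG features of the source imagined words and re-expressing trials of a new imagined word as histograms over those codewords. The sketch below is a hypothetical bag-of-codewords illustration with invented data and dimensions, not the exact pipeline of García-Salinas et al. (2019).

```python
# Hypothetical sketch of feature-representation transfer: learn codewords
# from source imagined words, then describe a new word with them.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Feature vectors pooled over trials of the four source imagined words
# (shapes are invented for illustration).
source_feats = rng.normal(size=(400, 8))

# Learn a small codebook (the "codewords") from the source words.
codebook = KMeans(n_clusters=16, n_init=10, random_state=1).fit(source_feats)

def encode(trial_feats):
    # Represent a trial (a sequence of feature vectors) as a normalized
    # histogram over the learnt codewords.
    labels = codebook.predict(trial_feats)
    hist = np.bincount(labels, minlength=16).astype(float)
    return hist / hist.sum()

# A trial of a *new* imagined word, expressed with the same codewords.
new_word_trial = rng.normal(loc=0.5, size=(50, 8))
h = encode(new_word_trial)
```

Because the codebook is fixed, any downstream classifier sees the new word in the same feature space as the source words, which is the point of the transfer.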
“…The result was a principled way of dealing with outliers in simple data, which achieved an AUC of 0.75 in cross-subject BCI (Kadioglu et al., 2018). As for pattern recognition, algorithms (Gordon et al., 2017; Hajinoroozi et al., 2017; García-Salinas et al., 2019; Fernandez-Rodriguez et al., 2020) such as genetic algorithms and transfer learning (Huang et al., 2011, 2017; Jalilpour et al., 2020) were applied to improve the accuracy and speed of the spellers. On the other hand, the best AUC performance was still no higher than 0.75 (Krusienski et al., 2006; Koçanaoğulları et al., 2020), and most works were still based on small samples.…”
Section: Introduction
Confidence: 99%