2014
DOI: 10.1016/j.neucom.2013.05.045
A co-training algorithm for EEG classification with biomimetic pattern recognition and sparse representation

Cited by 20 publications (17 citation statements)
References 20 publications
“…While most of these methods are constrained to one set of sensors [44]–[50], a few efforts have focused on transfer between sensor types. In addition to the teacher-learner model we will discuss later [51], Hu and Yang [52] introduced a between-modality transfer technique that requires externally provided information about the relationship between the source and target domain spaces.…”
Section: Transfer Learning For Activity Recognition
confidence: 99%
“…There are more articles on innovations in classification algorithms [122][123][124][125][126][127]. Reference [125] uses a semi-supervised training method in which two kinds of classifiers cooperate to build an integrated classifier.…”
Section: Feature Classification
confidence: 99%
“…Reference [125] uses a semi-supervised training method in which two kinds of classifiers cooperate to build an integrated classifier. Reference [127] proposes a classification method based on a hidden conditional random field to classify MI signals.…”
Section: Feature Classification
confidence: 99%
“…This is a very strong conclusion: it implies that if the two assumptions are satisfied and the target class is learnable from random classification noise, then the predictive accuracy of an initial weak learner can be boosted arbitrarily high by co-training, using only unlabeled examples. Later, theoretical analyses [7][13] were performed to relax these two remarkably powerful but easily violated assumptions. The theoretical details are not covered in this section.…”
Section: { }
confidence: 99%
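The co-training idea summarized in the statement above can be illustrated with a minimal, self-contained sketch. Everything here is hypothetical: synthetic two-view scalar data and a nearest-centroid weak learner stand in for the paper's biomimetic-pattern-recognition and sparse-representation classifiers. Each view's classifier pseudo-labels its most confident unlabeled examples and feeds them back into the shared labeled pool:

```python
import random

random.seed(0)

# Hypothetical two-view data: each view is a noisy scalar correlated with the label.
n = 60
labels = [i % 2 for i in range(n)]
view_a = [y + random.gauss(0, 0.3) for y in labels]
view_b = [y + random.gauss(0, 0.3) for y in labels]

labeled = [0, 1]            # two seed examples, one per class
pool = list(range(2, n))    # unlabeled pool
pseudo = {0: labels[0], 1: labels[1]}

def fit(view, idx):
    """Nearest-centroid 'weak learner' trained on one view."""
    return {cls: sum(view[i] for i in idx if pseudo[i] == cls)
                 / sum(1 for i in idx if pseudo[i] == cls)
            for cls in (0, 1)}

def predict(c, x):
    return 0 if abs(x - c[0]) <= abs(x - c[1]) else 1

def margin(c, x):
    # gap between the distances to the two centroids: a crude confidence score
    return abs(abs(x - c[0]) - abs(x - c[1]))

# Co-training loop: the two views' classifiers alternately pseudo-label
# their most confident unlabeled examples.
for _ in range(15):
    for view in (view_a, view_b):
        if not pool:
            break
        c = fit(view, labeled)
        pool.sort(key=lambda i: -margin(c, view[i]))
        take, pool = pool[:2], pool[2:]
        for i in take:
            pseudo[i] = predict(c, view[i])
            labeled.append(i)

# Evaluate the final view-A classifier against the true labels.
ca = fit(view_a, labeled)
acc = sum(predict(ca, x) == y for x, y in zip(view_a, labels)) / n
print(f"pseudo-labelled {len(labeled)} of {n} examples, accuracy {acc:.2f}")
```

The key design choice, as in the quoted theory, is that the two views give conditionally independent evidence, so each classifier's confident pseudo-labels act as informative training data for the other rather than merely reinforcing its own mistakes.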
“…As a consequence, semi-supervised learning [1][2][3], which attempts to make use of abundant and inexpensive unlabeled data in addition to labeled data to improve performance, has attracted considerable attention. During the past decade, many semi-supervised learning approaches have been developed, such as generative methods, graph-based methods, semi-supervised support vector machines (S3VMs), and co-training [5][6][7].…”
Section: Introduction
confidence: 99%