2005
DOI: 10.1016/j.neunet.2005.06.018
A new classifier based on information theoretic learning with unlabeled data

Cited by 23 publications (12 citation statements) · References 6 publications

Citation statements (ordered by relevance):
“…As one of the criteria utilizing distributions, the ED between the distribution of transmitted symbol f_D(d) and the equalizer output distribution f_Y(y) is defined as (3) [3,6].…”
Section: ED Criterion and Entropy (confidence: 99%)
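The statement above refers to the ED criterion: the Euclidean distance between the transmitted-symbol PDF f_D(d) and the equalizer-output PDF f_Y(y). A minimal sketch of how such a sample-based ED estimate is typically computed in information theoretic learning, using Parzen windowing with Gaussian kernels — the function names and the kernel size `sigma` are illustrative assumptions, not taken from the cited papers:

```python
import numpy as np

def gaussian_kernel(u, sigma):
    # Gaussian kernel G_sigma(u); sigma is an assumed kernel size.
    return np.exp(-u**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

def ed_between_pdfs(y, d, sigma=0.5):
    """Sample estimate of the Euclidean distance between the Parzen PDF
    estimates of two 1-D sample sets y and d:
        D = integral (f_Y(z) - f_D(z))^2 dz
    which expands into three pairwise information-potential terms whose
    kernels have size sigma*sqrt(2) (convolution of two Gaussians)."""
    s = sigma * np.sqrt(2.0)
    yy = gaussian_kernel(y[:, None] - y[None, :], s).mean()
    dd = gaussian_kernel(d[:, None] - d[None, :], s).mean()
    yd = gaussian_kernel(y[:, None] - d[None, :], s).mean()
    return yy - 2.0 * yd + dd
```

The estimate is exactly zero for identical sample sets and grows as the two distributions separate, which is what makes it usable as a training criterion.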
“…For training of adaptive systems for medical diagnosis, the ED criterion has been successfully applied to distinguish biomedical datasets [6]. For finite impulse response (FIR) adaptive filter structures in impulsive noise environments, the ED between the output distribution and a set of Dirac delta functions has been used as an efficient performance criterion, taking advantage of the outlier-cutting effect of the Gaussian kernel for output pairs and symbol-output pairs [7].…”
Section: Introduction (confidence: 99%)
“…Based on minimization of ED, the authors in [18] proposed using the ED criterion to train the adaptive system in order to match two PDFs from different output patterns, and successfully applied it to classification problems with a real biomedical data set [13]. As a similar approach, [14] applied an ED minimization method to compensate for sensor drift, in which the RBFN weights are readjusted even during the test phase, but only using output data samples obtained during the training phase.…”
Section: Test Phase Readjustment for Sensor Drift Compensation (confidence: 99%)
“…As a robust ITL-type algorithm, Euclidean distance minimization between PDFs was introduced by Jeong et al. and applied successfully to the classification problem with a real biomedical data set [10]. The researchers in [10] proposed to reuse the previously acquired training-phase output samples in the test phase so that the test-phase output PDF follows the training-phase output PDF. In [11], the Euclidean distance minimization method is applied to blind equalization using signal power.…”
Section: Introduction (confidence: 99%)
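The last two statements describe reusing stored training-phase output samples at test time so that the test-phase output PDF tracks the training-phase output PDF. A hedged sketch of that idea, reduced to its simplest possible form: estimating a single additive drift term by minimizing the sample-based ED between the drifted test outputs and the stored training outputs over a coarse grid. The one-parameter drift model, the grid search, and all names here are assumptions for illustration — the cited works readjust RBFN weights, not a scalar bias:

```python
import numpy as np

def gaussian_kernel(u, sigma):
    # Gaussian kernel G_sigma(u); sigma is an assumed kernel size.
    return np.exp(-u**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

def ed_between_pdfs(y, d, sigma=0.5):
    # Sample estimate of the Euclidean distance between Parzen PDF
    # estimates of two 1-D sample sets (three information-potential terms).
    s = sigma * np.sqrt(2.0)
    yy = gaussian_kernel(y[:, None] - y[None, :], s).mean()
    dd = gaussian_kernel(d[:, None] - d[None, :], s).mean()
    yd = gaussian_kernel(y[:, None] - d[None, :], s).mean()
    return yy - 2.0 * yd + dd

def readjust_bias(y_test, y_train_stored, grid=None):
    # Pick the additive correction b that best matches the test-phase
    # output PDF to the stored training-phase output PDF.
    if grid is None:
        grid = np.linspace(-5.0, 5.0, 401)
    costs = [ed_between_pdfs(y_test + b, y_train_stored) for b in grid]
    return grid[int(np.argmin(costs))]
```

For example, if the test-phase outputs are the training-phase outputs shifted by a drift of +2.0, `readjust_bias` recovers a correction near -2.0, since the ED vanishes exactly when the two sample sets coincide.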