2006
DOI: 10.1142/s0218213006002722
New Self-Adaptive Probabilistic Neural Networks in Bioinformatic and Medical Tasks

Abstract: We propose a self-adaptive probabilistic neural network model, which incorporates optimization algorithms to determine its spread parameters. The performance of the proposed model is investigated on two protein localization problems, as well as on two medical diagnostic tasks. Experimental results are compared with those of feedforward neural networks and support vector machines. Different sampling techniques are used and statistical tests are conducted to assess the statistical significance of the results.

Cited by 30 publications (19 citation statements) · References 19 publications
“…The best results are marked in bold. As shown, in each data classification case, the PNN models trained by means of Algorithm 1 or Algorithm 2 outperform PNNs trained using state-of-the-art methods (Georgiou et al. 2006; Chang et al. 2008; Georgiou et al. 2008; Saiti et al. 2009; Temurtas et al. 2009; Chandra and Babu 2011; Yeh and Lin 2011; Azar and El-Said 2013). Our algorithms also perform better than the reference classifiers on all considered data set classification problems.…”
Section: Comparison to State-of-the-Art Procedures
confidence: 61%
“…Moreover, the CPU training times are also reported in Tables 3 and 4. To evaluate the performance of our model, we applied these six benchmark problems to Homoscedastic and Heteroscedastic Evolutionary Probabilistic Neural Networks [1], as well as to original PNNs and Bagging EPNNs [2]. For the original PNN implementation, an exhaustive search for the spread parameter σ was conducted in the interval [10⁻³, 5], and the σ that yielded the best classification accuracy on the training set was used to compute the PNN's classification accuracy on the test set.…”
Section: Results
confidence: 99%
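The σ-selection protocol quoted above (exhaustive search over [10⁻³, 5], scored by training-set accuracy) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names are our own, and the leave-one-out scoring is our assumption, added so that vanishingly small σ values cannot trivially win by memorizing each pattern.

```python
import numpy as np

def pnn_loo_accuracy(X, y, sigma):
    """Leave-one-out training accuracy of a homoscedastic PNN with spread sigma."""
    classes = np.unique(y)
    correct = 0
    for i in range(len(X)):
        # Gaussian kernel activation of every training pattern for query X[i]
        d2 = np.sum((X - X[i]) ** 2, axis=1)
        act = np.exp(-d2 / (2.0 * sigma ** 2))
        act[i] = 0.0  # leave-one-out: exclude the pattern itself
        # average activation per class = Parzen estimate of class density
        scores = [act[y == c].sum() / max(int((y == c).sum()) - int(y[i] == c), 1)
                  for c in classes]
        correct += int(classes[np.argmax(scores)] == y[i])
    return correct / len(X)

# exhaustive grid search over [1e-3, 5], as in the quoted protocol
X = np.array([[0.0], [0.2], [0.1], [3.0], [3.2], [3.1]])
y = np.array([0, 0, 0, 1, 1, 1])
grid = np.linspace(1e-3, 5.0, 200)
best_sigma = max(grid, key=lambda s: pnn_loo_accuracy(X, y, s))
```

On this toy two-cluster data the search settles on a small σ that separates the clusters perfectly; Python's `max` returns the first grid value attaining the best score, which mirrors picking the first winner of an exhaustive scan.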
“…The training procedure of a PNN is quite simple and requires only a single pass over the patterns of the training data, which results in a short training time. The architecture of a PNN always consists of four layers: the input layer, the pattern layer, the summation layer and the output layer [1,3].…”
Section: Background Materials
confidence: 99%
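The four-layer structure described above maps directly onto a few lines of code. The following homoscedastic PNN sketch is illustrative only (the function name and the σ value are our own choices), with comments marking which step corresponds to which layer:

```python
import numpy as np

def pnn_classify(X_train, y_train, X_test, sigma=0.5):
    """Minimal homoscedastic PNN: one Gaussian kernel per training pattern."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:                       # input layer: one node per feature
        # pattern layer: Gaussian activation of every training pattern
        d2 = np.sum((X_train - x) ** 2, axis=1)
        act = np.exp(-d2 / (2.0 * sigma ** 2))
        # summation layer: average activations per class (Parzen density estimate)
        scores = [act[y_train == c].mean() for c in classes]
        # output layer: argmax over the class-conditional density estimates
        preds.append(classes[np.argmax(scores)])
    return np.array(preds)

# toy usage: two well-separated 2-D clusters
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y = np.array([0, 0, 1, 1])
print(pnn_classify(X, y, np.array([[0.05, 0.05], [5.05, 5.05]])))  # expect [0 1]
```

Because "training" is just storing the patterns, the single-pass, short-training-time property quoted above falls out immediately; all the work happens at prediction time.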