2010 Sixth International Conference on Natural Computation
DOI: 10.1109/icnc.2010.5583589
The Incremental Probabilistic Neural Network

Abstract: With

Cited by 10 publications (6 citation statements)
References 5 publications
“…Within the first hidden layer of the PNN, the node outputs are calculated as [24, 25]

$$Z_{i,j} = \exp\!\left[-\frac{(\mathbf{x}-\mathbf{w}_{i,j})^{\mathrm{T}}(\mathbf{x}-\mathbf{w}_{i,j})}{2\sigma^{2}}\right], \quad i = 1,2,\ldots,p, \quad j = 1,2,\ldots,c$$

where $Z_{i,j}$ is the probability output value of a hidden node, $\mathbf{x}$ is the input vector $\mathbf{x} = [x_1, x_2, \ldots, x_n]^{\mathrm{T}}$, $\mathbf{w}_{i,j}$ is the $i$th training vector from class $j$, $\mathbf{w}_{i,j} = [w_1, w_2, \ldots, w_n]^{\mathrm{T}}$, $p$ represents the number of input training patterns for each class, and $c$ is the number of classes.…”
Section: PNN and Missing Features
Mentioning confidence: 99%
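The statement above describes the Gaussian pattern (hidden) layer common to PNN variants. Below is a minimal Python/NumPy sketch of that computation, followed by the usual summation/decision step; the array layout, the smoothing value `sigma`, and the averaging-plus-argmax classifier are illustrative assumptions, not details taken from the cited papers.

```python
import numpy as np

def pnn_pattern_layer(x, W, sigma):
    """Gaussian pattern-layer outputs Z[i, j] = exp(-(x - w_ij)^T (x - w_ij) / (2 sigma^2)).

    x     : input vector, shape (n,)
    W     : training vectors, shape (c, p, n) -- p patterns per class, c classes (assumed layout)
    sigma : smoothing (spread) parameter of the Gaussian kernel
    """
    diff = W - x                                 # broadcasts to (c, p, n)
    sq_dist = np.sum(diff * diff, axis=-1)       # (c, p): squared Euclidean distance to each w_ij
    Z = np.exp(-sq_dist / (2.0 * sigma ** 2))    # (c, p): kernel response of each pattern node
    return Z.T                                   # (p, c): Z[i, j] for pattern i of class j

def pnn_classify(x, W, sigma):
    """Average pattern-layer outputs per class (summation layer) and pick the largest score."""
    Z = pnn_pattern_layer(x, W, sigma)           # (p, c)
    class_scores = Z.mean(axis=0)                # one density estimate per class
    return int(np.argmax(class_scores)), class_scores

# Toy usage: 2 classes, 3 training patterns each, 4 features (all values synthetic)
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3, 4))
x = W[1, 0] + 0.05 * rng.normal(size=4)          # a point near a class-1 training vector
label, scores = pnn_classify(x, W, sigma=0.5)
print(label, scores)
```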
“…4. Principally, the calculations of the APNN are as follows: after constructing the input values, the hidden (pattern) layer values are extracted according to [35, 36]:…”
Section: ROI of Middle
Mentioning confidence: 99%
“…The RPNN operations can be explained as follows: the hidden values of the hidden layer are calculated according to (7) [26]; this equation is also known as the RBF

$$Z_{i,j} = \exp\!\left[-\frac{(\mathbf{x}^{\lambda}-\mathbf{w}_{i,j}^{\lambda})^{\mathrm{T}}(\mathbf{x}^{\lambda}-\mathbf{w}_{i,j}^{\lambda})}{2\sigma^{2}}\right], \quad i = 1,2,\ldots,p, \quad j = 1,2,\ldots,c$$

where $Z_{i,j}$ represents a hidden layer node value, $\mathbf{x}^{\lambda}$ represents the input vector $\mathbf{x}^{\lambda} = [x_1^{\lambda}, x_2^{\lambda}, \ldots, x_n^{\lambda}]^{\mathrm{T}}$, $\mathbf{w}_{i,j}^{\lambda}$ is the $i$th vector of class $j$; thus, the weight vector can be described or formed as $\mathbf{w}_{i,j}^{\lambda} = [w_1^{\lambda}, w_2^{\lambda}, \ldots, w_n^{\lambda}]^{\mathrm{T}}$, $\lambda$ represents the certain spe...…”
Section: Re-enforced PNN
Mentioning confidence: 99%