2010 International Conference on Communication Control and Computing Technologies
DOI: 10.1109/icccct.2010.5670773

Power signal classification using Adaptive Wavelet Network

Cited by 1 publication (2 citation statements)
References 6 publications
“…$$G\bigl(\lVert p - p_j \rVert,\, \sigma\bigr) = \frac{1}{(2\pi\sigma^2)^{f_0/2}} \exp\!\left(-\frac{\lVert p - p_j \rVert^2}{2\sigma^2}\right), \qquad j = 1, 2, 3, \ldots, M$$ where $M$ is the total number of training patterns. Now, the input–output mapping function of the normalised RBF network takes the following form $$\phi(p) = \frac{\sum_{j=1}^{M} d_j \exp\bigl(-\lVert p - p_j \rVert^2 / 2\sigma^2\bigr)}{\sum_{j=1}^{M} \exp\bigl(-\lVert p - p_j \rVert^2 / 2\sigma^2\bigr)}$$ where $d_j$ is the linear weight applied to the basis function [15], and the denominator term represents the Parzen–Rosenblatt density estimator, which consists of the sum of $M$ multivariate Gaussian distributions centred on the data points $p_1, p_2, p_3, \ldots, p_M$. In (23), the centres of the normalised RBF coincide with the data points $\{p_j\}_{j=1}^{M}$.…”
Section: Classification Using PNN (citation type: mentioning)
confidence: 99%
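
To make the quoted mapping concrete, here is a minimal numerical sketch of the normalised RBF output $\phi(p)$. The function and parameter names (`normalised_rbf`, `centres`, `d`, `sigma`) are illustrative assumptions, not taken from the cited paper:

```python
import numpy as np

def normalised_rbf(p, centres, d, sigma):
    """Normalised RBF mapping phi(p) from the quoted equation (sketch).

    centres : (M, f0) array of training patterns p_1 .. p_M
    d       : (M,) linear weights d_j
    sigma   : common Gaussian width
    """
    # Squared Euclidean distances ||p - p_j||^2 to all M centres
    sq_dist = np.sum((centres - p) ** 2, axis=1)
    # Gaussian kernel values; their sum (the denominator) is the
    # Parzen-Rosenblatt density estimate up to a normalising constant
    g = np.exp(-sq_dist / (2.0 * sigma ** 2))
    return np.dot(d, g) / np.sum(g)

# Hypothetical example: M = 4 training patterns in f0 = 2 dimensions
centres = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
d = np.array([0.0, 1.0, 1.0, 0.0])
print(normalised_rbf(np.array([0.5, 0.5]), centres, d, sigma=0.5))
```

Because the kernel sum appears in both numerator and denominator, the output is a density-weighted average of the $d_j$, which is why the centres coinciding with the training points makes the network interpolate between their weights.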
“…Among the various classifiers, the neural network yields relatively efficient classification results, but it suffers from a trial-and-error approach to selecting the number of hidden layers, the training method and so on. Among the various networks available, the probabilistic neural network (PNN) [13–15] is the most popular and has been used effectively to identify various types of patterns accurately. In a PNN, the number of nodes in the hidden layer is equal to the number of training vectors.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
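
A minimal sketch of the PNN structure the quote describes: one Gaussian pattern node per training vector, a summation layer per class, and a decision layer picking the largest class score. All names here are hypothetical, assuming a shared smoothing parameter `sigma`:

```python
import numpy as np

def pnn_classify(p, patterns, labels, sigma):
    """PNN sketch: the hidden (pattern) layer has exactly one
    Gaussian node per training vector, as noted in the quote."""
    patterns = np.asarray(patterns)
    labels = np.asarray(labels)
    # Pattern layer: one Gaussian kernel per training vector
    sq_dist = np.sum((patterns - p) ** 2, axis=1)
    g = np.exp(-sq_dist / (2.0 * sigma ** 2))
    # Summation layer: accumulate kernel outputs per class
    classes = np.unique(labels)
    scores = np.array([g[labels == c].sum() for c in classes])
    # Decision layer: class with the largest estimated density wins
    return classes[np.argmax(scores)]

# Hypothetical example: two classes of 2-D training patterns
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(pnn_classify(np.array([0.2, 0.1]), X, y, sigma=0.3))  # -> 0
```

This also illustrates the drawback the citing authors raise: the pattern layer grows linearly with the training set, so inference cost scales with the number of training vectors.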