Normalized Gaussian Radial Basis Function networks (1998)
DOI: 10.1016/s0925-2312(98)00027-7

Cited by 117 publications (65 citation statements)
References 6 publications
“…Normalized Gaussian radial basis function network (Bugmann 1998) uses the k-means clustering algorithm to provide the basis functions and learns either a logistic regression (discrete class problems) or a linear regression (numeric class problems) on top of that. SMO (Keerthi et al 2001) is a support vector machine classifier that globally transforms nominal attributes into binary ones; multiclass problems are solved using pairwise classification.…”
Section: Normalized Gaussian Radial Basis Function Network (RBF)
confidence: 99%
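The pipeline described in the statement above (k-means centres, normalized Gaussian basis functions, a linear model fitted on top) can be sketched as follows. This is a minimal illustrative implementation, not the paper's code; the function names, the toy sine-regression data, and the choice of σ = 0.5 are all assumptions.

```python
import numpy as np

# Illustrative NRBF regressor in the spirit of the statement above:
# k-means supplies the basis-function centres, then a linear model is
# fit by least squares on the NORMALIZED Gaussian activations.

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns k cluster centres."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centres[j] = pts.mean(axis=0)
    return centres

def nrbf_features(X, centres, sigma):
    """Gaussian activations, normalized so each row sums to 1."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    g = np.exp(-d2 / (2 * sigma ** 2))
    return g / g.sum(axis=1, keepdims=True)  # the normalization step

# Toy numeric-class (regression) problem: y = sin(x) on [0, 2*pi].
X = np.linspace(0, 2 * np.pi, 40)[:, None]
y = np.sin(X).ravel()

centres = kmeans(X, k=8)
Phi = nrbf_features(X, centres, sigma=0.5)       # assumed sigma
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # linear regression on top
pred = Phi @ w
print("max abs training error:", float(np.abs(pred - y).max()))
```

For a discrete-class problem, the same `Phi` would feed a logistic regression instead of the least-squares fit, as the statement notes.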
“…In contrast, in standard RBF nets, a significant output requires a hidden node with its centre close to the input vector, or hidden nodes with wider receptive fields (larger σ). It was shown in Bugmann (1998) [5] that, for best interpolation, standard RBF nets require values of σ close to one half of the largest distance between nearest neighbours (here 0.46), reflecting the need to ensure large enough output values over the largest empty space between training data. Performance is better for larger receptive field sizes.…”
Section: The Plateau-Valley Classification Problem
confidence: 99%
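The σ heuristic quoted above (one half of the largest nearest-neighbour distance among the training inputs) is simple to compute directly. A minimal sketch, with illustrative random data standing in for a real training set:

```python
import numpy as np

# Sketch of the sigma heuristic cited above: for standard (unnormalized)
# RBF nets, choose sigma near one half of the LARGEST nearest-neighbour
# distance in the training inputs, so receptive fields cover the biggest
# empty gap between training points.

def sigma_heuristic(X):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)   # exclude each point's distance to itself
    nn = d.min(axis=1)            # nearest-neighbour distance per point
    return 0.5 * nn.max()

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(50, 2))   # illustrative: 50 points in the unit square
print("suggested sigma:", float(sigma_heuristic(X)))
```

On a unit-spaced grid the heuristic returns 0.5, which is the kind of mid-range value the quoted passage has in mind (0.46 for its particular data set).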
“…One of the key features of NRBF nets is their excellent generalization, a property that can be exploited to reduce the number of hidden nodes in classification tasks. A new learning rule was proposed to that effect by Bugmann (1998) [5] and is summarized in section 3.…”
Section: Introduction
confidence: 99%
“…NGnets differ from RBF networks in the normalization of the Gaussian activation function. The normalization switches the traditional roles of weights and activities in the hidden layer, and NGnets therefore exhibit better generalization properties [1].…”
Section: Introduction
confidence: 99%
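The role-switching effect of the normalization described above can be seen in a two-node toy example: far from every centre, the standard activations all vanish and the net output goes to zero, whereas the normalized activations still sum to 1, so the output approaches the weight of the nearest centre. The centres, weights, and σ below are illustrative assumptions.

```python
import numpy as np

# Contrast between standard and normalized Gaussian activations far from
# the training data. Two hidden nodes with assumed centres and weights.

centres = np.array([[0.0], [1.0]])   # illustrative centres
weights = np.array([2.0, -1.0])      # illustrative output weights
sigma = 0.2

def activations(x):
    return np.exp(-((x - centres.ravel()) ** 2) / (2 * sigma ** 2))

x = 5.0                                    # query far outside [0, 1]
g = activations(x)
standard_out = weights @ g                 # tends to 0: all activations tiny
normalized_out = weights @ (g / g.sum())   # tends to the nearest centre's weight

print("standard:", float(standard_out), "normalized:", float(normalized_out))
```

Here `normalized_out` is close to -1.0 (the weight of the centre at 1.0, nearest to x = 5), while `standard_out` is numerically negligible: the normalized net extrapolates with the nearest centre's weight, which is the generalization property the passage attributes to NGnets.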