1998 IEEE International Joint Conference on Neural Networks Proceedings. IEEE World Congress on Computational Intelligence
DOI: 10.1109/ijcnn.1998.685962

Minimal distance neural methods

Cited by 11 publications (11 citation statements)
References 11 publications (15 reference statements)
“…This framework includes typical feedforward neural network models (MLP, RBF, SBF), some novel networks (Distance-Based Multilayer Perceptrons (D-MLPs) [41] and the nearest-neighbor or minimum-distance networks [33,43]), as well as many variants of the nearest-neighbor methods, improving upon the traditional approach by providing more flexible decision borders. This framework has been designed to enable meta-learning based on a search in the space of all possible models that may be systematically constructed.…”
Section: Similarity-based Framework For Meta-learning (mentioning, confidence: 99%)
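The simplest member of this family is the minimum-distance (nearest-neighbor) network mentioned in the excerpt: each input takes the class of its closest reference vector. A minimal sketch follows, assuming Euclidean distance; the function and variable names are illustrative, not taken from the cited papers.

```python
import numpy as np

def min_distance_classify(X, refs, labels):
    """Label each row of X with the class of its nearest reference vector."""
    # Pairwise Euclidean distances D(X_i, R_j) between inputs and references.
    d = np.linalg.norm(X[:, None, :] - refs[None, :, :], axis=-1)
    return labels[d.argmin(axis=1)]

# Usage: two reference vectors standing in for two classes.
refs = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
print(min_distance_classify(np.array([[0.2, 0.1], [0.9, 0.8]]), refs, labels))  # -> [0 1]
```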
“…values of all parameters and procedures employed. A general similarity-based model [6] of an adaptive system used for classification may include various types of parameters and procedures, such as: the set {R_j} of reference vectors, created from the set of training vectors {X_i} by a selection and/or optimization procedure; a similarity function D(·) (frequently a distance function, or an activation function in neural networks), parameterized in various ways, or a table used to compute similarities for nominal attributes; a weighting function G(D(X, R)), estimating the contribution of the reference vector R to the classification probability depending on its similarity to the vector X; and the total cost function E[·] optimized during training. The cost function may include regularization terms or hints [3]; it may depend on a kernel function K(·) that scales the influence of the error for a given training example on the total cost; and it may use a risk matrix R(C_i|C_j) for the cost of assigning wrong classes, or a matrix S(C_i|C_j) measuring the similarity of output classes.…”
Section: Distance Functions In Neural Networks (mentioning, confidence: 99%)
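The components listed above suggest a classifier of the form p(C_k|X) proportional to the sum of G(D(X, R_j)) over reference vectors R_j of class k. The sketch below instantiates this under assumed choices (Euclidean D, Gaussian G with width sigma); it is one possible reading of the framework, not a fixed prescription from the paper.

```python
import numpy as np

def sbm_probabilities(x, refs, ref_labels, n_classes, sigma=1.0):
    """p(C_k|x) proportional to sum of G(D(x, R_j)) over references of class k."""
    d = np.linalg.norm(refs - x, axis=1)      # D(x, R_j): Euclidean distance (assumed)
    g = np.exp(-(d / sigma) ** 2)             # G(D): Gaussian weighting (assumed)
    scores = np.bincount(ref_labels, weights=g, minlength=n_classes)
    return scores / scores.sum()              # normalize to class probabilities

# Usage: three reference vectors, two classes.
refs = np.array([[0.0, 0.0], [1.0, 1.0], [1.2, 0.9]])
ref_labels = np.array([0, 1, 1])
print(sbm_probabilities(np.array([0.9, 1.0]), refs, ref_labels, n_classes=2))
```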
“…We will place our considerations in the general framework for similarity-based classification methods (SBMs) presented recently [6], [7]. Investigation of connections between neural networks and similarity-based methods leads to a number of new neural network models.…”
Section: Introduction (mentioning, confidence: 99%)
“…A general framework for similarity-based methods has been presented recently [3], [4], [5]. Besides classical minimal-distance methods, such as the k-NN method, many popular neural network models, such as Radial Basis Functions (RBFs), Multilayer Perceptrons (MLPs), and Learning Vector Quantization (LVQ), may be presented in this form.…”
Section: Introduction (mentioning, confidence: 99%)
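One way to see how k-NN and RBF networks fit the same similarity-based form is to vary only the weighting function G(D): a hard top-k threshold recovers the k-NN vote, while a Gaussian radial function gives RBF-style weights. The sketch below assumes this reading; the parameter choices are illustrative.

```python
import numpy as np

def weights_knn(d, k=3):
    """Hard G(D): weight 1 for the k smallest distances, 0 elsewhere (k-NN vote)."""
    w = np.zeros_like(d)
    w[np.argsort(d)[:k]] = 1.0
    return w

def weights_rbf(d, sigma=0.5):
    """Smooth G(D): Gaussian radial function of the distance (RBF-style weights)."""
    return np.exp(-(d / sigma) ** 2)
```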
“…calculation of optimal weighting factors s_i in the metric function, Eq. (4). If the cost function is taken as the number of classification errors (i.e.…”
(mentioning, confidence: 99%)
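An error-count cost is piecewise constant in the scaling factors s_i, so gradient methods do not apply directly. The sketch below illustrates the optimization step with leave-one-out 1-NN error and simple random search; the search strategy and the weighted-Euclidean metric are assumptions for illustration, not the cited paper's procedure.

```python
import numpy as np

def errors_with_scaling(s, X, y):
    """Leave-one-out 1-NN error count under D(a, b)^2 = sum_i s_i * (a_i - b_i)^2."""
    diff = X[:, None, :] - X[None, :, :]
    d2 = (s * diff ** 2).sum(-1)              # weighted squared distances
    np.fill_diagonal(d2, np.inf)              # exclude each point from its own vote
    return int((y[d2.argmin(axis=1)] != y).sum())

def optimize_scaling(X, y, trials=200, seed=0):
    """Random search over candidate weighting factors s_i (assumed strategy)."""
    rng = np.random.default_rng(seed)
    s0 = np.ones(X.shape[1])
    best_s, best_e = s0, errors_with_scaling(s0, X, y)
    for _ in range(trials):
        s = rng.uniform(0.0, 2.0, size=X.shape[1])
        e = errors_with_scaling(s, X, y)
        if e < best_e:
            best_s, best_e = s, e
    return best_s, best_e
```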