Neurocomputing 1990
DOI: 10.1007/978-3-642-76153-9_5

Single-layer learning revisited: a stepwise procedure for building and training a neural network

Abstract: A stepwise procedure for building and training a neural network intended to perform classification tasks, based on single-layer learning rules, is presented. This procedure breaks up the classification task into subtasks of increasing complexity in order to make learning easier. The network structure is not fixed in advance: it is subject to a growth process during learning. Therefore, after training, the architecture of the network is guaranteed to be well adapted for the classification problem.
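The abstract describes the procedure only in outline. As a rough Python sketch of the underlying idea, pairwise decomposition with one single-layer unit per pair of classes, each trained on its own subtask, the following code uses a plain perceptron rule and majority voting; both choices are assumptions made for illustration and do not reproduce the authors' exact method.

```python
import numpy as np
from itertools import combinations

def train_linear_unit(X, y, lr=0.1, epochs=100):
    """Train a single linear threshold unit with the perceptron rule on a
    two-class subtask (labels +1 / -1)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # pattern misclassified: update weights
                w += lr * yi * xi
                b += lr * yi
    return w, b

def build_pairwise_units(X, labels):
    """One unit per pair of classes, each trained only on its own subtask."""
    units = {}
    for a, c in combinations(np.unique(labels), 2):
        mask = np.isin(labels, [a, c])
        y = np.where(labels[mask] == a, 1, -1)
        units[(a, c)] = train_linear_unit(X[mask], y)
    return units

def classify(x, units):
    """Combine the pairwise units by majority vote over their decisions."""
    votes = {}
    for (a, c), (w, b) in units.items():
        winner = a if x @ w + b > 0 else c
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)
```

Note that this sketch fixes the set of units in advance rather than adding them incrementally, so it omits the growth process that the abstract describes.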

Cited by 576 publications (302 citation statements)
References 6 publications
“…Pairwise methods have previously been shown to improve performance for neural networks (Knerr et al. 1990, 1992; Price et al. 1995; Lu and Ito 1999), support vector machines (Schmidt and Gish 1996; Hastie and Tibshirani 1998; Kreßel 1999; Hsu and Lin 2002), statistical learning (Bradley and Terry 1952; Friedman 1996), rule and decision tree learning (Fürnkranz 2002, 2003) and others. Moreover, for two of the datasets used in this study, we have also experimented with calibrated pairwise ranking using support vector machines as base classifiers, and obtained similar results (Brinker et al. 2006).…”
Section: Learning Algorithms
Citation type: mentioning (confidence: 99%)
“…The 'One-against-one' [7] multiclass approach is implemented in this study as it has been shown to have comparable if not better generalized accuracy than alternative techniques and requires considerably less training time [8], [9]. The method consists of constructing an SVM for each pair of classes.…”
Section: Support Vector Machines
Citation type: mentioning (confidence: 99%)
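As an illustration of the one-against-one construction described in the statement above, here is a minimal sketch that fits one binary SVM per pair of classes and combines them by majority vote. The use of scikit-learn's SVC with an RBF kernel as the base learner is an assumption for illustration; the cited study does not specify an implementation in this excerpt.

```python
from itertools import combinations

import numpy as np
from sklearn.svm import SVC

def train_one_vs_one(X, y):
    """Fit one binary SVM per pair of classes, each trained only on the
    patterns belonging to that pair."""
    models = {}
    for a, b in combinations(np.unique(y), 2):
        mask = np.isin(y, [a, b])
        models[(a, b)] = SVC(kernel="rbf").fit(X[mask], y[mask])
    return models

def predict_one_vs_one(models, X):
    """Each pairwise SVM votes for one of its two classes; the class
    collecting the most votes wins."""
    classes = sorted({c for pair in models for c in pair})
    counts = {c: np.zeros(len(X), dtype=int) for c in classes}
    for (a, b), clf in models.items():
        pred = clf.predict(X)
        counts[a] += (pred == a).astype(int)
        counts[b] += (pred == b).astype(int)
    scores = np.stack([counts[c] for c in classes], axis=1)
    return np.array(classes)[np.argmax(scores, axis=1)]
```

In practice, scikit-learn's SVC already applies this one-vs-one decomposition internally when trained on more than two classes, so a single multiclass `SVC.fit` call yields the same kind of pairwise scheme.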
“…Classification of a test pattern is done according to the maximum output of these ten classifiers. There are some other ways to combine many two-class classifiers into a multiclass classifier (Platt, Cristianini, & Shawe-Taylor, in press; Friedman, 1996; Knerr, Personnaz, & Dreyfus, 1990).…”
Section: Experimental Settings
Citation type: mentioning (confidence: 99%)
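The max-output combination mentioned in this last statement can be sketched as follows: one binary classifier per class (ten for a ten-class problem), with each test pattern assigned to the class whose classifier produces the largest output. The choice of LinearSVC as the base classifier is an assumption made only for illustration; the quoted study's actual classifiers are not specified in this excerpt.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_one_vs_rest(X, y):
    """One binary classifier per class, trained to separate it from the rest."""
    classes = np.unique(y)
    models = [LinearSVC().fit(X, (y == c).astype(int)) for c in classes]
    return classes, models

def predict_max_output(classes, models, X):
    """Assign each test pattern to the class whose classifier gives the
    largest signed output, as described in the quoted passage."""
    scores = np.stack([m.decision_function(X) for m in models], axis=1)
    return classes[np.argmax(scores, axis=1)]
```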