2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence)
DOI: 10.1109/ijcnn.2008.4634217

Efficient supervised learning with reduced training exemplars

Abstract: In this article, we propose a new supervised learning approach for pattern classification applications involving large or imbalanced data sets. In this approach, a clustering technique is employed to reduce the original training set into a smaller set of representative training exemplars, represented by weighted cluster centers and their target outputs. Based on the proposed learning approach, two training algorithms are derived for feedforward neural networks. These algorithms are implemented and tested on tw…
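
The reduction step described in the abstract can be sketched in a few lines. This is a minimal illustration under assumptions the abstract leaves open, not the authors' algorithm: it clusters each class separately with k-means, takes each cluster center as one exemplar, the cluster's relative size as its weight, and the class label as its target output; `clusters_per_class` is a hypothetical parameter.

```python
# Minimal sketch of training-set reduction via weighted cluster centers.
# Assumptions (not from the paper): k-means per class, cluster size as
# the exemplar weight, and the class label as the target output.
import numpy as np
from sklearn.cluster import KMeans

def reduce_training_set(X, y, clusters_per_class=10, seed=0):
    """Replace (X, y) with weighted cluster centers and their targets."""
    centers, targets, weights = [], [], []
    for label in np.unique(y):
        Xc = X[y == label]
        k = min(clusters_per_class, len(Xc))
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Xc)
        counts = np.bincount(km.labels_, minlength=k)
        centers.append(km.cluster_centers_)
        targets.append(np.full(k, label))
        weights.append(counts / len(X))  # relative cluster mass
    return np.vstack(centers), np.concatenate(targets), np.concatenate(weights)
```

The weights would then scale each exemplar's contribution to the network's training cost, so the reduced set approximates the error surface of the full one; clustering per class also counteracts class imbalance, since minority classes keep their own representatives.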

Cited by 9 publications (7 citation statements), published between 2010 and 2018. References 26 publications.

“…The measured carbonation depth is assigned as the output neuron. The applied learning algorithm is Levenberg-Marquardt [58]. It is the fastest backpropagation procedure that updates weight and bias values in the negative gradient direction.…”
Section: Training
confidence: 99%
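
A note on the quoted description: the Levenberg-Marquardt step is not a pure negative-gradient update. Its standard form, stated here for reference, is

\[
\Delta \mathbf{w} = -\left(\mathbf{J}^{\top}\mathbf{J} + \mu \mathbf{I}\right)^{-1}\mathbf{J}^{\top}\mathbf{e},
\]

where \(\mathbf{J}\) is the Jacobian of the network errors \(\mathbf{e}\) with respect to the weights and biases and \(\mu\) is a damping parameter: large \(\mu\) yields a short step along the negative gradient, while \(\mu \to 0\) recovers the faster Gauss-Newton update.
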
“…Random sampling appears to perform well for reducing the training set size while retaining prediction performance [PJO99,WNC05]. More complex approaches to this problem exist; one uses centroids of weighted clusters, which essentially groups similar items in the training set and treats them as one item [NBP08]. With our approach, with respect to data availability, we are interested in minimizing the data required to make accuracy predictions.…”
Section: Impact of Training Data Availability on Prediction Accuracy
confidence: 99%
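
For contrast with the clustering approach, the random-sampling baseline mentioned in the quote reduces the training set in one draw; a hypothetical numpy sketch:

```python
# Hypothetical random-sampling baseline: keep k uniformly drawn
# exemplars, each implicitly carrying equal weight.
import numpy as np

def random_subset(X, y, k, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=k, replace=False)
    return X[idx], y[idx]
```
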
“…Unlike online learning, the performance and applicability of batch learning are generally limited by how much training data the available system memory can hold. However, online learning cannot fully optimize a cost function defined over the training examples, whereas batch learning can optimize it completely (Nguyen, Bouzerdoum, and Phung).…”
Section: Introduction
confidence: 99%
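
The batch/online distinction drawn in this quote can be made concrete on a linear least-squares cost; the sketch below is an assumed illustration, not code from the cited paper. Batch gradient descent evaluates the full cost summed over all training examples at every step and can drive it to a minimizer, while online learning updates after each example and only follows the full cost in expectation.

```python
# Illustrative contrast between batch and online (stochastic) gradient
# descent on a linear least-squares cost. Not from the cited paper.
import numpy as np

def batch_gd(X, y, lr=0.01, epochs=100):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(X)  # gradient of the full cost
        w -= lr * grad                     # one update per full pass
    return w

def online_sgd(X, y, lr=0.01, epochs=100, seed=0):
    w = np.zeros(X.shape[1])
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):  # one update per example
            w -= lr * (X[i] @ w - y[i]) * X[i]
    return w
```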