2017
DOI: 10.1016/j.neunet.2017.07.018
Neural network for regression problems with reduced training sets

Abstract: Although they are powerful and successful in many applications, artificial neural networks (ANNs) typically do not perform well with complex problems that have a limited number of training cases. Often, collecting additional training data may not be feasible or may be costly. Thus, this work presents a new radial-basis network (RBN) design that overcomes the limitations of using ANNs to accurately model regression problems with minimal training data. This new design involves a multi-stage training process that…

Cited by 62 publications (28 citation statements)
References 29 publications
“…If the network's answer is not satisfactory (the mistake is too big), a modification of the network's weights takes place, so that at the next presentation of the example the error is smaller. The majority of neural-network training algorithms belong to gradient methods of training (Bataineh and Marler, 2017). However, their detailed explanation exceeds the scope of this work.…”
Section: The Training Process Of a Neural Network
confidence: 97%
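The error-driven weight update described in this excerpt can be sketched as a single gradient-descent step on squared error. This is a minimal illustration of the general idea, not the cited paper's algorithm; the names `gradient_step` and `lr` are assumptions for the example.

```python
import numpy as np

def gradient_step(weights, x, target, lr=0.1):
    """One gradient-descent update: shrink the error on the next presentation of x."""
    prediction = weights @ x        # the network's answer for this example
    error = prediction - target     # the "mistake" on this example
    grad = error * x                # gradient of 0.5 * error**2 w.r.t. the weights
    return weights - lr * grad      # move the weights against the gradient

# Repeated presentations of the same example drive its error toward zero.
w = np.zeros(2)
x = np.array([1.0, 2.0])
for _ in range(50):
    w = gradient_step(w, x, target=3.0)
```

After the loop, `w @ x` is very close to the target, which is exactly the "error is smaller at the next presentation" behavior the excerpt describes.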
“…Their approach first computes the standard deviation of each cluster (after applying k-means-like clustering to the data) and then uses a scaled version of each cluster's standard deviation as the shape parameter for the corresponding RBF in the network. The work in [2] used a similar approach, taking the root-mean-square deviation (RMSD) between the RBF centers and the data values as the shape parameter for each RBF in the network. The authors used a modified orthogonal least squares (OLS) algorithm to select the RBF centers.…”
Section: Related Work
confidence: 99%
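The width-selection idea summarized in this excerpt — cluster the data, then use a scaled per-cluster standard deviation as each RBF's shape parameter — can be sketched as follows. This is a minimal 1-D illustration; the quantile initialization, the tiny k-means loop, and the scale factor `s` are assumptions, not the cited authors' exact procedure.

```python
import numpy as np

def rbf_widths(data, k, s=1.0, iters=20):
    """Cluster 1-D data (k-means style); return centers and a scaled
    per-cluster standard deviation to use as each RBF's shape parameter."""
    # deterministic initialization: spread initial centers over the data range
    centers = np.quantile(data, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(np.abs(data[:, None] - centers[None, :]), axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean()
    # final assignment, then scaled standard deviation per cluster
    labels = np.argmin(np.abs(data[:, None] - centers[None, :]), axis=1)
    widths = np.array([s * data[labels == j].std() if np.any(labels == j) else s
                       for j in range(k)])
    return centers, widths

# two well-separated 1-D clusters around 0.0 and 5.0
data = np.concatenate([np.random.default_rng(1).normal(0.0, 0.5, 50),
                       np.random.default_rng(2).normal(5.0, 0.5, 50)])
centers, widths = rbf_widths(data, k=2)
```

The recovered widths track the spread of each cluster, which is the property that makes them usable as RBF shape parameters.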
“…where is the number of training data samples and is the number of frequency samples, which is equal to 200. The large amount of training data was reduced to only 27 samples (Bataineh and Marler, 2017). The reduction procedure depends on selecting the resonant-frequency samples from the training data.…”
Section: Prior Knowledge Input With Different
confidence: 99%
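The reduction idea in this excerpt — keep only the samples at resonant frequencies rather than all 200 frequency samples — might look like the following sketch. The synthetic response and the simple local-maximum selection rule are assumptions for illustration, not the cited paper's procedure.

```python
import numpy as np

def select_resonant_samples(freqs, response):
    """Keep only the samples at local maxima (resonant peaks) of the response."""
    interior = np.arange(1, len(response) - 1)
    # a sample is a peak if it exceeds both of its neighbors
    peaks = interior[(response[interior] > response[interior - 1]) &
                     (response[interior] > response[interior + 1])]
    return freqs[peaks], response[peaks]

# synthetic frequency response sampled at 200 points
freqs = np.linspace(0.0, 10.0, 200)
response = np.sin(freqs) + 0.2 * np.sin(5.0 * freqs)
kept_f, kept_r = select_resonant_samples(freqs, response)
```

The selected set is far smaller than the original 200 samples while retaining the resonant features, which is the spirit of the reduction described above.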