2015
DOI: 10.1007/s00500-015-1690-9

Reducing noise impact on MLP training

Abstract: In this paper we propose and discuss several new approaches to noise-resistant training of multilayer perceptron neural networks. Two groups of approaches are presented and compared: input approaches, based on instance selection and outlier detection, and output approaches, based on modified robust error objective functions. In addition, we compare them to several known methods. The experimental evaluation of the methods on classification and regression tasks and the comparison of their performances for different amounts of noise …
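As an illustration of the output-based approaches the abstract mentions, the sketch below trains a small MLP with a Huber-style robust error signal instead of the plain MSE residual, so grossly corrupted targets pull the weights less strongly. This is a minimal, hypothetical example, not the authors' implementation; the one-hidden-layer architecture, the delta threshold, and all names (`huber_grad`, `train_mlp_robust`) are assumptions.

```python
import numpy as np

def huber_grad(residual, delta=1.0):
    """Gradient of the Huber loss w.r.t. the residual: quadratic (MSE-like)
    near zero, linear for large residuals, so outlying targets pull less."""
    return np.where(np.abs(residual) <= delta, residual, delta * np.sign(residual))

def train_mlp_robust(X, y, hidden=10, lr=0.05, epochs=500, delta=1.0, seed=0):
    """One-hidden-layer MLP regressor trained with a Huber-style objective."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=0.5, size=(d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1)); b2 = np.zeros(1)
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                 # hidden activations
        pred = h @ W2 + b2                       # linear output
        err = huber_grad(pred - y, delta) / n    # robust error signal (plain pred - y would give MSE)
        dh = (err @ W2.T) * (1.0 - h ** 2)       # backprop through tanh
        W2 -= lr * (h.T @ err); b2 -= lr * err.sum(axis=0)
        W1 -= lr * (X.T @ dh);  b1 -= lr * dh.sum(axis=0)
    return W1, b1, W2, b2

# Toy usage: a sine target with a few grossly corrupted labels.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)
y[:10] += 5.0                                    # injected output noise
params = train_mlp_robust(X, y)
```

Because the error signal saturates at `delta` for large residuals, the ten corrupted points contribute a bounded gradient, which is the essence of robust objective functions for noisy training data.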

Cited by 27 publications (13 citation statements). References 25 publications (49 reference statements).
“…It should be noted that BPNN-NAR is a feedforward neural network model, while BPNN-NARMA is a recurrent neural network model [18]. This study reports the error measures on the test dataset, which is the most important characteristic, reflecting the ANN's generalization ability [19,20]. Table 2 shows the data partitioning for network pre-processing.…”
Section: Methods (mentioning)
confidence: 99%
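The practice this statement describes, judging a trained network by error measures on a held-out test set rather than on the training data, can be sketched as follows. This is a generic illustration under assumed names (`error_measures`, `predict_fn`), not code from the cited study.

```python
import numpy as np

def error_measures(y_true, y_pred):
    """Standard test-set error measures used as a proxy for generalization."""
    resid = np.asarray(y_true) - np.asarray(y_pred)
    return {
        "MAE":  float(np.mean(np.abs(resid))),
        "RMSE": float(np.sqrt(np.mean(resid ** 2))),
    }

# Hypothetical partitioning: fit on the training split only and report
# the measures on the untouched test split.
def evaluate(predict_fn, X_test, y_test):
    return error_measures(y_test, predict_fn(X_test))
```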
“…In this context, Jorgensen and Duffy stated that the accuracy of a model cannot exceed the accuracy of the experimental data [6]. Although this statement is correct, it can be further refined, since machine learning (ML) algorithms are capable of dealing with errors in the training data [11]. To put it differently, the observed performance of a model cannot be better than the internal error of the test set.…”
Section: Training and Testing the Model (mentioning)
confidence: 99%
“…It is also worth mentioning that the use of instance selection methods is not limited to the nearest neighbor algorithm; they have also found application with many other predictors. For example, Kordos and Rusiecki (2016) evaluated various approaches to MLP neural network training in the presence of noise. Their results indicated that ENN was among the best methods.…”
Section: 1 (mentioning)
confidence: 99%
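A compact sketch of the Edited Nearest Neighbor (ENN) rule referenced above: each training instance whose class label disagrees with the majority vote of its k nearest neighbors is discarded before the predictor is trained. This is a generic illustration of the classic rule, not the exact procedure evaluated by Kordos and Rusiecki (2016); the brute-force distance computation and the k=3 default are assumptions.

```python
import numpy as np

def enn_filter(X, y, k=3):
    """Edited Nearest Neighbor: discard every instance whose class label
    disagrees with the majority vote of its k nearest neighbors."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=int)          # integer class labels assumed
    # Brute-force pairwise squared Euclidean distances (fine for small n).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)          # a point is not its own neighbor
    keep = np.ones(len(X), dtype=bool)
    for i in range(len(X)):
        nn = np.argsort(d2[i])[:k]        # k nearest neighbors of instance i
        majority = np.bincount(y[nn]).argmax()
        keep[i] = (majority == y[i])      # keep only if the neighborhood agrees
    return X[keep], y[keep]

# The MLP (or any other predictor) is then trained on the filtered set:
# X_clean, y_clean = enn_filter(X_train, y_train, k=3)
```

Training on the filtered `(X_clean, y_clean)` rather than the raw set is the input-side noise handling the statement refers to, and it is predictor-agnostic, which is why instance selection transfers from nearest-neighbor classifiers to MLPs.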