2017
DOI: 10.4467/20838476si.16.012.6193
Data Selection for Neural Networks

Abstract: Several approaches to joint feature and instance selection in neural network learning are discussed and experimentally evaluated with respect to classification accuracy and dataset compression, also considering their computational complexity. These include various versions of feature and instance selection performed prior to network learning, selection embedded in the neural network, and hybrid approaches, including solutions developed by us. The advantages and disadvantages of each approach are discussed…

Cited by 6 publications (5 citation statements)
References 20 publications
“…Our results indicate that the use of the ENN algorithm in our framework significantly improves training time, a finding consistent with previous studies [27][28][29]. However, reducing the size of the training dataset does slightly impact the accuracy of our deep learning models.…”
Section: Results (supporting)
confidence: 90%
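The ENN filtering referred to in the statement above can be sketched as follows. This is an illustrative NumPy implementation of Wilson's classic edited nearest-neighbour rule, not the exact pipeline of the citing framework; the toy dataset and `k=3` setting are assumptions for demonstration.

```python
import numpy as np

def edited_nearest_neighbours(X, y, k=3):
    """Wilson's ENN rule: drop every instance whose k nearest
    neighbours vote for a different class (removes noisy points)."""
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                       # exclude the point itself
        nn = np.argsort(d)[:k]              # indices of k nearest neighbours
        labels, counts = np.unique(y[nn], return_counts=True)
        if labels[np.argmax(counts)] == y[i]:
            keep.append(i)
    return np.array(keep)

# Two tight clusters plus one mislabelled point inside the first cluster
X = np.array([[0, 0], [0.1, 0], [0, 0.1], [0.05, 0.05],
              [5, 5], [5.1, 5], [5, 5.1], [5.05, 5.05],
              [0.02, 0.02]], dtype=float)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1])   # last label is noise
keep = edited_nearest_neighbours(X, y)
print(keep)  # the mislabelled instance (index 8) is filtered out
```

Removing such boundary noise is what shrinks the training set, which explains the speed-up (and the slight accuracy cost) reported above.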
“…Instance Reduction Techniques (IR), sometimes referred to as Instance Selection Algorithms (IS), reduce the size of the training dataset by selecting which instances to keep for use in the generalization process [14]. Instance Reduction Techniques are most commonly utilized in the context of Instance-Based Learning (IBL) [26], although they are used to improve classification accuracy by removing noisy instances, speed up neural networks [27,28], and reduce excessive memory storage [29].…”
Section: Instance Reduction Techniques (IR) (mentioning)
confidence: 99%
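A complementary instance-reduction strategy mentioned in the context of instance-based learning is condensation, which keeps a minimal prototype set rather than deleting noise. The sketch below is an illustrative NumPy version of Hart's condensed nearest-neighbour idea; the seed choice and toy data are assumptions, not taken from the cited works.

```python
import numpy as np

def condensed_nn(X, y):
    """Hart's condensed nearest-neighbour sketch: absorb into the store
    only instances the current store misclassifies under a 1-NN rule,
    cutting the memory an instance-based learner must retain."""
    store = [0]                                   # seed with the first instance
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i in store:
                continue
            d = np.linalg.norm(X[store] - X[i], axis=1)
            if y[store[int(np.argmin(d))]] != y[i]:
                store.append(i)                   # absorb misclassified instance
                changed = True
    return store

X = np.array([[0, 0], [0.1, 0], [0, 0.1],
              [5, 5], [5.1, 5], [5, 5.1]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])
store = condensed_nn(X, y)
print(store)  # a small prototype set covering both classes
```

In this two-cluster example a single prototype per class suffices, which is the storage-reduction effect the quoted passage describes.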
“…Based on Demuth et al. (2014) and Kordos (2016), the choice of data set size is closely related to the choice of the number of neurons in the neural network (explained in the network architecture, section Research Methodology and Experimental Design). In our case, given that the entire neural network training process is iterative, it is the network performance that indicates that we have enough data.…”
Section: Methodological Approach (mentioning)
confidence: 99%
“…Several studies have demonstrated the potential of feature selection methods to improve predictors in recent years ( [5], [9], [41], [47]). Since feature selection aims to reduce the dimension of a dataset by selecting variables that are relevant to the predicting attribute(s), recursive feature elimination (RFE) has been performed to eliminate some of the original input features and retain the minimum subset of features that yield the best classification performance.…”
Section: Reviews (mentioning)
confidence: 99%
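The recursive feature elimination procedure described above can be sketched in a few lines. Real RFE wraps an arbitrary estimator and ranks features by its weights or importances; the least-squares linear fit below is an illustrative stand-in, and the synthetic data (only features 0 and 2 informative) is an assumption for demonstration.

```python
import numpy as np

def rfe(X, y, n_keep):
    """RFE sketch: repeatedly fit a least-squares linear model and drop
    the feature with the smallest absolute weight until n_keep remain."""
    features = list(range(X.shape[1]))
    while len(features) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, features], y, rcond=None)
        features.pop(int(np.argmin(np.abs(w))))  # eliminate weakest feature
    return features

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = 3 * X[:, 0] - 2 * X[:, 2]        # only features 0 and 2 matter
selected = rfe(X, y, n_keep=2)
print(selected)                      # the two informative features survive
```

Eliminating one feature per round and refitting is what lets RFE retain the minimum subset that preserves predictive performance, as the quoted passage notes.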