2018
DOI: 10.26438/ijcse/v6i5.241254

The Classification of Data: A Novel Artificial Neural Network (ANN) Approach through Exhaustive Validation and Weight Initialization

Cited by 3 publications (3 citation statements)
References 0 publications
“…The proposed methodology is based on an n-fold training-validation-test approach, which is a combination of n-fold cross-validation with a separate validation set [21]. In n-fold cross-validation the dataset is initially divided into 'n' folds.…”
Section: Proposed Methodology (citation type: mentioning)
confidence: 99%
“…So, to give maximum exposure to both classes, we perform 15-fold cross-validation; in each experiment 13 folds are used for training while the 14th and 15th folds are used for validation and testing, respectively, so that every fold eventually acts as the test set. It is also revealed from the K-fold TVT approach [9] (Table 1) that the solution to the problem of over-fitting in ANN is hidden in the data itself, i.e. we are discovering the validation set most friendly to the test set to avoid over-fitting.…”
Section: Classification Through K-fold TVT Approach (citation type: mentioning)
confidence: 99%
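
The 13/1/1 fold rotation described in that excerpt can be sketched roughly as follows in Python; the use of scikit-learn's KFold and an MLPClassifier as the ANN stand-in is an assumption made here for illustration, not the cited authors' implementation.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def kfold_tvt(X, y, n_folds=15, seed=0):
    # Split the dataset into n disjoint folds.
    folds = [idx for _, idx in
             KFold(n_splits=n_folds, shuffle=True, random_state=seed).split(X)]
    scores = []
    for i in range(n_folds):
        test_idx = folds[i]                  # this fold serves as the test set
        val_idx = folds[(i + 1) % n_folds]   # the next fold serves as validation
        train_idx = np.concatenate(
            [folds[j] for j in range(n_folds)
             if j != i and j != (i + 1) % n_folds])  # remaining 13 folds train
        clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500,
                            random_state=seed)
        clf.fit(X[train_idx], y[train_idx])
        # The validation score would guide stopping / weight selection;
        # here it is simply reported next to the test score.
        val_acc = accuracy_score(y[val_idx], clf.predict(X[val_idx]))
        test_acc = accuracy_score(y[test_idx], clf.predict(X[test_idx]))
        scores.append((val_acc, test_acc))
    return scores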
“…They have the unreasonable capability to establish a relationship between the input and output data using mathematical techniques, where the data format must be numeric. ANNs deliver excellent results in the classification of data [1, 2, 3], which falls under supervised learning. In this article, we explore Kohonen's SOM as well as its one-dimensional (1D) and two-dimensional (2D) clustering spaces and observe that it is also capable of giving excellent results in clustering.…”
Section: Introduction (citation type: mentioning)
confidence: 99%