2016 IEEE 6th International Conference on Advanced Computing (IACC) 2016
DOI: 10.1109/iacc.2016.25
Analysis of k-Fold Cross-Validation over Hold-Out Validation on Colossal Datasets for Quality Classification

Cited by 485 publications (247 citation statements)
References 14 publications
“…In [20] a comparative study was made between k-fold cross-validation and hold-out validation on colossal datasets for quality classification. The results show that for large datasets both methods give very close results.…”
Section: Experiments by Using the Holdout Methods for the Validation
confidence: 99%
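The comparison summarized in this excerpt can be illustrated with a minimal sketch: on a sufficiently large dataset, a single hold-out split and k-fold cross-validation tend to produce very similar accuracy estimates. The synthetic dataset, logistic-regression classifier, 70/30 split, and k = 10 below are illustrative assumptions, not the experimental setup of the cited paper.

```python
# Illustrative sketch only: the classifier, synthetic dataset, and split sizes
# are assumptions, not the experiment reported in the cited paper.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold, cross_val_score, train_test_split

# A reasonably large synthetic dataset stands in for a "colossal" one.
X, y = make_classification(n_samples=100_000, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000)

# Hold-out validation: a single 70/30 train/test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
holdout_acc = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))

# k-fold cross-validation: mean accuracy over k rotating test folds.
kfold_acc = cross_val_score(
    clf, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0)
).mean()

print(f"hold-out accuracy:   {holdout_acc:.4f}")
print(f"10-fold CV accuracy: {kfold_acc:.4f}")  # typically very close on large data
```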
“…They compared different feature extraction methods to determine which algorithm is best suited in terms of execution time for sentiment analysis on the given dataset [13]. k-fold cross-validation was studied in [14]. Since we had a limited amount of data, we perform k-fold cross-validation to obtain an unbiased prediction of the model.…”
Section: Literature Review
confidence: 99%
“…This means our model could overfit to populations that appear more frequently. To reduce the chance of overfitting from population bias in the training set we could perform k-fold cross validation (Yadav & Shukla 2016) when splitting the sample set. K-fold validation splits the sample set into an arbitrary 'k' number of folds where one fold becomes the test set whilst the remaining folds are merged to form the training set.…”
Section: Limitations of Our Model
confidence: 99%
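The fold mechanism described in this excerpt, where each fold serves once as the test set while the remaining folds are merged into the training set, can be sketched as follows. The synthetic data, decision-tree model, and k = 5 are assumptions made for illustration only.

```python
# Minimal sketch of the k-fold mechanism described above; the data, model,
# and k = 5 are illustrative assumptions, not the cited authors' setup.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # One fold is held out as the test set; the remaining folds form the training set.
    model = DecisionTreeClassifier().fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))

# Averaging over the folds gives an estimate that is less sensitive to any
# single train/test split than a one-shot hold-out evaluation.
print(f"mean 5-fold accuracy: {np.mean(scores):.3f}")
```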