Proceedings of 1995 IEEE International Symposium on Information Theory
DOI: 10.1109/isit.1995.531518
Optimal stopping and effective machine complexity in learning

Cited by 14 publications (4 citation statements); references 6 publications.
“…The test set is generated separately, with 100 configurations per temperature. From the training data, we randomly select 10% for cross-validation, in order to decrease the chance of overfitting and to identify a definitive stopping point for training using early stopping [31][32][33].…”
Section: Methods and Results
Confidence: 99%
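The early-stopping procedure quoted above can be illustrated with a short Python sketch: a model is trained while 10% of the training data is held out as a validation split, and training halts once the validation score stops improving. The scikit-learn classifier, the synthetic data, and the hyperparameters below are illustrative assumptions, not details taken from the citing work.

```python
# Minimal sketch of early stopping with a 10% held-out validation split.
# Model, data, and hyperparameters are placeholders, not the cited setup.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 20))            # placeholder training configurations
y_train = (X_train.sum(axis=1) > 0).astype(int)  # placeholder labels

clf = MLPClassifier(
    hidden_layer_sizes=(32,),
    early_stopping=True,      # monitor a held-out validation split during training
    validation_fraction=0.1,  # the 10% cross-validation split described in the quote
    n_iter_no_change=10,      # patience before declaring the stopping point
    max_iter=500,
    random_state=0,
)
clf.fit(X_train, y_train)
print("training stopped after", clf.n_iter_, "iterations")
```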
“…An appropriate regularization parameter can be selected by cross-validation [100,114,115]. The dataset is randomly divided into a training set and a validation (test) set, with the major portion of the dataset included in the training set and the remaining in the validation set.…”
Section: Selection of Regularization Parameter
Confidence: 99%
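The cross-validation selection of a regularization parameter described in this statement can be sketched as follows: the data are split into a larger training portion and a smaller validation portion, a model is fitted for each candidate regularization strength, and the value with the lowest validation error is retained. Ridge regression, the 80/20 split, and the candidate grid below are assumptions for illustration, not the cited method.

```python
# Illustrative sketch of picking a regularization strength by cross-validation:
# fit one model per candidate value and keep the best validation-set performer.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 30))
y = X @ rng.normal(size=30) + 0.1 * rng.normal(size=500)

# Major portion for training, remainder for validation, as described in the quote.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=1)

best_alpha, best_err = None, np.inf
for alpha in np.logspace(-4, 2, 13):   # candidate regularization strengths (assumed grid)
    model = Ridge(alpha=alpha).fit(X_tr, y_tr)
    err = mean_squared_error(y_val, model.predict(X_val))
    if err < best_err:
        best_alpha, best_err = alpha, err
print(f"selected alpha = {best_alpha:.4g} (validation MSE = {best_err:.4g})")
```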
“…To obtain the optimum numbers of delay taps and neurons in the hidden layer, several configurations were trained using various values of such parameters. The employed training algorithm was based on the Levenberg-Marquardt algorithm [55], and the early stopping method was utilized to stop the training [56]. Of the 10,800 total samples, 70% were used for training, 15% for validation, and the remaining 15% for evaluation.…”
Section: ANN-based Submodel
Confidence: 99%
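A rough sketch of the data handling quoted above: delay-tap input vectors are formed from a time series, and the samples are split 70/15/15 into training, validation (used to trigger early stopping), and evaluation sets. The synthetic signal, the number of taps, and the random split below are assumptions; the Levenberg-Marquardt training itself is not reproduced here.

```python
# Sketch of delay-tap construction and a 70/15/15 train/validation/evaluation split.
# The network and its Levenberg-Marquardt training are omitted; only the data
# handling described in the quote is illustrated, on a synthetic signal.
import numpy as np

def make_delay_taps(signal, n_taps):
    # Each input row holds n_taps past samples; the target is the next sample.
    X = np.array([signal[i:i + n_taps] for i in range(len(signal) - n_taps)])
    y = signal[n_taps:]
    return X, y

rng = np.random.default_rng(2)
signal = np.sin(np.linspace(0, 60, 10_800)) + 0.05 * rng.normal(size=10_800)
X, y = make_delay_taps(signal, n_taps=8)   # number of taps is an assumed value

n = len(X)
idx = rng.permutation(n)
n_tr, n_val = int(0.70 * n), int(0.15 * n)
train_idx = idx[:n_tr]                      # 70% for training
val_idx = idx[n_tr:n_tr + n_val]            # 15% for validation / early stopping
test_idx = idx[n_tr + n_val:]               # remaining 15% for final evaluation
print(len(train_idx), len(val_idx), len(test_idx))
```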