Proceedings of ICNN'95 - International Conference on Neural Networks
DOI: 10.1109/icnn.1995.487330
The estimation theory and optimization algorithm for the number of hidden units in the higher-order feedforward neural network

Cited by 26 publications (5 citation statements)
References 5 publications
“…Keras is a high-level Application Programming Interface (API) built around models and layers as its main structures. The most time-consuming step is determining the optimum architecture of the model to minimize the error (Li et al., 1995). XGBoost, or Extreme Gradient Boosting, is a widely used machine learning method in data science and machine learning competitions because it performs well on most data sets (Chen & Guestrin, 2016).…”
Section: Supervised Machine Learning Algorithm
confidence: 99%
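The architecture search this statement alludes to can be illustrated with a short sketch. This is not code from any of the cited works; it is a minimal example, on synthetic placeholder data, of looping over candidate hidden-layer widths with the Keras Sequential API and keeping the one with the lowest validation error.

```python
# Minimal sketch (not code from the cited works): compare candidate
# hidden-layer widths with the Keras Sequential API and keep the one
# with the lowest validation error. Data here is synthetic placeholder.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 8)), rng.normal(size=200)
X_val, y_val = rng.normal(size=(50, 8)), rng.normal(size=50)

def build_model(n_hidden, n_inputs):
    """One-hidden-layer regression model with n_hidden units."""
    model = keras.Sequential([
        keras.Input(shape=(n_inputs,)),
        layers.Dense(n_hidden, activation="relu"),
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

best_units, best_err = None, float("inf")
for n_hidden in (4, 8, 16, 32):  # candidate architectures
    model = build_model(n_hidden, X_train.shape[1])
    model.fit(X_train, y_train, epochs=50, verbose=0)
    err = model.evaluate(X_val, y_val, verbose=0)
    if err < best_err:
        best_units, best_err = n_hidden, err
print(f"best hidden units: {best_units} (val MSE {best_err:.4f})")
```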
“…In addition, neural networks were tested by varying the number of hidden layers from zero to three, with R² used to evaluate the networks. Jin-Yan and Ying-Lin [18] improved the theory proposed by an earlier study [19]. The developed method was applied and tested on time-series prediction and system identification problems using higher-order neural networks.…”
Section: Previous Work
confidence: 99%
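As a rough illustration of the evaluation described in this statement (not the cited study's code), the sketch below varies the number of hidden layers from zero to three and scores each network with R²; the data, layer width, and training settings are placeholder assumptions.

```python
# Hedged sketch of the evaluation described above (not the study's code):
# vary the number of hidden layers from 0 to 3 and score each network
# with R^2. Data, layer width, and training settings are placeholders.
import numpy as np
from sklearn.metrics import r2_score
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(200, 8)), rng.normal(size=200)
X_test, y_test = rng.normal(size=(50, 8)), rng.normal(size=50)

def make_net(n_hidden_layers, n_inputs, width=16):
    model = keras.Sequential([keras.Input(shape=(n_inputs,))])
    for _ in range(n_hidden_layers):  # zero hidden layers -> linear model
        model.add(layers.Dense(width, activation="tanh"))
    model.add(layers.Dense(1))
    model.compile(optimizer="adam", loss="mse")
    return model

for depth in range(4):  # 0, 1, 2, 3 hidden layers
    net = make_net(depth, X_train.shape[1])
    net.fit(X_train, y_train, epochs=50, verbose=0)
    r2 = r2_score(y_test, net.predict(X_test, verbose=0).ravel())
    print(f"{depth} hidden layer(s): R^2 = {r2:.3f}")
```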
“…Typically, these recommendations are applicable to the specific cases of a particular network topology.

Li et al. method [75]: N_h = (√(1 + 8N) − 1)/2
Tamura and Tateishi method [76]: N_h = N − 1
Fujita method [77]: N_h = K log‖P_c Z‖ / log S
Zhang et al. method [78]: N_h = 2^n / (n + 1)
Jinchuan and Xinzhe method [79]: N_h = (N_in + √N_p) / L
Xu and Chen method [80]: N_h = C_f (N / (d log N))^(1/2)…”
Section: Neural Network Architecture for Anomaly Prediction (3.1)
confidence: 99%
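The closed-form rules of thumb in the table above translate directly into code. The sketch below is a hedged rendering: the variable meanings (n for inputs, N for training samples, N_in and N_p for inputs and patterns, L for hidden layers, d for input dimension, C_f for a problem-dependent constant) follow the usual reading of these formulas in survey papers rather than definitions given here, and the Fujita method is omitted because it depends on quantities (K, ‖P_c Z‖, S) not defined in this excerpt.

```python
# Hedged Python rendering of the rules of thumb in the table above.
# Variable meanings (n = inputs, N = training samples, N_in/N_p =
# inputs/patterns, L = hidden layers, d = input dimension, C_f =
# problem-dependent constant) follow the usual reading of these
# formulas in surveys; consult the original papers for exact terms.
# The Fujita method is omitted (it needs K, ||P_c Z||, and S).
import math

def li_et_al(N):                    # [75]: N_h = (sqrt(1 + 8N) - 1) / 2
    return (math.sqrt(1 + 8 * N) - 1) / 2

def tamura_tateishi(N):             # [76]: N_h = N - 1
    return N - 1

def zhang_et_al(n):                 # [78]: N_h = 2^n / (n + 1)
    return 2 ** n / (n + 1)

def jinchuan_xinzhe(N_in, N_p, L):  # [79]: N_h = (N_in + sqrt(N_p)) / L
    return (N_in + math.sqrt(N_p)) / L

def xu_chen(N, d, C_f=1.0):         # [80]: N_h = C_f * (N / (d log N))^(1/2)
    return C_f * (N / (d * math.log(N))) ** 0.5

# Example: recommendations for 8 inputs, 500 samples, 1 hidden layer.
print(li_et_al(500), tamura_tateishi(500), zhang_et_al(8),
      jinchuan_xinzhe(8, 500, 1), xu_chen(500, 8))
```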