2010 International Conference on Intelligent Computation Technology and Automation 2010
DOI: 10.1109/icicta.2010.193
Analysis of Feature Extraction Criterion Function Maximum in Nonlinear Multi-layer Feedforward Neural Networks for Pattern Recognition

Cited by 2 publications (5 citation statements) · References 5 publications
“…According to the results shown in Fig. (6) and Fig. (7), the best performance of GNN can be achieved over a wide range of combinations of (γ, L). In particular, the performance of GNN is not sensitive to L, and remains good even when only a few neurons exist in the network.…”
Section: E. Sensitivity to Hyperparameters
Confidence: 85%
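The sensitivity analysis described in this excerpt amounts to sweeping a grid of hyperparameter pairs and checking how flat the error surface is. As a minimal sketch — assuming γ acts as a ridge-regularization strength and L as the hidden-layer width of a generic random-feature regressor (the GNN itself is not specified here) — such a sweep might look like:

```python
import numpy as np

# Hypothetical (gamma, L) sensitivity sweep. gamma is assumed to be a ridge
# penalty and L the number of random tanh hidden neurons; this is a generic
# random-feature regressor, not the GNN from the citing paper.
rng = np.random.default_rng(1)

def fit_eval(X, y, gamma, L):
    """Train with ridge-regularized least squares; return training MSE."""
    W = rng.normal(size=(X.shape[1], L))   # random input-to-hidden weights
    b = rng.normal(size=L)                 # random hidden biases
    H = np.tanh(X @ W + b)                 # hidden-layer activations
    # Ridge solution: beta = (H^T H + gamma I)^{-1} H^T y
    beta = np.linalg.solve(H.T @ H + gamma * np.eye(L), H.T @ y)
    return float(np.mean((H @ beta - y) ** 2))

# Toy regression target y = sin(x).
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
errors = {(g, L): fit_eval(X, y, g, L)
          for g in (1e-4, 1e-2, 1.0)
          for L in (10, 50, 200)}
best = min(errors, key=errors.get)
print("best (gamma, L):", best)
```

A flat error surface across many (γ, L) cells — the behavior the excerpt reports for GNN — would show up here as many entries of `errors` being close to the minimum.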
“…2) Performance when L is small: From Fig. (6) and Fig. (7), we can also see that when neurons are few, the generalization performance of GNN is still good, while the generalization performance of ELM grows slowly as L grows. Thus, experiments on the same two data sets are conducted to compare the generalization performance of the two algorithms when L is small.…”
Section: E. Sensitivity to Hyperparameters
Confidence: 88%
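The comparison in this excerpt hinges on how an ELM's accuracy depends on its neuron count L. A minimal ELM sketch — a generic textbook ELM, not the implementation from the citing paper — makes that dependence concrete: hidden weights are drawn at random and only the output weights are fit, so L directly controls capacity.

```python
import numpy as np

# Minimal Extreme Learning Machine (ELM) sketch: hidden weights are random
# and only the output weights are fit by least squares, so the neuron count
# L directly controls model capacity.
rng = np.random.default_rng(0)

def elm_fit(X, y, L):
    """Fit an ELM with L hidden tanh neurons; return (W, b, beta)."""
    W = rng.normal(size=(X.shape[1], L))          # random input-to-hidden weights
    b = rng.normal(size=L)                        # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression target y = sin(x): compare a very small L with a larger one.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
mses = {}
for L in (5, 50):
    W, b, beta = elm_fit(X, y, L)
    mses[L] = float(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
print(mses)
```

Because only `beta` is trained, a small L leaves the random feature space too coarse to fit the target well, which matches the excerpt's motivation for testing both algorithms at small L.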
“…To answer the first challenge, in terms of identifying the types of NN modelers that provide consistent performance with our system, a collection of well-known NNs and their cascaded versions is considered. These NN modelers include the Single-Layer FeedForward Neural Network (SLNN) [32], Multilayer FeedForward Neural Network (MLNN) [33], SLNN-cascade, MLNN-cascade, radial-basis NN (RBNN) [34], RBNN-cascade, generalized regression NN (GRNN) [35], exact radial basis network (ERBNN) [36], ERBNN-cascade, and Cascade-forward NN. A detailed comparison between the performances of these NNs is presented in the Supplementary Data.…”
Section: Analysis and Testing of Curvature
Confidence: 99%