9th IEEE International Conference on Cognitive Informatics (ICCI'10) 2010
DOI: 10.1109/coginf.2010.5599677
Quadratic neural unit is a good compromise between linear models and neural networks for industrial applications

Cited by 13 publications (20 citation statements, all classified as mentioning; citing works published 2012–2023) · References 16 publications
“…However, the design of a proper learning model and its correct pre-training are crucial and non-trivial tasks for correct evaluation of LE, and they may require an expert in adaptive (learning) systems or in neural networks. Nevertheless, from our experiments with AP and HONU [1,2,36–40], it appears that very precise pre-training of the learning model is, in practice, not always crucial, and that the structure of a learning model can be designed quite universally, e.g., with HONUs, as they are nonlinear mapping predictors that are naturally linear in parameters. A practical rule of thumb for the above-introduced HONU and GD is to keep pre-training as long as the error criterion keeps decreasing, i.e., until the learning model learns what it can with respect to its quality of approximation vs. the complexity of the data.…”
Section: A Hyper-chaotic Time Series
confidence: 99%
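To make the "naturally linear in parameters" property concrete, here is a minimal Python sketch of a quadratic neural unit (QNU, the second-order HONU named in the paper's title). The function names and the bias-augmentation convention are illustrative assumptions, not taken from the cited works:

```python
import numpy as np

def qnu_basis(x):
    """Map an input vector x to the column of all quadratic terms
    xa[i] * xa[j], i <= j, where xa is x augmented with a bias of 1.
    The QNU output is then a plain dot product with the weights,
    so the model is nonlinear in x but linear in its parameters."""
    xa = np.concatenate(([1.0], np.asarray(x, dtype=float)))
    n = len(xa)
    return np.array([xa[i] * xa[j] for i in range(n) for j in range(i, n)])

def qnu_predict(w, x):
    """QNU output: y = w . colx(x), a linear function of w."""
    return w @ qnu_basis(x)
```

Because the output is a dot product between a fixed weight vector and a quadratic feature column, all the machinery of linear adaptive filters (GD/LMS updates, convergence analysis) carries over directly, which is the point the quoted passage is making.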
“…Moreover, GD learning is very efficient, especially when used with linear filters or low-dimensional neural network architectures (predictors). The use of GD is recalled in this subsection, particularly for linear predictors (filters) and for polynomial predictors (also called higher-order neural units, HONUs [1,35–37]).…”
Section: Predictive Models and Adaptive Learning
confidence: 99%
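A minimal sketch of the sample-by-sample GD pre-training described in the two quoted passages, reusing qnu_basis from the sketch above. The stopping rule follows the quoted rule of thumb (train only while the error criterion keeps decreasing); the learning rate, epoch cap, and tolerance are illustrative assumptions:

```python
def gd_pretrain(xs, ys, mu=0.01, max_epochs=1000, tol=1e-9):
    """Sample-by-sample gradient descent for the QNU above.
    Because the QNU is linear in w, the update w += mu * e * colx
    is the classic LMS-style step for linear filters; only the
    regressor (the quadratic column colx) changes."""
    w = np.zeros(len(qnu_basis(xs[0])))
    prev_sse = np.inf
    for _ in range(max_epochs):
        sse = 0.0
        for x, y in zip(xs, ys):
            colx = qnu_basis(x)
            e = y - w @ colx          # prediction error
            w += mu * e * colx        # GD step on e**2 / 2
            sse += e * e
        if prev_sse - sse < tol:      # criterion stopped decreasing
            break
        prev_sse = sse
    return w

# Hypothetical usage on a toy quadratic target:
rng = np.random.default_rng(0)
xs = rng.normal(size=(200, 2))
ys = 1.0 + xs[:, 0] * xs[:, 1] - 0.5 * xs[:, 1] ** 2
w = gd_pretrain(xs, ys, mu=0.02)
```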