A theoretical justification for the random vector version of the functional-link (RVFL) net is presented in this paper, based on a general approach to adaptive function approximation. The approach consists of formulating a limit-integral representation of the function to be approximated and subsequently evaluating that integral with the Monte-Carlo method. Two main results are: (1) the RVFL is a universal approximator for continuous functions on bounded finite-dimensional sets, and (2) the RVFL is an efficient universal approximator, with the approximation error converging to zero at a rate of order O(C/√n), where n is the number of basis functions and C is independent of n. Similar results are also obtained for neural nets with hidden nodes implemented as products of univariate functions or as radial basis functions. Some possible ways of enhancing the accuracy of multivariate function approximations are discussed.
In this paper a new method is suggested for learning and generalization with a general one-hidden-layer feedforward neural network. The scheme uses a linear combination of heterogeneous nodes with randomly prescribed parameter values. The node parameters are learned through adaptive stochastic optimization using a generalization data set, while the linear coefficients of the combination are learned with a linear regression method using data from the training set. One node is learned at a time. The method allows the proper number of net nodes to be chosen and is computationally efficient. The method was tested on mathematical examples and on real problems from materials science and technology.
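The one-node-at-a-time scheme described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' exact procedure: here the "adaptive stochastic optimization" is simplified to random search over candidate node parameters scored on a held-out generalization set, with tanh nodes and all names invented for the example.

```python
# Incremental construction: add one node at a time; pick each node's
# parameters by stochastic search scored on a held-out set, refit the
# linear coefficients by least squares on the training set, and stop
# when the generalization error no longer improves.
import numpy as np

rng = np.random.default_rng(1)

def design(x, params):
    """Hidden-layer output matrix for a list of (weight, bias) node pairs."""
    W = np.array([p[0] for p in params])
    B = np.array([p[1] for p in params])
    return np.tanh(np.outer(x, W) + B)

def fit_incremental(x_tr, y_tr, x_va, y_va, max_nodes=20, candidates=50):
    nodes, coef, best_va = [], None, np.inf
    for _ in range(max_nodes):
        trial = None
        for _ in range(candidates):                 # stochastic parameter search
            w, b = rng.uniform(-5.0, 5.0, size=2)
            cand = nodes + [(w, b)]
            c, *_ = np.linalg.lstsq(design(x_tr, cand), y_tr, rcond=None)
            va = np.mean((design(x_va, cand) @ c - y_va) ** 2)
            if trial is None or va < trial[0]:
                trial = (va, w, b, c)
        va, w, b, c = trial
        if va >= best_va:    # no further improvement: number of nodes chosen
            break
        nodes.append((w, b))
        coef, best_va = c, va
    return nodes, coef, best_va

# Usage on a toy 1-D target, split into training and generalization sets
x = np.linspace(-1.0, 1.0, 240)
y = np.sin(3.0 * x)
idx = rng.permutation(len(x))
tr, va = idx[:160], idx[160:]
nodes, coef, best_va = fit_incremental(x[tr], y[tr], x[va], y[va])
```

The stopping rule doubles as model selection: the net keeps growing only while the held-out error keeps falling, which is how the "proper number of net nodes" is chosen.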
In this paper, an innovative neural-network architecture is proposed and elucidated. This architecture, based on Kolmogorov's superposition theorem and called the Kolmogorov's spline network (KSN), utilizes more degrees of adaptation to data than currently used neural-network architectures (NNAs). By using a cubic spline technique of approximation for both the activation and internal functions, more efficient approximation of multivariate functions can be achieved. The bounds on the approximation error and the number of adjustable parameters, derived in this paper, compare the KSN favorably with other one-hidden-layer feedforward NNAs. The training of the KSN, using the ensemble approach and the ensemble multinet, is described. A new explicit algorithm for constructing cubic splines is presented.

Index Terms: cubic splines, ensemble of networks, Kolmogorov's superposition theorem (KST).
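For background on the spline machinery the KSN builds on, here is the standard natural cubic spline construction via a tridiagonal system for the knot second derivatives. This is the textbook algorithm, not the paper's new explicit construction, and all names are illustrative.

```python
# Standard natural cubic spline interpolation (textbook construction):
# solve a tridiagonal system for the second derivatives m_k at the knots,
# then evaluate the piecewise-cubic formula on each interval.
import numpy as np

def natural_cubic_spline(xk, yk):
    """Second derivatives m_k at the knots, with natural end conditions m_0 = m_n = 0."""
    n = len(xk) - 1
    h = np.diff(xk)
    A = np.zeros((n - 1, n - 1))
    rhs = np.zeros(n - 1)
    for i in range(n - 1):                    # row i is interior knot i+1
        A[i, i] = 2.0 * (h[i] + h[i + 1])
        if i > 0:
            A[i, i - 1] = h[i]
        if i < n - 2:
            A[i, i + 1] = h[i + 1]
        rhs[i] = 6.0 * ((yk[i + 2] - yk[i + 1]) / h[i + 1]
                        - (yk[i + 1] - yk[i]) / h[i])
    m = np.zeros(n + 1)
    m[1:n] = np.linalg.solve(A, rhs)
    return m

def spline_eval(x, xk, yk, m):
    """Evaluate the spline at x (scalar or array) from knots and second derivatives."""
    i = np.clip(np.searchsorted(xk, x) - 1, 0, len(xk) - 2)
    h = xk[i + 1] - xk[i]
    t, u = xk[i + 1] - x, x - xk[i]
    return (m[i] * t**3 + m[i + 1] * u**3) / (6.0 * h) \
        + (yk[i] / h - m[i] * h / 6.0) * t \
        + (yk[i + 1] / h - m[i + 1] * h / 6.0) * u

# Example: interpolate sin at 9 knots on [0, pi]
xk = np.linspace(0.0, np.pi, 9)
yk = np.sin(xk)
m = natural_cubic_spline(xk, yk)
```

The natural end conditions (zero second derivative at both boundary knots) make the system square and uniquely solvable.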
In this article a new neural-network architecture suitable for learning and generalization is discussed and developed. Although similar to the radial basis function (RBF) net, our computational model, called the net with complex weights (CWN), has demonstrated a considerable gain in performance and efficiency in a number of applications compared to the RBF net. Its better performance in classification tasks is explained by the cross-product terms, introduced parsimoniously, in the internal representation of its basis functions. Implementation of the CWN by the ensemble approach is described. A number of examples, solved using the CWN and other networks, are used to illustrate the desirable characteristics of the CWN.