Generalized correlation higher order neural network designs are developed. Their performance is compared with that of first order networks, conventional higher order neural network designs, and higher order linear regression networks for financial time series prediction. The correlation higher order neural network design is shown to give the highest accuracy for the prediction of stock market share prices and share indices. The simulations compare performance across three different training algorithms, stationary versus non-stationary input data, different numbers of neurons in the hidden layer, and several generalized correlation higher order neural network designs. Generalized correlation higher order linear regression networks are also introduced, and two designs are shown by simulation to give good correct-direction prediction and higher prediction accuracy, particularly for long-term predictions, than other linear regression networks for the prediction of inter-bank lending (Libor) and Swap interest rate yield curves. The simulations compare performance across different input data sample lag lengths.
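The core idea behind a correlation higher order network is to augment the raw inputs with product (correlation) terms before a linear or neural stage. As a minimal illustrative sketch (not the chapter's exact CHONN design, whose specific architecture is not given here), a second-order expansion with pairwise products, followed by ordinary least squares, yields a higher order linear regression network:

```python
import numpy as np

def second_order_features(x):
    """Augment an input vector with second-order correlation terms
    (pairwise products x_i * x_j, i <= j). Hypothetical helper used
    only to illustrate the higher-order feature idea."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    pairs = [x[i] * x[j] for i in range(n) for j in range(i, n)]
    return np.concatenate([x, pairs])

def fit_honn_regression(X, y):
    """Higher order *linear regression* network: ordinary least
    squares on the expanded feature set, with a bias column."""
    Phi = np.vstack([second_order_features(row) for row in X])
    Phi = np.hstack([np.ones((Phi.shape[0], 1)), Phi])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w
```

Because the expansion includes products of inputs, a target that depends on an interaction such as `x0 * x1` becomes exactly linear in the expanded features, which is why such designs can outperform plain linear regression on lagged financial inputs.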
Previous research suggested that Artificial Neural Network (ANN) operation in a limited precision environment was particularly sensitive to the precision and could not take place below a certain threshold level of precision. This study uses simulation to investigate the training of networks with the Back Propagation (BP) and Levenberg-Marquardt algorithms in limited precision while achieving high overall calculation accuracy. The simulations use on-line training, a new type of Higher Order Neural Network (HONN) known as the Correlation HONN (CHONN), and both a discrete XOR dataset and a continuous optical waveguide sidewall roughness dataset, to find the precision at which training and operation remain feasible. The BP algorithm converged at a precision beyond which performance did not improve. The results support previous findings in the literature for ANN operation that discrete datasets require lower precision than continuous datasets. The importance of our findings is that they demonstrate the feasibility of on-line, real-time, low-latency training on limited precision electronic hardware such as Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs) to achieve high overall operational accuracy.
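Limited-precision training can be emulated in software by snapping every weight to a fixed-point grid after each on-line update. The sketch below, a hedged illustration rather than the chapter's exact experimental setup (the number format and network sizes are assumptions), trains a tiny 2-2-1 sigmoid network on the discrete XOR dataset with weights quantized to a chosen number of fractional bits:

```python
import numpy as np

def quantize(w, frac_bits):
    """Round to a fixed-point grid with `frac_bits` fractional bits,
    emulating limited-precision hardware such as a DSP or FPGA."""
    scale = 2.0 ** frac_bits
    return np.round(np.asarray(w) * scale) / scale

def train_xor_limited_precision(frac_bits=10, lr=0.5, epochs=2000, seed=0):
    """On-line BP for a 2-2-1 sigmoid network on XOR; every weight is
    quantized after each per-sample update. Returns the four outputs."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0.0, 1.0, 1.0, 0.0])
    W1 = quantize(rng.normal(size=(2, 2)), frac_bits)
    b1 = np.zeros(2)
    W2 = quantize(rng.normal(size=2), frac_bits)
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        for xi, yi in zip(X, y):           # on-line (per-sample) training
            h = sig(xi @ W1 + b1)
            o = sig(h @ W2 + b2)
            d_o = (o - yi) * o * (1 - o)   # output-layer delta
            d_h = d_o * W2 * h * (1 - h)   # hidden-layer deltas
            W2 = quantize(W2 - lr * d_o * h, frac_bits)
            b2 = quantize(b2 - lr * d_o, frac_bits)
            W1 = quantize(W1 - lr * np.outer(xi, d_h), frac_bits)
            b1 = quantize(b1 - lr * d_h, frac_bits)
    return sig(sig(X @ W1 + b1) @ W2 + b2)
```

Sweeping `frac_bits` downward in such a harness is one way to locate the precision threshold below which training fails, in the spirit of the simulations described above.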
This chapter celebrates 50 years of first and higher order neural network (HONN) implementations in terms of the physical layout and structure of electronic hardware, which offers high-speed, low-latency, compact, low-cost, low-power, mass-produced systems. Low latency is essential for practical applications in real-time control, for which software implementations running on CPUs are too slow. The literature review traces the chronological development of electronic neural networks (ENNs), discussing selected papers in detail, from analog electronic hardware through probabilistic RAM, generalizing RAM, custom silicon Very Large Scale Integrated (VLSI) circuits, neuromorphic chips, and pulse-stream interconnected neurons to Application Specific Integrated Circuits (ASICs) and Zero Instruction Set Chips (ZISCs). Reconfigurable Field Programmable Gate Arrays (FPGAs) are given particular attention, as the most recent generation incorporates Digital Signal Processing (DSP) units to provide full System on Chip (SoC) capability, offering the possibility of real-time, on-line, and on-chip learning.