Neural networks have been studied for almost half a century and have become one of the predominant methods used in intelligent systems. During this time, much progress has been made in improving the accuracy and expanding the capabilities of neural networks. This thesis investigates a different direction: reducing the computational requirements of neural networks to make them more suitable for implementation on very low-end microcontrollers and DSPs. The goal is to understand the trade-offs in cost, accuracy, and execution time on these low-cost processors. To do this, two tests are performed. The first compares the execution speed of a simple neural network on low-cost hardware, demonstrating the advantages of using integer neural networks and DSP operations. The second compares the accuracy of an integer neural network against that of a floating-point neural network; it uses a real-world example and allows multiple levels of quantization to be tested. The results show the effects of quantization caused by the use of integers, and demonstrate that there is a strong case for using integer neural networks on low-cost microcontrollers: significant cost savings can be achieved in exchange for a very small reduction in accuracy.