<p>Pedestrian detection, face detection, speech recognition, and object detection are among the applications that have benefited from hardware-accelerated SVM. The computational complexity of SVM classification makes it challenging to design hardware architectures with real-time performance and low power consumption. In an embedded streaming architecture, test data are stored in external memory and transferred in streams to the FPGA device, so the hardware implementation of SVM classification must be fast enough to keep up with the data transfer rate. Prior implementations throttle the input data stream to avoid overwhelming the computational unit, which creates a bottleneck in the overall streaming architecture because maximum throughput cannot be obtained. In this work, we propose a fully pipelined streaming architecture for multi-class SVM classification on embedded systems that processes data continuously without any need to throttle the input stream. The proposed design targets embedded platforms where test data are transferred in streams from external memory. The architecture was implemented on an Altera Cyclone IV device. We analyze the performance of the proposed architecture with respect to the number of features and support vectors, and validate the results against LibSVM. The proposed architecture produces output at a rate identical to the test data input rate.</p>
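The multi-class classification validated against LibSVM above can be sketched in software. The following is a minimal model of one-vs-one voting (the multi-class scheme LibSVM uses) applied to a stream of test vectors; all function and variable names here are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def binary_decision(x, svs, coefs, b):
    # Linear-kernel SVM decision value: f(x) = sum_i coef_i * <sv_i, x> + b,
    # where coef_i = alpha_i * y_i for support vector sv_i.
    return float(coefs @ (svs @ x) + b)

def classify_stream(stream, classifiers):
    # One-vs-one voting: each pairwise classifier (ci, cj) casts a vote for
    # the class on its positive or negative side; the class with the most
    # votes wins. Samples are consumed one at a time, as in a data stream.
    for x in stream:
        votes = {}
        for (ci, cj), (svs, coefs, b) in classifiers.items():
            winner = ci if binary_decision(x, svs, coefs, b) > 0 else cj
            votes[winner] = votes.get(winner, 0) + 1
        yield max(votes, key=votes.get)
```

In the hardware architecture each test vector would arrive from external memory at the stream rate; the point of the fully pipelined design is that this per-sample loop never forces the producer to stall.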
A Support Vector Machine (SVM) is a linear binary classifier that requires a kernel function to handle non-linear problems. Most previous SVM implementations for embedded systems in the literature were built for a specific application and analyzed only by comparison with software implementations; the impact of different application datasets on SVM hardware performance was not examined. In this work, we propose a parameterizable, fully pipelined linear kernel architecture. It is prototyped and analyzed on an Altera Cyclone IV platform, and the results are verified against an equivalent software model. We further analyze the effect of the number of features and support vectors on the performance of the hardware architecture. In our linear kernel implementation, the number of features determines the maximum operating frequency and the amount of logic resource utilization, whereas the number of support vectors determines the amount of on-chip memory usage and the throughput of the system.
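The reported trends can be illustrated with a toy first-order model. The datapath assumed below (parallel multipliers feeding a binary adder tree, one support-vector dot product issued per cycle) is our own simplifying assumption for illustration, not the paper's actual architecture.

```python
import math

def pipeline_model(n_features, n_svs, clock_mhz):
    # Feature count drives logic: n_features parallel multipliers feed a
    # binary adder tree of depth ceil(log2(n_features)), so more features
    # mean deeper reduction logic (and, in hardware, longer critical paths).
    adder_depth = math.ceil(math.log2(n_features))
    latency_cycles = 1 + adder_depth  # multiply stage + adder-tree stages
    # Support-vector count drives memory and throughput: all support vectors
    # are held on chip, and each test sample occupies the pipeline for one
    # dot product per support vector.
    memory_words = n_svs * n_features
    throughput_sps = clock_mhz * 1e6 / n_svs  # classified samples per second
    return latency_cycles, memory_words, throughput_sps
```

For example, under these assumptions a design with 8 features and 100 support vectors has a 4-cycle kernel latency, stores 800 words on chip, and classifies one sample every 100 cycles.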
Hardware Transactional Memory (HTM) performance is application-specific and depends on its version management and conflict management configurations. An adaptive mechanism is needed to adjust these configurations based on runtime application behaviour.