2006 IEEE International Conference on Reconfigurable Computing and FPGA's (ReConFig 2006)
DOI: 10.1109/reconf.2006.307784
An FPGA Implementation of Linear Kernel Support Vector Machines

Cited by 28 publications (19 citation statements)
References 4 publications
“…However, this increases the cycles needed to process a single vector. Hence, works that utilize such architectures have optimized them specifically for the vector dimensionality of the given problem and have been restricted to small-scale data, with only a few hundred vectors and low dimensionality [9], [19], [20], and to small-scale multiclass implementations [21], in order to meet real-time constraints. In addition, these architectures cannot trade off processing more SVs rather than vector elements, and hence cannot efficiently deal with the differing computational demands of the cascade SVM stages.…”
Section: Related Work
confidence: 99%
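To make the trade-off in the statement above concrete, here is a minimal NumPy sketch (toy names and data, not from the cited works) contrasting the support-vector form of a linear-kernel SVM, whose cost grows with both the number of SVs and the vector dimensionality, with the equivalent precomputed-weight form that only depends on dimensionality:

```python
import numpy as np

def svm_decision_sv_form(x, svs, alphas, labels, b):
    """Linear-kernel SVM in support-vector form.

    Cost is O(num_SVs * dim): a datapath that consumes a few vector
    elements per cycle needs cycles proportional to both the SV count
    and the dimensionality, which is the restriction noted above.
    """
    acc = 0.0
    for sv, alpha, y in zip(svs, alphas, labels):
        acc += alpha * y * np.dot(sv, x)   # one kernel evaluation per SV
    return np.sign(acc + b)

def svm_decision_weight_form(x, w, b):
    """Equivalent classifier with w = sum_i alpha_i * y_i * sv_i folded
    offline; evaluation cost is O(dim) regardless of the SV count."""
    return np.sign(np.dot(w, x) + b)
```

The two forms give identical decisions for a linear kernel; only the first generalizes to nonlinear kernels, which is why the number of SVs matters for the cascade stages discussed in the citing work.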
“…This has motivated a lot of research towards accelerating SVMs using parallel computing platforms such as Graphics Processing Units (GPUs) [7] and Field-Programmable Gate Arrays (FPGAs) [8], [9], [10]. Implementations of SVMs on GPU platforms have been proposed recently; however, GPUs face challenges with regard to power consumption [11], and it is thus difficult to deploy them in embedded environments.…”
Section: Introduction
confidence: 99%
“…In [11], the authors presented an FPGA implementation of an SVM classifier targeting a brain–computer interface, which requires real-time decisions. The design relies on performing the training offline using the LibSVM MATLAB extension with linear kernels; the resulting training coefficients, along with the SVs, are then made available to the architecture.…”
Section: Relevant Work
confidence: 99%
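The cited design trains offline with the LibSVM MATLAB extension; as a rough illustration of the same workflow, the sketch below substitutes Python's scikit-learn SVC (also libsvm-backed) and hypothetical toy data to show which quantities (SVs, dual coefficients, bias) would be exported to the hardware, and how a linear kernel lets them be folded into a single weight vector:

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in data; the cited work used its own BCI dataset.
X = np.random.randn(200, 6)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

clf = SVC(kernel="linear").fit(X, y)   # libsvm-backed training, done offline

# Quantities that would be handed to the hardware architecture:
svs = clf.support_vectors_             # support vectors
coefs = clf.dual_coef_                 # alpha_i * y_i for each SV
bias = clf.intercept_[0]

# For a linear kernel the SVs and coefficients can be folded into a
# single weight vector, so the classifier only needs w and b online.
w = (coefs @ svs).ravel()
assert np.allclose(clf.decision_function(X), X @ w + bias)
```

This folding step is what makes a linear-kernel classifier particularly FPGA-friendly: the online datapath reduces to one dot product plus a bias, with no per-SV kernel evaluations.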
“…However, for one of the cases, the FPGA performed worse in terms of processing speed, consuming twice as much time as the GPP. In addition, the FPGA architecture was not scalable, limiting the implementation to only six dimensions [11].…”
Section: Relevant Work
confidence: 99%
“…A few surveys of neural network hardware have been published in [30]–[34], where several hundred works are listed. Support vector machine (SVM) hardware also appears attractive [35]–[41]. As Heemskerk [30] indicates, the hardware implementation of neural networks is expensive in terms of development time and hardware resources.…”
Section: Introduction
confidence: 99%