2011
DOI: 10.4236/jsea.2011.45036

FPGA Simulation of Linear and Nonlinear Support Vector Machine

Abstract: A simple hardware architecture for implementing pairwise Support Vector Machine (SVM) classifiers on an FPGA is presented. The training phase of the SVM is performed offline, and the extracted parameters are used to implement the testing phase of the SVM in hardware. In the architecture, the vector multiplication operation and the classification of the pairwise classifiers are performed in parallel and simultaneously. For realization, a dataset of Persian handwritten digits in three different classes is used for training an…
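The testing-phase flow the abstract outlines (offline training, a bank of pairwise classifiers voting in parallel) can be sketched in software. The following is a minimal illustration; all parameter names, shapes, and values are hypothetical assumptions, not taken from the paper:

```python
# Minimal software sketch of a pairwise (one-vs-one) SVM testing phase.
# Training happens offline; only the extracted parameters (support vectors,
# signed coefficients alpha_i * y_i, biases) are used at classification time.
# All names, shapes, and values below are illustrative assumptions.
import numpy as np

def pairwise_svm_classify(x, classifiers, num_classes):
    """Majority vote over all K*(K-1)/2 binary SVM decisions."""
    votes = np.zeros(num_classes, dtype=int)
    for (a, b), (sv, coef, bias) in classifiers.items():
        # Linear-kernel decision value: sum_i coef_i * <sv_i, x> + bias.
        # On the FPGA these dot products run in parallel across classifiers.
        decision = coef @ (sv @ x) + bias
        votes[a if decision >= 0 else b] += 1
    return int(np.argmax(votes))

# Hypothetical 3-class setup (e.g., three Persian handwritten digit classes):
rng = np.random.default_rng(0)
classifiers = {
    (a, b): (rng.normal(size=(4, 8)),  # 4 support vectors, 8 features each
             rng.normal(size=4),       # signed coefficients alpha_i * y_i
             rng.normal())             # bias
    for a in range(3) for b in range(a + 1, 3)
}
print(pairwise_svm_classify(rng.normal(size=8), classifiers, num_classes=3))
```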

Cited by 49 publications (20 citation statements) | References 12 publications (14 reference statements)

“…However, this increases the cycles needed to process a single vector. Hence, works that use such architectures have optimized them specifically for the vector dimensionality of the given problem and have been restricted to small-scale data, with only a few hundred vectors of low dimensionality [9], [19], [20], and to small-scale multiclass implementations [21], in order to meet real-time constraints. In addition, these architectures cannot trade off processing more SVs rather than more vector elements, and hence cannot efficiently deal with the different computational demands of the cascade SVM stages.…”
Section: Related Work (mentioning)
confidence: 99%
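The folding trade-off this excerpt describes is easy to see in a cycle-level sketch: with a fixed number of parallel multipliers, an n-element dot product takes ceil(n / fold) cycles instead of one. The fold width and names below are illustrative assumptions, not taken from the cited designs:

```python
# Cycle-level sketch of a "folded" dot-product datapath: with only `fold`
# parallel multipliers, an n-element vector needs ceil(n / fold) cycles.
from math import ceil

def folded_dot_product(sv, x, fold=8):
    """Accumulate <sv, x> in groups of `fold` elements per 'cycle'."""
    assert len(sv) == len(x)
    acc, cycles = 0.0, 0
    for start in range(0, len(x), fold):
        # One hardware cycle: `fold` parallel multiplies plus an adder tree.
        acc += sum(s * e for s, e in zip(sv[start:start + fold],
                                         x[start:start + fold]))
        cycles += 1
    assert cycles == ceil(len(x) / fold)
    return acc, cycles

# A 100-element vector (the low-dimensionality regime mentioned above)
# takes 13 cycles with 8 multipliers:
value, cycles = folded_dot_product(list(range(100)), [1.0] * 100, fold=8)
print(value, cycles)  # 4950.0 13
```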
“…These approaches include using CORDIC algorithms to compute the kernel functions [10], [19], [24], [25]. However, low-resource implementations of CORDIC algorithms have increased latency [10].…”
Section: Related Work (mentioning)
confidence: 99%
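For context, a hyperbolic-mode CORDIC can evaluate exp(z) (as cosh z + sinh z) using only shifts and adds, which is why it suits low-resource kernel evaluation, and its iteration-per-result structure is exactly where the latency comes from. The sketch below is a generic floating-point illustration, not the fixed-point datapath of any cited work; convergence holds only for roughly |z| < 1.11, so a real design would range-reduce the kernel argument first:

```python
import math

def cordic_exp(z, iters=16):
    """Hyperbolic-mode CORDIC: returns exp(z) as cosh(z) + sinh(z).
    Each loop iteration is one pipeline stage, or one clock cycle in a
    compact single-stage implementation (hence the latency)."""
    # Iteration indices 4, 13, 40, ... must be repeated for convergence.
    seq, i, rep = [], 1, 4
    while len(seq) < iters:
        seq.append(i)
        if i == rep:
            seq.append(i)
            rep = 3 * rep + 1
        i += 1
    seq = seq[:iters]
    # Precomputable gain of the performed micro-rotations.
    gain = math.prod(math.sqrt(1.0 - 2.0 ** (-2 * k)) for k in seq)
    x, y = 1.0 / gain, 0.0
    for k in seq:
        d = 1.0 if z >= 0.0 else -1.0
        # Shift-and-add micro-rotation (2**-k is a wire shift in hardware).
        x, y = x + d * y * 2.0 ** -k, y + d * x * 2.0 ** -k
        z -= d * math.atanh(2.0 ** -k)
    return x + y

print(cordic_exp(-0.5), math.exp(-0.5))  # ~0.6065 for both
```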
“…When the vector dimensionality is high and the hardware resources are not available for fully parallel processing, the architecture can be folded to process the elements in groups; however, this increases the cycles needed to process a single vector. Hence, works that use such architectures have optimized them specifically for the vector dimensionality of the given problem and have been restricted to small-scale data, with only a few hundred vectors of low dimensionality (~100 elements) [10], [24], [25], and to small-scale multiclass implementations [26], in order to meet real-time constraints. In addition, these architectures cannot trade off processing more SVs rather than more vector elements, and hence cannot efficiently deal with the different computational demands of the cascade SVM stages.…”
Section: Related Work (mentioning)
confidence: 99%
“…These approaches include using CORDIC algorithms to compute the kernel functions [11], [24], [27], [30], [31]. However, the iterative operation of these algorithms makes it challenging to achieve high performance for applications that require high data throughput, such as object detection, since compact CORDIC implementations that require less hardware have increased latency [32].…”
Section: Related Work (mentioning)
confidence: 99%
“…The most time- and resource-consuming part of the classification algorithm is the accumulation of the vector multiplications in the main decision function (1) with a linear kernel (dot product) [34].…”
Section: The Employed SVM Classifier (mentioning)
confidence: 99%
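With a linear kernel, the decision function this excerpt refers to, f(x) = sign(Σ_i α_i y_i ⟨sv_i, x⟩ + b), collapses offline into a single weight vector w = Σ_i α_i y_i sv_i, so the hardware's hot loop is one multiply-accumulate chain. A small sketch with illustrative (not paper-derived) parameters:

```python
# Linear-kernel SVM decision: the support-vector sum is collapsed offline
# into one weight vector, leaving a single MAC chain at test time.
import numpy as np

def linear_svm_decision(w, b, x):
    acc = b
    for wi, xi in zip(w, x):  # one multiply-accumulate per vector element
        acc += wi * xi
    return 1 if acc >= 0 else -1

# Offline collapse of hypothetical trained parameters:
alpha_y = np.array([0.5, -0.3, 0.8])  # signed coefficients alpha_i * y_i
sv = np.array([[1., 0., 2.], [0., 1., 1.], [2., 2., 0.]])
w, b = alpha_y @ sv, -0.25            # w = sum_i alpha_i * y_i * sv_i
print(linear_svm_decision(w, b, np.array([1.0, -1.0, 0.5])))
```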