2013
DOI: 10.1145/2514641.2514649

Parallel architectures for the kNN classifier -- design of soft IP cores and FPGA implementations

Abstract: We designed a variety of k-nearest-neighbor parallel architectures for FPGAs in the form of parameterizable soft IP cores. We show that they can be used to solve large classification problems with thousands of training vectors, or thousands of vector dimensions using a single FPGA, and achieve very high throughput. They can be used to flexibly synthesize architectures that also cover: 1NN classification (vector quantization), multishot queries (with different k), LOOCV cross-validation, and compare favorably t…
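As a functional reference for the variants the abstract lists, here is a minimal Python sketch of kNN classification (k = 1 reduces to 1NN / vector quantization) and LOOCV. The FPGA cores evaluate these distance computations in parallel; this sequential model and its function names are illustrative only, not taken from the paper.

```python
import numpy as np

def knn_classify(train_x, train_y, queries, k=1):
    """Reference kNN classifier: each query votes among its k nearest
    training vectors; k=1 reduces to 1NN (vector quantization)."""
    preds = []
    for q in queries:
        d = np.sum((train_x - q) ** 2, axis=1)   # squared Euclidean distances
        nearest = np.argsort(d)[:k]              # indices of the k smallest
        preds.append(np.bincount(train_y[nearest]).argmax())  # majority vote
    return np.array(preds)

def loocv_error(train_x, train_y, k=1):
    """Leave-one-out cross-validation: classify every training vector
    against all the others and report the error rate."""
    n = len(train_x)
    errors = 0
    for i in range(n):
        mask = np.arange(n) != i
        pred = knn_classify(train_x[mask], train_y[mask], train_x[i:i + 1], k)[0]
        errors += int(pred != train_y[i])
    return errors / n
```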

Cited by 28 publications (9 citation statements)
References 18 publications
Citing publications span 2016–2024
“…There have been several proposed architectures that accelerate machine learning algorithms [21, 30, 34–46]. However, TABLA fundamentally differs from these works, as it is not an accelerator.…”
Section: Related Work
confidence: 99%
“…First, previous FPGA designs are generally built for a specific distance-related algorithm and hardware platform. For example, the works in [4–6] target KNN FPGA acceleration, while [7, 13, 14] focus on K-means. Moreover, previous designs [4, 5] usually assume that the dataset fits entirely into the FPGA's on-chip memory, and they are evaluated only on a limited number of small datasets; for example, in [5], K-means acceleration is evaluated on a micro-array dataset with only 2,905 points.…”
Section: B. Hardware Acceleration
confidence: 99%
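The "fits into on-chip memory" assumption criticized in the statement above comes down to a simple capacity check. A back-of-the-envelope sketch follows; the word width, dimensionality, and BRAM budget are assumed values, not figures from the cited designs.

```python
def fits_on_chip(n_points, dims, bits_per_feature=16, bram_kbit=4320):
    """Estimate whether a training set fits in FPGA block RAM.
    All figures are illustrative assumptions: 16-bit features and
    roughly 4.3 Mbit of BRAM, which is device-specific."""
    needed_kbit = n_points * dims * bits_per_feature / 1024
    return needed_kbit <= bram_kbit, needed_kbit

# The micro-array dataset mentioned above has 2,905 points; the
# dimensionality assumed here (32) is purely hypothetical.
ok, kbit = fits_on_chip(2905, dims=32)
print(ok, f"{kbit:.0f} kbit needed")
```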
“…The second problem with previous works is that they fail to incorporate algorithmic optimizations into the hardware design. For example, the works in [4, 6, 7, 13] directly port the standard K-means and KNN algorithms to FPGA and apply only hardware-level optimization. One exception is a recent work [22], which proposes combining TI optimization with FPGA acceleration for K-means.…”
Section: B. Hardware Acceleration
confidence: 99%
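Assuming "TI optimization" in the statement above refers to triangle-inequality pruning in the spirit of Elkan's K-means, the core idea can be sketched as follows. This illustrates only the pruning rule, not the actual design of [22].

```python
import numpy as np

def assign_with_ti(points, centers):
    """One K-means assignment step that uses the triangle inequality
    to skip distance computations. If d(c_best, c_j) >= 2 * d(x, c_best),
    then d(x, c_j) >= d(c_best, c_j) - d(x, c_best) >= d(x, c_best),
    so center j cannot be strictly closer and its distance is never computed."""
    # pairwise center-to-center distances, computed once per assignment step
    cc = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    labels = np.empty(len(points), dtype=int)
    for i, x in enumerate(points):
        best = 0
        d_best = np.linalg.norm(x - centers[0])
        for j in range(1, len(centers)):
            if cc[best, j] >= 2.0 * d_best:
                continue                      # pruned: center j cannot win
            d = np.linalg.norm(x - centers[j])
            if d < d_best:
                best, d_best = j, d
        labels[i] = best
    return labels
```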
“…They report as the main shortcomings of these works that they are not general-purpose, perform classification of only one sample at a time, work with custom fixed-point numerical representations, and require considerable design changes when there are changes in the parameters or the dataset. Works focused on the implementation of general-purpose KNN accelerators can be found in [20, 21, 25].…”
Section: Introduction
confidence: 99%
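The single-sample-at-a-time shortcoming quoted above has a straightforward batched alternative, which an accelerator would realize by pipelining or replicating distance units. A minimal numpy sketch of the batched formulation:

```python
import numpy as np

def knn_classify_batch(train_x, train_y, queries, k=3):
    """Classify a whole batch of queries in one pass instead of one
    sample at a time. The expansion ||q - t||^2 = ||q||^2 - 2 q.t + ||t||^2
    yields all query-to-training distances as a single matrix product."""
    d2 = (np.sum(queries ** 2, axis=1, keepdims=True)
          - 2.0 * (queries @ train_x.T)
          + np.sum(train_x ** 2, axis=1))          # (Q, N) squared distances
    idx = np.argpartition(d2, k, axis=1)[:, :k]    # k nearest per query, unordered
    return np.array([np.bincount(v).argmax() for v in train_y[idx]])
```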