2012 International Conference on Field-Programmable Technology
DOI: 10.1109/fpt.2012.6412146
VENICE: A compact vector processor for FPGA applications

Abstract: This paper presents VENICE, a new soft vector processor (SVP) for FPGA applications. VENICE differs from previous SVPs in that it was designed for maximum throughput with a small number (1 to 4) of ALUs. By increasing clock speed and eliminating bottlenecks in ALU utilization, VENICE can achieve over 2x better performance per logic block than VEGAS, the previous best SVP. While VENICE can scale to a large number of ALUs, a multiprocessor system of smaller VENICE SVPs is shown to scale better for benchm…
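To illustrate the kind of programming model an SVP such as VENICE exposes, the sketch below shows a chunked vector addition written against a hypothetical soft-vector API; the names (svp_set_vl, svp_vadd_word, the 256-element tile size) are illustrative assumptions and do not correspond to VENICE's actual instruction set.

```c
/* Minimal sketch of soft-vector-processor style code, modeled in plain C.
 * All identifiers (svp_set_vl, svp_vadd_word) are hypothetical stand-ins
 * for an SVP's C API, not VENICE's real interface. */
#include <stddef.h>
#include <stdint.h>

static size_t svp_vl;                          /* current vector length     */
static void svp_set_vl(size_t vl) { svp_vl = vl; }

/* One "vector instruction": the SVP would stream this through its ALUs. */
static void svp_vadd_word(int32_t *dst, const int32_t *a, const int32_t *b)
{
    for (size_t i = 0; i < svp_vl; i++)
        dst[i] = a[i] + b[i];
}

/* Add two n-element arrays in scratchpad-sized chunks. */
void vec_add(int32_t *dst, const int32_t *a, const int32_t *b, size_t n)
{
    const size_t CHUNK = 256;                  /* assumed scratchpad tile   */
    for (size_t i = 0; i < n; i += CHUNK) {
        size_t vl = (n - i < CHUNK) ? (n - i) : CHUNK;
        svp_set_vl(vl);                        /* set vector length         */
        svp_vadd_word(dst + i, a + i, b + i);  /* issue vector add          */
    }
}
```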

Cited by 37 publications (23 citation statements) · References 11 publications

Citation statements, ordered by relevance:
“…Apart from those reviewed earlier, none have been optimised for machine learning applications. Similar to previous soft vector processors [23], [24], [25], [26], the KRLS processor architecture is scalable [23]. Since all vector memory is on-chip, as the number of lanes is increased, the maximum vector memory depth is reduced [24].…”
Section: A System Overview
Confidence: 93%
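The trade-off noted in this citation, that with a fixed on-chip vector memory adding lanes shrinks the maximum vector depth available to each lane, amounts to a simple division; the sketch below works it through with an assumed (purely illustrative) scratchpad capacity and element width.

```c
#include <stdio.h>

/* With all vector memory on-chip, total scratchpad capacity is fixed, so
 * the maximum vector depth per lane falls as lanes are added. The 256 KiB
 * capacity and 32-bit elements below are assumed for illustration only. */
int main(void)
{
    const unsigned total_bytes   = 256 * 1024; /* assumed on-chip capacity */
    const unsigned element_bytes = 4;          /* 32-bit vector elements   */

    for (unsigned lanes = 1; lanes <= 16; lanes *= 2) {
        unsigned depth = total_bytes / (lanes * element_bytes);
        printf("%2u lanes -> max vector depth %u elements per lane\n",
               lanes, depth);
    }
    return 0;
}
```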
“…With some exceptions, such as [22], most well-known previous soft vector processors [23], [24], [25], [26], have not supported floating-point operations. Apart from those reviewed earlier, none have been optimised for machine learning applications.…”
Section: A System Overview
Confidence: 99%
“…Examples of such systems include VESPA [1], VEGAS [2], VENICE [3], iDEA [4], [5], Octavo [6], and others [7]- [11]. In general, overlays provide parallelism through "tiling" (duplicating in two dimensions) computing elements such as datapaths and soft processors.…”
Section: Introduction
Confidence: 99%
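To make the "tiling" idea in this citation concrete, the sketch below models a 2D grid of identical processing elements in plain C and splits a workload across them; the grid dimensions, slice size, and the pe_run function are hypothetical, chosen only to illustrate how an overlay duplicates a computing element in two dimensions.

```c
#include <stdio.h>

#define ROWS 4                      /* assumed overlay grid: 4 x 4 tiles   */
#define COLS 4
#define SLICE 8                     /* assumed items handled per PE        */

/* One hypothetical processing element (PE): here it just sums its slice. */
static long pe_run(const int *data, int count)
{
    long acc = 0;
    for (int i = 0; i < count; i++)
        acc += data[i];
    return acc;
}

int main(void)
{
    int work[ROWS * COLS * SLICE];
    for (int i = 0; i < ROWS * COLS * SLICE; i++)
        work[i] = i;

    long total = 0;
    /* Tiling: the same PE is replicated across a 2D grid, and each copy
     * processes its own slice of the data independently.                 */
    for (int r = 0; r < ROWS; r++) {
        for (int c = 0; c < COLS; c++) {
            int pe_index = r * COLS + c;
            total += pe_run(&work[pe_index * SLICE], SLICE);
        }
    }

    printf("grid total = %ld\n", total);
    return 0;
}
```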
“…Keywords: Field programmable gate arrays; Intellectual property; Single instruction multiple data; System-on-Chip; Intensive signal processing. Abstract: Massively parallel architectures are proposed as a promising solution to speed up data-intensive applications and provide the required computational power. In particular, Single Instruction Multiple Data (SIMD) many-core architectures have been adopted for multimedia and signal processing applications with massive amounts of data parallelism where both performance and flexible programmability are important metrics.…”
Confidence: 99%