2019
DOI: 10.1049/iet-cds.2018.5556
Compact and high‐throughput parameterisable architectures for memory‐based FFT algorithms

Abstract: Designers must carefully choose the best-suited fast Fourier transform (FFT) algorithm among the various available techniques for a custom implementation that meets their design requirements, such as throughput, latency, and area. This article, to the best of the authors' knowledge, is the first to present a compact yet high-throughput parameterisable hardware architecture for implementing different FFT algorithms, including radix-2, radix-4, radix-8, mixed-radix, and split-radix algorithms. The designed archite…

Cited by 5 publications (4 citation statements)
References 16 publications (32 reference statements)
“…According to the formula above, the DFT of a discrete sequence x(n) at N points obtains the corresponding frequency-domain values X(k) by summing the products of the sequence x(n) and the rotation (twiddle) factor W_N^{nk}. The FFT recursively decomposes the DFT of a long sequence into DFTs of shorter sequences by exploiting the periodicity and symmetry of the rotation factor W, thereby reducing the amount of computation at each stage [15].…”
Section: System Architecture
confidence: 99%
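The decomposition described in the statement above can be sketched in a few lines of Python: a direct O(N²) DFT using the rotation factor W_N = exp(-2jπ/N), next to a recursive radix-2 FFT that splits the sequence into even- and odd-indexed halves and reuses the symmetry W_N^{k+N/2} = -W_N^k at each stage. This is a generic illustration of the radix-2 principle, not the parameterisable hardware architecture of the cited paper.

```python
import cmath

def dft(x):
    """Direct O(N^2) DFT: X[k] = sum_n x[n] * W_N^(n*k), W_N = exp(-2j*pi/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def fft_radix2(x):
    """Recursive radix-2 FFT (N must be a power of two).

    Splits the DFT into even/odd halves and exploits the symmetry
    W_N^(k + N/2) = -W_N^k, so each stage needs only N/2 twiddle products.
    """
    N = len(x)
    if N == 1:
        return list(x)
    even = fft_radix2(x[0::2])   # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])    # DFT of odd-indexed samples
    X = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]  # twiddle * odd term
        X[k] = even[k] + t
        X[k + N // 2] = even[k] - t                     # symmetry of W_N
    return X
```

Comparing both on the same 8-point input confirms that the recursive decomposition reproduces the direct DFT while cutting the multiply count from N² to (N/2)·log₂N.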
“…Some researchers prefer pipeline FFT architectures for their continuous data flow and high speed [1, 3-5, 15]. Other researchers employ memory-based FFT architectures because of their lower resource requirements and smaller occupied chip area [6, 12, 14, 19]. Low resource and power usage is especially important for hand-held, battery-powered devices.…”
Section: Introduction
confidence: 99%
“…Researchers try to improve performance by increasing parallelism (reading/writing larger chunks, higher-radix FFTs), by proposing various memory-addressing schemes, by efficient data feed-in/feed-out to and from the memory structure, and/or by reducing the resource (memory, chip area, etc.) requirements [11, 17, 19]. The studies of Xiao et al. [20, 21] improved the address-generation logic so that its critical path is independent of the transform size, which the authors therefore recommend for large transforms.…”
Section: Introduction
confidence: 99%
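One well-known memory-addressing idea behind conflict-free memory-based FFTs (a generic illustration, not necessarily the exact scheme of Xiao et al. [20, 21]) is to assign each address to a bank by the XOR parity of its address bits. In an in-place radix-2 FFT, the two operands of a butterfly at stage s live at addresses a and a ^ (1 << s); these differ in exactly one bit, so their parities differ and the two reads/writes never collide in the same bank:

```python
def bank(addr: int) -> int:
    """Bank index = XOR (parity) of the address bits.

    Butterfly partners a and a ^ (1 << s) differ in exactly one bit,
    so they always map to opposite banks (0 vs 1): conflict-free access
    with a two-bank memory, independent of the transform size.
    """
    p = 0
    while addr:
        p ^= addr & 1
        addr >>= 1
    return p

# Verify conflict-freedom for every stage of a 16-point in-place FFT.
N, stages = 16, 4
for s in range(stages):
    for a in range(N):
        partner = a ^ (1 << s)          # butterfly partner at stage s
        assert bank(a) != bank(partner)  # operands never share a bank
```

The parity computation is a balanced XOR tree over the address bits, which is one way to obtain an address-generation critical path that does not grow with the number of FFT stages.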
“…Big data computing models are the various abstractions or models distilled and built from the diversity of big-data computing problems and requirements, according to the different data characteristics and computing features of big data [18-22]. Traditional parallel algorithms define underlying models mainly at the programming-language and architecture levels, whereas big-data processing requires more consideration of higher-level computational models in conjunction with these high-level features [23-25].…”
Section: Introduction
confidence: 99%