2017 Conference on Design and Architectures for Signal and Image Processing (DASIP)
DOI: 10.1109/dasip.2017.8122131
An overflow free fixed-point eigenvalue decomposition algorithm: Case study of dimensionality reduction in hyperspectral images

Abstract: We consider the problem of enabling robust range estimation for the eigenvalue decomposition (EVD) algorithm, toward a reliable fixed-point design. The simplicity of fixed-point circuitry has long made it tempting to implement EVD algorithms in fixed-point arithmetic. In an effective fixed-point design, integer bit-width allocation is a significant step with a crucial impact on accuracy and hardware efficiency. This paper investigates the shortcomings of the existing range estimation method…
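The integer bit-width allocation step the abstract refers to can be made concrete with a small sketch (my own illustration, not the paper's method): once a range estimate gives a bound on a signal's magnitude, the minimum integer bit-width that avoids overflow follows from a logarithm.

```python
import math

def integer_bits_needed(max_abs: float, signed: bool = True) -> int:
    """Minimum number of integer bits (including the sign bit for signed
    two's-complement) so that values with magnitude up to max_abs cannot
    overflow, regardless of the fractional bit-width.

    Illustrative helper, not from the paper under discussion.
    """
    assert max_abs > 0, "range bound must be positive"
    if signed:
        # The largest representable value with I integer bits is just
        # under 2^(I-1); floor(log2) + 2 guarantees max_abs < 2^(I-1),
        # which stays safe even when max_abs is an exact power of two.
        return math.floor(math.log2(max_abs)) + 2
    # Unsigned: largest representable value is just under 2^I.
    return math.floor(math.log2(max_abs)) + 1
```

For example, a signed signal bounded by |x| ≤ 3 fits in 3 integer bits (range [-4, 4)), while a bound of exactly 4 forces a fourth bit. An overestimated range wastes integer bits that could otherwise hold fractional precision, which is why tight range estimation matters for accuracy.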

Cited by 3 publications (1 citation statement) · References 38 publications
Citing publications: 2018, 2023
“…On the other hand, changing the numerical type from floating- to fixed-point significantly reduces the range of representable values, increasing the potential for under/overflow of the data types. In some tasks, these range issues can be dealt with safely by analysing the algorithm in various novel or problem-specific ways, whereas in other tasks it can be very difficult to generalize effectively across all use cases [2]. There are other approaches apart from floating- and fixed-point arithmetic that are worth mentioning: posit arithmetic [3] is a completely new format proposed to replace floats and is based on the principles of interval arithmetic and tapered arithmetic [4] (dynamically sized exponent and significand fields which optimize the relative accuracy of the floating-point format in some specific range of real numbers rather than having the same relative accuracy across the whole range); bfloat16, with hardware support in recent Intel processors [5], is simply a single-precision floating-point type with the 16 bottom bits dropped for hardware and memory efficiency; flexpoint [6], an efficient combination of fixed- and floating-point, also by Intel; and various approaches to transforming floating-point using, for example, the logarithmic number system, in which multiplication becomes addition [7].…”
Section: Introduction · Citation type: mentioning · Confidence: 99%
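The quoted passage's description of bfloat16 (a single-precision float with the 16 bottom bits dropped) is easy to make concrete. A minimal sketch of the conversion by truncation, assuming nothing about any particular hardware (real implementations typically round-to-nearest rather than truncate):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Convert to bfloat16 by keeping only the top 16 bits of the
    float32 encoding: sign, full 8-bit exponent, 7 mantissa bits."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Widen bfloat16 bits back to float32 by zero-filling the low bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x
```

Because the full 8-bit exponent survives, bfloat16 keeps float32's dynamic range (so a value like 1e38 round-trips with under 1% relative error), while precision drops to about 2-3 decimal digits. That trade-off is exactly the opposite of the fixed-point situation the passage describes, where range is the scarce resource.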