2020
DOI: 10.3390/s20051515
Fast Approximations of Activation Functions in Deep Neural Networks when using Posit Arithmetic

Abstract: With increasing real-time constraints being placed on the use of Deep Neural Networks (DNNs), there is a need to review how information is represented. A very challenging path is to employ an encoding that allows fast processing and a hardware-friendly representation of information. Among the proposed alternatives to the IEEE 754 standard for floating-point representation of real numbers, the recently introduced Posit format has been theoretically proven to be really promising in satisf…

Cited by 24 publications (23 citation statements)
References 11 publications
“…As shown by the authors in [13], the posit format gains interesting properties when configured with esbits = 0. This particular configuration allows the implementation of fast versions of common operations (a small approximation is introduced in some of them); the new versions can be computed using only the CPU's arithmetic-logic unit (ALU), since they involve nothing more than bit manipulation and integer operations.…”
Section: No Exponent Bit Case (mentioning)
confidence: 94%
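The kind of bit-level shortcut this statement refers to can be illustrated with a minimal sketch of the well-known fast sigmoid for a posit configured with esbits = 0. The 8-bit width, the masks, and the fast_sigmoid_bits name are assumptions introduced here for illustration; this is a sketch, not code from the cited work.

```python
# Minimal sketch (assumed posit<8,0> layout): approximate sigmoid(x) by
# operating directly on the posit bit pattern, using only integer/ALU ops.

N = 8                    # assumed posit width in bits
SIGN = 1 << (N - 1)      # sign-bit mask
MASK = (1 << N) - 1      # keep results inside N bits

def fast_sigmoid_bits(p: int) -> int:
    """Given the integer bit pattern p of a posit<8,0> value x, return the
    bit pattern of an approximation of 1 / (1 + exp(-x)).

    The trick: flip the sign bit and shift the whole pattern right by two
    positions; no decoding of the regime or fraction fields is needed.
    """
    return ((p ^ SIGN) & MASK) >> 2
```

For example, fast_sigmoid_bits(0x00) (the pattern of posit zero) returns 0x20, which is the posit⟨8,0⟩ pattern of 0.5, matching sigmoid(0) = 0.5.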
“…There are six convolutional layers with (3, 3) filters, and at the end of the stage there are two convolutional layers with kernel_size (1, 1). The activation function used for all convolutions and separable convolutions is the Exponential Linear Unit (ELU) [31], [32].…”
Section: Feature Extraction Stage (mentioning)
confidence: 99%
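For orientation, the layer arrangement described in this statement can be sketched in a few lines of Keras. This is an illustration only: the filter count, the input shape, and the feature_extraction_stage helper name are assumptions, and the separable convolutions mentioned in the quote are not detailed there, so they are left out.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative sketch of the described feature-extraction stage:
# six (3, 3) convolutions followed by two (1, 1) convolutions,
# all with ELU activations. Filter count and input shape are assumptions.
def feature_extraction_stage(input_shape=(64, 64, 3), filters=32):
    model = tf.keras.Sequential(name="feature_extraction")
    model.add(layers.InputLayer(input_shape=input_shape))
    for _ in range(6):
        model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="elu"))
    for _ in range(2):
        model.add(layers.Conv2D(filters, (1, 1), activation="elu"))
    return model

model = feature_extraction_stage()
model.summary()
```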
“…Under certain configurations (especially when es = 0), posits allow fast approximations (performed at the bit level, without decoding the whole number) of elementary but complex functions, such as the reciprocal, and of non-elementary functions, such as the sigmoid or the hyperbolic tangent (the latter are widely used in the field of DNNs) [32].…”
Section: Regime, Exponent, Fraction (unclassified)
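To see such bit-level approximations at work end to end, one can decode posit⟨8,0⟩ patterns back to floats and compare the fast sigmoid sketched above against the exact function. The decoder below is a sketch written from the standard posit definition (useed = 2 when es = 0); it is an assumption for illustration, not code from the cited implementations.

```python
import math

def decode_posit8_es0(p: int) -> float:
    """Decode an 8-bit posit with es = 0 into a float (sketch from the
    standard posit definition)."""
    if p == 0x00:
        return 0.0
    if p == 0x80:
        return math.nan          # NaR (not a real)
    sign = -1.0 if p & 0x80 else 1.0
    if p & 0x80:
        p = (-p) & 0xFF          # negative posits: two's complement first
    first = (p >> 6) & 1         # the regime is a run of identical bits
    run, i = 0, 6
    while i >= 0 and ((p >> i) & 1) == first:
        run += 1
        i -= 1
    k = run - 1 if first else -run
    frac = (p & ((1 << i) - 1)) / (1 << i) if i > 0 else 0.0
    return sign * 2.0 ** k * (1.0 + frac)   # useed = 2 when es = 0

# Compare the bit-level fast sigmoid against the exact sigmoid for a few inputs.
for pattern in (0x00, 0x20, 0x40, 0x60, 0x70, 0xC0):
    x = decode_posit8_es0(pattern)
    approx = decode_posit8_es0(((pattern ^ 0x80) & 0xFF) >> 2)
    exact = 1.0 / (1.0 + math.exp(-x))
    print(f"x={x:+.4f}  fast={approx:.4f}  exact={exact:.4f}")
```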