2021
DOI: 10.3390/electronics11010014

Hardware-Based Activation Function-Core for Neural Network Implementations

Abstract: Today, embedded systems (ES) tend towards miniaturization and the execution of complex tasks in applications such as the Internet of Things, medical systems, and telecommunications, among others. Currently, ES structures based on artificial intelligence using hardware neural networks (HNNs) are becoming more common. In the design of HNNs, the activation function (AF) requires special attention due to its impact on HNN performance. Therefore, implementing activation functions (AFs) with good performance, low …

Cited by 4 publications (7 citation statements)
References 41 publications
“…Network quantization and pruning [71,110,164] represent other popular techniques to reduce the memory footprint of DNNs with minimal losses in accuracy, either by representing their parameters using narrow integer data formats (typically ranging from 8- to 1-bit) or by removing redundant parameters. Function approximation methodologies [62,95] are used to reduce the arithmetic complexity of non-linear DNN operators, such as activation functions. Other popular solutions propose algorithmic improvements that map convolutions to dense matrix-matrix multiplication kernels [35], allowing them to be computed through highly optimized implementations that exploit a wide range of hardware architectures, such as CPUs [157], vector processing units (VPUs) [61], and graphics processing units (GPUs) [40].…”
Section: Models Complexity Increase (mentioning)
confidence: 99%
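The convolution-to-GEMM mapping mentioned in the statement above can be sketched in a few lines. The following minimal numpy model is our own illustration, not code from the cited works; the single-channel, stride-1, valid-padding setup and the function name `im2col` are simplifying assumptions. It unfolds input patches into a matrix so the convolution reduces to one dense matrix-matrix product.

```python
import numpy as np

def im2col(x, kh, kw):
    """Unfold a single-channel image into columns so that a (valid, stride-1)
    convolution becomes a single dense matrix-matrix product."""
    H, W = x.shape
    oh, ow = H - kh + 1, W - kw + 1
    cols = np.empty((kh * kw, oh * ow), dtype=x.dtype)
    for i in range(kh):
        for j in range(kw):
            # Row (i, j) of the patch matrix holds pixel (i, j) of every window.
            cols[i * kw + j] = x[i:i + oh, j:j + ow].reshape(-1)
    return cols

# One 3x3 filter applied to an 8x8 input: the filter becomes a 1x9 row vector
# and the whole convolution is a single matrix product (GEMM).
x = np.arange(64, dtype=np.float32).reshape(8, 8)
k = np.random.rand(3, 3).astype(np.float32)
y = (k.reshape(1, -1) @ im2col(x, 3, 3)).reshape(6, 6)

# Cross-check against a direct sliding-window computation.
ref = np.array([[(x[i:i + 3, j:j + 3] * k).sum() for j in range(6)]
                for i in range(6)])
assert np.allclose(y, ref, atol=1e-4)
```

The cost of this mapping is the memory needed to materialize the patch matrix; production implementations typically add tiling and multi-channel, batched layouts, but the underlying reduction to GEMM is the same.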
“…To overcome these limitations, several works [62,79,84,93,95] explore hybrid solutions, which combine the polynomial and LUT-based approaches. As in LUT-based solutions, hybrid methods rely on PWL approximations exploiting LUTs.…”
Section: Background and Related Work (mentioning)
confidence: 99%
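The hybrid LUT/PWL idea quoted above can be modelled in a few lines of software. The sketch below is our own illustrative Python model, not the scheme of any cited work; the segment count, input range, and function names are assumptions. A small LUT stores one slope/intercept pair per segment of a piecewise-linear sigmoid, so each evaluation costs one lookup plus one multiply-add.

```python
import numpy as np

def build_sigmoid_lut(n_segments=16, x_min=-8.0, x_max=8.0):
    """Precompute (slope, intercept) pairs for a uniform piecewise-linear
    approximation of the sigmoid; each LUT entry covers one segment."""
    edges = np.linspace(x_min, x_max, n_segments + 1)
    y = 1.0 / (1.0 + np.exp(-edges))
    slopes = (y[1:] - y[:-1]) / (edges[1:] - edges[:-1])
    intercepts = y[:-1] - slopes * edges[:-1]
    return edges, slopes, intercepts

def pwl_sigmoid(x, edges, slopes, intercepts):
    """Evaluate the approximation: one LUT lookup plus one multiply-add,
    the arithmetic a hybrid LUT/PWL activation core would perform."""
    x = np.clip(x, edges[0], edges[-1])
    step = edges[1] - edges[0]
    idx = np.minimum(((x - edges[0]) / step).astype(int), len(slopes) - 1)
    return slopes[idx] * x + intercepts[idx]

edges, slopes, intercepts = build_sigmoid_lut()
xs = np.linspace(-10.0, 10.0, 2001)
err = np.abs(pwl_sigmoid(xs, edges, slopes, intercepts)
             - 1.0 / (1.0 + np.exp(-xs)))
print(f"max abs error, 16 segments over [-8, 8]: {err.max():.4f}")
```

A hardware core would typically store the slopes and intercepts in fixed-point and derive the segment index directly from the upper bits of the input; the floating-point arithmetic here is only for readability.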