1991 IEEE International Symposium on Circuits and Systems (ISCAS)
DOI: 10.1109/iscas.1991.176661
Simple approximation of sigmoidal functions: realistic design of digital neural networks capable of learning

Abstract: In this paper, two different approaches to non-linearity simplification in neural nets are presented. Both solutions are based on approximating the sigmoidal mapper often used in neural networks (extensions are being considered to allow approximation of a more general class of functions). In particular, a first solution yielding a very simple architecture, but involving discontinuous functions, is presented; a second solution, slightly more complex, but based on a continuous function, is then presented. Th…
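The abstract contrasts a discontinuous (step-based) and a continuous approximation, but the excerpt does not give the paper's exact formulas. The sketch below is therefore only a generic illustration of the two flavors; the [-8, 8) range, the 0.5 step and the integer breakpoints are all assumed values, not the authors' design:

```python
import math

def sigmoid(x):
    """Reference sigmoid 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def staircase_sigmoid(x, step=0.5):
    """Discontinuous flavor: quantize the input onto a grid and return
    the sigmoid of the grid point, giving a staircase of flat segments."""
    if x <= -8.0:
        return 0.0
    if x >= 8.0:
        return 1.0
    return sigmoid(step * math.floor(x / step))

def pwl_sigmoid(x):
    """Continuous flavor: linear interpolation between sigmoid values
    tabulated at integer breakpoints, so segments join without jumps."""
    if x <= -8.0:
        return 0.0
    if x >= 8.0:
        return 1.0
    k = math.floor(x)
    frac = x - k
    return (1.0 - frac) * sigmoid(k) + frac * sigmoid(k + 1)
```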

Cited by 52 publications (36 citation statements); references 5 publications (4 reference statements). Citing publications span 1993-2019.
“…Our approach gives a better maximum error than both the first- and second-order approximations of [11].

Approach                        Range     (?)   Max error   Avg error
[8]                             [-8,8)    N/A   0.0490      0.0247
Alippi et al [9]                [-8,8)    N/A   0.0189      0.0087
Amin et al [10]                 […
[12]                            [-5,5]    5     0.0050      n/a
Basterretxea et al (q=3) [13]   [-8,8)    N/A   0.0222      0.0077
Tommiska (337) [14]             [-8,8)    N/A   0.0039      0.0017
Tommiska (336) [14]             [-8,8)    N/A   0.0077      0.0033
Tommiska (236) [14]             [-4,4)    N/A   0.0077      0.0040
Tommiska (235) [14]             […”
Section: Results
confidence: 90%
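The two error columns quoted above (maximum and average absolute error over the stated range) are easy to reproduce for any candidate generator; a minimal sketch, assuming uniform sampling and the [-8, 8) range used by most rows:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def error_metrics(approx, lo=-8.0, hi=8.0, samples=100_000):
    """Max and average absolute error of `approx` against the true
    sigmoid, sampled uniformly on [lo, hi): the two columns reported
    in comparison tables like the one quoted above."""
    max_err = 0.0
    total = 0.0
    for i in range(samples):
        x = lo + (hi - lo) * i / samples
        err = abs(approx(x) - sigmoid(x))
        max_err = max(max_err, err)
        total += err
    return max_err, total / samples

# Example: a clamped first-order ("hard sigmoid") approximation.
# print(error_metrics(lambda x: min(1.0, max(0.0, 0.25 * x + 0.5))))
```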
“…Furthermore, there is considerable variance within each category. For example, an A-law companding technique is used in [8], a sum-of-steps approximation is used in [9], a multiplier-less piecewise approximation is presented in [10], and a recursive multiplier-less piecewise approximation is presented in [13]. An elementary-function generator capable of producing multiple activation functions using first- and second-order polynomial approximations is detailed in [11].…”
Section: Related Research
confidence: 99%
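As an illustration of the multiplier-less piecewise idea this snippet attributes to [10], here is a PLAN-style sketch; the breakpoints and the power-of-two slopes follow the values commonly reported for that scheme, so treat the exact constants as an assumption rather than a quotation from [10]:

```python
def plan_style_sigmoid(x):
    """PLAN-style piecewise-linear sigmoid: every slope is a negative
    power of two (2^-2, 2^-3, 2^-5), so a hardware datapath needs only
    shifts and adds. Constants are the commonly reported PLAN values
    and are assumed here, not taken from the quoted text."""
    ax = abs(x)
    if ax >= 5.0:
        y = 1.0
    elif ax >= 2.375:
        y = 0.03125 * ax + 0.84375   # slope 2^-5
    elif ax >= 1.0:
        y = 0.125 * ax + 0.625       # slope 2^-3
    else:
        y = 0.25 * ax + 0.5          # slope 2^-2
    return y if x >= 0.0 else 1.0 - y   # symmetry: sigma(-x) = 1 - sigma(x)
```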
“…The implementation of the neuron's nonlinear activation function and its derivative, which the learning algorithm requires, is often handled by a piecewise-linear approximation [4,5,7,17-20]. However, no implementation method has emerged as a universal solution.…”
Section: Simulation Results
confidence: 99%
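One reason piecewise-linear approximations suit learning hardware, as the snippet notes, is that the derivative needed by the training rule comes almost for free: within each segment it is just the segment slope. A minimal sketch, assuming unit-width segments with breakpoints at the integers:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pwl_sigmoid_with_grad(x, lo=-8.0, hi=8.0):
    """Return (value, derivative) of a piecewise-linear sigmoid.
    Inside a segment the derivative is constant: the segment slope."""
    if x <= lo:
        return 0.0, 0.0
    if x >= hi:
        return 1.0, 0.0
    k = math.floor(x)
    y0, y1 = sigmoid(k), sigmoid(k + 1.0)
    slope = y1 - y0               # segment width is 1, so slope = dy
    return y0 + slope * (x - k), slope
```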
“…We consider the implementation of the first-order approximation scheme proposed in [5], [6] and implemented by Murtagh and Tsoi [4], and the second-/third-order approximations proposed in [3]. For convenience of comparison, we denote the sigmoid generator using first-order approximation scheme-1 as S1, and we denote S2, S3 and S4 for the sigmoid generators using first-order scheme-2, scheme-3 and second-order scheme-4, respectively.…”
Section: Discussion
confidence: 99%
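The quoted discussion does not reproduce the polynomial forms behind schemes S1-S4, so the following is only a generic second-order construction of the same flavor: two parabolas meeting smoothly at (0, 0.5), where division by 4 is realizable as a shift and squaring is cheaper than a general multiply. It is our illustration, not the cited scheme-4:

```python
def quadratic_sigmoid(x):
    """Generic second-order sigmoid approximation on [-4, 4]: one
    parabola per half-axis, value and slope both continuous at x = 0
    (slope 0.25, matching the true sigmoid). Illustrative only."""
    if x <= -4.0:
        return 0.0
    if x >= 4.0:
        return 1.0
    t = 1.0 - abs(x) / 4.0        # falls from 1 at x = 0 to 0 at |x| = 4
    half = 0.5 * t * t            # parabola value on the saturating side
    return half if x < 0.0 else 1.0 - half
```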
“…Even though precision may have important consequences in the neural paradigm, it is only recently that this question has come under investigation [1]-[2]. In the absence of a general guideline regarding the notion of "acceptable precision" in neural computations, we assume the precision achieved by other high-performance low-cost designs proposed for sigmoid generators [3]-[5], and propose schemes that improve both speed and precision. We consider a number of widely used elementary functions, including: the sigmoid function 1/(1 + e^-x), its derivative, the logarithm ln(x), the exponential e^x, the trigonometric functions sin(x) and cos(x), the hyperbolic tangent tanh(x), the square root sqrt(x), and the inverse and inverse-square functions 1/x and 1/x^2.…”
confidence: 99%