1990
DOI: 10.1049/el:19901121

Training binary node feedforward neural networks by back propagation of error

Cited by 32 publications (19 citation statements, published 1997–2024)
References 1 publication
“…Recent research publications have tried to alleviate this problem by considering various modifications of the gradient descent, such as the MRII algorithm [28]. Another training method was proposed by Toms [25], who suggested the use of hybrid activations that are gradually transformed during training from analogue (sigmoid) to threshold (step) functions depending on the value of a heuristic parameter b, 0 ≤ b ≤ 1. Thus, the hidden unit activations, r_h, are…”
Section: Training Networks With Threshold Activation Functions
confidence: 99%
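The quotation breaks off before stating the activation formula, and the truncation is left as-is. Purely as a hedged reconstruction from the description above (a linear blend of sigmoid and hard threshold controlled by b), the hidden unit activation would plausibly read:

```latex
% Hedged reconstruction, not quoted from the source: sigma is the logistic
% sigmoid, H the Heaviside step, and b in [0, 1] the blending parameter.
r_h = (1 - b)\,\sigma(\mathrm{net}_h) + b\,H(\mathrm{net}_h),
\qquad \sigma(x) = \frac{1}{1 + e^{-x}},
\qquad H(x) = \begin{cases} 1, & x \ge 0,\\ 0, & x < 0. \end{cases}
```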
“…Various modifications of the gradient descent have been presented to train MLPs with threshold activations [2,5,6,12,25,28]. However, these methods require, to a degree that depends on the case, that the learning task is static.…”
Section: Introduction
confidence: 99%
“…In this section, we present comparative results for the DE algorithm and the algorithms proposed in [4], [14], [3], [6], which are denoted in the tables below as (GLO), (T), (GZ), and (MVGA), respectively.…”
Section: Experiments and Results
confidence: 99%
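The excerpt only reports comparisons, but it helps to see why a population-based method like differential evolution suits threshold networks at all: DE evolves whole weight vectors by mutation and selection, so it needs no gradients and tolerates non-differentiable step activations. Below is a minimal sketch of the classic DE/rand/1/bin scheme; every name, default, and the toy problem are our illustrative assumptions, not taken from [4], [14], [3], or [6].

```python
import numpy as np

def de_optimize(loss, dim, pop_size=30, F=0.5, CR=0.9, iters=200, seed=0):
    """Minimal DE/rand/1/bin sketch (illustrative, not the cited code).

    Evolves real-valued weight vectors by differential mutation, binomial
    crossover, and greedy selection; `loss` may wrap a network with
    hard-threshold activations since no gradients are ever taken.
    """
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
    fitness = np.array([loss(w) for w in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Pick three distinct individuals, all different from i.
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             3, replace=False)
            a, b, c = pop[idx]
            mutant = a + F * (b - c)             # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True      # force at least one gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = loss(trial)
            if f_trial <= fitness[i]:            # greedy one-to-one selection
                pop[i], fitness[i] = trial, f_trial
    return pop[np.argmin(fitness)]

# Toy usage: fit a single hard-threshold unit to the OR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)
loss = lambda w: np.mean((np.heaviside(X @ w[:2] + w[2], 1.0) - y) ** 2)
w_best = de_optimize(loss, dim=3)
print(loss(w_best))  # typically 0.0: DE finds separating weights
```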
“…Another method, proposed by Toms [14], uses a hybrid activation function that is a linear combination of analog (sigmoid) and discrete (hard-limiting) functions, depending on the value of a heuristic parameter. Thus, for a specific range of values a unit can be purely analog, having a sigmoid activation as in Relation (3), while for other values it becomes purely binary.…”
Section: Training Methods For Networks With Discrete Activations
confidence: 99%
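To make the gradual analog-to-binary transition described in these excerpts concrete, here is a small Python sketch of such a hybrid unit with a linearly annealed blending parameter. The linear-combination form and the annealing schedule are our assumptions for illustration, not Toms' exact procedure.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hybrid_activation(x, b):
    """Blend of analog and binary responses: pure sigmoid at b=0,
    pure hard-limiting step at b=1 (assumed linear-combination form)."""
    return (1.0 - b) * sigmoid(x) + b * np.heaviside(x, 1.0)

# Anneal b from 0 to 1 so units start analog (trainable by back
# propagation) and end binary; the linear schedule is illustrative.
epochs = 100
for epoch in range(epochs):
    b = epoch / (epochs - 1)
    # A forward pass would use hybrid_activation(net_input, b);
    # gradients flow through the sigmoid term while b < 1.
```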