Optical neural networks with unipolar weights
1993 · DOI: 10.1016/0030-4018(93)90718-k

Cited by 7 publications (1 citation statement) · References 10 publications

“…If we were to store the network weights in the internal variables using only a rescaling constant A, i.e., w = Aγ, then the weights would all have the same sign. Although convergence of the standard backpropagation algorithm is still possible in this case [49], it is usually slower and more difficult, so it is convenient to redefine the variable [11], D → D, so that the interval of the internal variable in which Equation (11) […] The new learning algorithm is an adaptation of the backpropagation algorithm, chosen due to its widespread use and robustness. In our case, the activation function of the neurons is the function that relates the output of a node memristor with its input, as seen in Equation (10).…”
Section: Backward Pass
confidence: 99%
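
The quoted passage contrasts a purely unipolar weight mapping, w = Aγ, with an offset mapping that restores bipolar weights. As a minimal sketch, and not the cited paper's actual implementation, the toy NumPy script below trains a small backpropagation network whose weights are derived from a bounded internal variable γ ∈ [0, 1] via w = A(γ − γ₀): γ₀ = 0 gives same-sign weights, while γ₀ = 0.5 centers the interval so weights can take either sign. The task (XOR), the scaling A, the learning rate, and the network size are all assumptions chosen for illustration.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(gamma0, A=10.0, lr=0.01, steps=10000):
    """Backprop on XOR with weights w = A * (gamma - gamma0), gamma in [0, 1].

    gamma0 = 0.0 -> unipolar weights in [0, A]; gamma0 = 0.5 -> bipolar
    weights in [-A/2, A/2]. (Illustrative assumption, not the paper's scheme.)
    """
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])
    # Internal variables start near gamma0 so the initial weights are small.
    g1 = np.clip(gamma0 + rng.normal(0, 0.05, (2, 4)), 0, 1)
    g2 = np.clip(gamma0 + rng.normal(0, 0.05, (4, 1)), 0, 1)
    b1, b2 = np.zeros(4), np.zeros(1)
    for _ in range(steps):
        W1, W2 = A * (g1 - gamma0), A * (g2 - gamma0)
        h = sigmoid(X @ W1 + b1)                # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)     # dE/d(pre-activation), squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        # Chain rule through the mapping w = A*(gamma - gamma0): dw/dgamma = A.
        g2 = np.clip(g2 - lr * A * (h.T @ d_out), 0, 1)
        g1 = np.clip(g1 - lr * A * (X.T @ d_h), 0, 1)
        b2 -= lr * d_out.sum(axis=0)
        b1 -= lr * d_h.sum(axis=0)
    return float(np.mean((out - y) ** 2))

print("unipolar (gamma0 = 0.0): MSE =", train(gamma0=0.0))
print("bipolar  (gamma0 = 0.5): MSE =", train(gamma0=0.5))

On XOR the unipolar variant cannot fit the target at all, since a sigmoid network with non-negative weights is monotone in its inputs; this is an extreme case of the slower, more difficult convergence the passage describes, whereas the offset (bipolar) variant typically reaches a low error.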