1992
DOI: 10.1109/12.214663
Learning probabilistic RAM nets using VLSI structures

Cited by 50 publications (3 citation statements)
References 25 publications
“…The main difference of the higher-order neuron model units [10,16,18,19,46,47] from linear units is that they can receive a multi-dimensional cube as their input vector. Also, the number of neuron weights w^c_no is not proportional to the number of neuron inputs but grows exponentially with it (w^c_no = 2^n, n ∈ ℕ*, where n is the number of inputs).…”
Section: Cubic Dendritesmentioning
confidence: 99%
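The exponential growth in weight count quoted above is easy to check numerically; a minimal sketch (the function names are illustrative, not from the cited work):

```python
def linear_weight_count(n):
    """A linear unit keeps one weight per input."""
    return n

def higher_order_weight_count(n):
    """A higher-order (cube-addressed) unit keeps one weight
    per vertex of the n-dimensional input cube: 2**n in total."""
    return 2 ** n

for n in (2, 4, 8):
    print(n, linear_weight_count(n), higher_order_weight_count(n))
```

Even at n = 8 inputs the higher-order unit already stores 256 weights against 8 for the linear unit.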
“…The probabilistic RAM (pRAM) device [1,2] has recently been described [3] as an example of a VLSI implementation of an artificial neural network. The pRAM neuron [2,4] generates an output in the form of a spike train, where the probability of generating a spike is controlled by an internal weight represented as a real-valued number. The firing probabilities for all possible binary input vectors can be trained, and 2^N weights are used in each pRAM, where N is the number of synaptic inputs to the pRAM.…”
Section: Introductionmentioning
confidence: 99%
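The spike-generation behaviour described in this excerpt can be sketched in a few lines; the names below are illustrative, not taken from the cited implementation:

```python
import random

def pram_output(memory, inputs, rng=random.random):
    """Emit a spike (1) with the probability stored at the memory
    location addressed by the binary input vector. `memory` holds
    the 2**N trainable firing probabilities of an N-input pRAM."""
    addr = 0
    for bit in inputs:            # the binary inputs form the address
        addr = (addr << 1) | bit
    return 1 if rng() < memory[addr] else 0

# A 2-input pRAM has 2**2 = 4 firing probabilities.
memory = [0.1, 0.9, 0.5, 0.0]
spike = pram_output(memory, [0, 1])   # fires with probability memory[1] = 0.9
```

Repeated calls yield the spike train; a VLSI device would replace the software random draw with a hardware source, but the addressing logic is the same in spirit.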
“…PLN memory contents are initialized to 0.5; during training these are replaced with 0's or 1's. A further development is the probabilistic RAM (pRAM) model [13], which uses fixed-point probability estimates as weights, approximating values in the range [0,1]. As in other RAM-based networks, an N-input pRAM node has 2^N memory locations addressed by the input vector.…”
mentioning
confidence: 99%
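The initialization and addressing scheme this excerpt describes can be sketched as follows (a hypothetical illustration; the fixed-point weight representation of the actual pRAM hardware is not modelled):

```python
def make_pln_node(n_inputs):
    """A PLN-style node: 2**n_inputs memory locations, each starting
    at the 'undecided' value 0.5; training overwrites them with 0 or 1."""
    return [0.5] * (2 ** n_inputs)

def address(inputs):
    """Map a binary input vector to the memory location it selects."""
    addr = 0
    for bit in inputs:
        addr = (addr << 1) | bit
    return addr

node = make_pln_node(3)          # a 3-input node has 2**3 = 8 locations
node[address([1, 0, 1])] = 1     # training writes a definite value
```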