2014
DOI: 10.1109/tnnls.2013.2277608
Nonbinary Associative Memory With Exponential Pattern Retrieval Capacity and Iterative Learning

Abstract: We consider the problem of neural association for a network of nonbinary neurons. Here, the task is to first memorize a set of patterns using a network of neurons whose states assume values from a finite number of integer levels. Later, the same network should be able to recall previously memorized patterns from their noisy versions. Prior work in this area considers storing a finite number of purely random patterns, and has shown that the pattern retrieval capacities (maximum number of patterns that…
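The abstract's task splits into a store step and a recall step. A minimal sketch of that interface follows, with a toy nearest-pattern recall standing in for the paper's iterative algorithm; the sizes, the generator matrix G, and the recall rule are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of the store/recall task described in the abstract.
# Everything here (sizes, the generator matrix G, the nearest-pattern
# recall rule) is an illustrative assumption, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

n, k = 12, 4          # neurons per pattern / message length (assumed)
levels = 5            # integer state levels for the messages (assumed)

# Patterns with integer-valued entries, generated from integer messages;
# the citing papers below summarize them as lying in a low-dim subspace.
G = rng.integers(0, 2, size=(n, k))
patterns = rng.integers(0, levels, size=(25, k)) @ G.T   # 25 stored patterns

def recall(noisy):
    """Toy recall: return the stored pattern nearest in L1 distance.
    Stands in for the paper's iterative constraint-based recall."""
    return patterns[np.abs(patterns - noisy).sum(axis=1).argmin()]

x = patterns[3]
noisy = x + np.eye(n, dtype=int)[5]        # one unit of integer noise
print(np.array_equal(recall(noisy), x))    # True unless patterns collide
```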

Cited by 12 publications (18 citation statements)
References 40 publications (71 reference statements)
“…Several approaches have been proposed in the literature to extend the HN model, ranging from associative memories [71,56,37], to optimization problems [17,40,12] and system identification tasks [2]. The generalized HN models focused mainly on the output functions of neurons, which have been extended to have multiple inflection points.…”
Section: Hopfield Network With Neurons Partitioned Into Multiple Cat…
confidence: 99%
“…R is an important feature to consider since a code's dimensionality determines the dimensionality of its null space, the object that is learned by the de-noising network. As discussed in [16], if we suppose that C ⊂ R n , and dim(C) = k < n, then there are n − k mutually orthogonal vectors that are also orthogonal to our code space (e.g. any basis for the null space of the code), each representing one valid constraint equation.…”
Section: Coding Theoretic Results
confidence: 99%
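The dimension count in this statement is easy to verify numerically. Below is a generic linear-algebra sketch with toy sizes, assuming numpy/scipy; these are not the cited networks' learned weights.

```python
# Numerical check of the claim above: if the code space C = span(G) has
# dim k < n, its orthogonal complement supplies n - k mutually orthogonal
# constraint vectors w with w @ x = 0 for every x in C.
# Toy sizes only; not the de-noising network's actual weights.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
n, k = 10, 4
G = rng.standard_normal((n, k))     # basis of the code space C
W = null_space(G.T)                 # orthonormal basis of C's complement
print(W.shape)                      # (10, 6): one column per constraint

x = G @ rng.standard_normal(k)      # an arbitrary codeword
print(np.allclose(W.T @ x, 0))      # True: every constraint equation holds
```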
“…This and other de-noising processes are discussed in greater detail in [21] and [16]. Note that this de-noising mechanism differs from error correction methods presented in [19] and [26] in that information contributed by place cells only reaches grid cells through constraint neurons, and place information contributed by grid cells at module i only reaches other modules through constraint neurons if connectivity allows.…”
Section: De-noising and Decoding
confidence: 99%
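A rough sketch of the syndrome idea behind that mechanism: constraint neurons compute s = Wᵀy, which is zero exactly on valid patterns, and the decoder searches for the correction that silences every constraint. The brute-force single-error search below is an illustrative stand-in for the iterative rules of [21] and [16], not those algorithms themselves.

```python
# Hedged sketch of syndrome-based de-noising: constraint activations
# s = W.T @ y vanish on clean patterns and fire under noise. The
# brute-force single-error search is an illustration, not the cited
# iterative algorithms; sizes and the noise model are assumptions.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
n, k = 10, 4
G = rng.integers(-2, 3, size=(n, k)).astype(float)
W = null_space(G.T)                     # constraint weights: W.T @ x = 0 on C

x = G @ rng.integers(-3, 4, size=k).astype(float)   # a stored pattern
y = x.copy()
y[7] += 1.0                             # one unit of integer noise

print(np.allclose(W.T @ x, 0))          # True: clean pattern, silent constraints
print(np.allclose(W.T @ y, 0))          # False: noisy input, constraints fire

def correct_single_error(y, W):
    """Try every single-coordinate integer step and keep the first
    candidate that satisfies all constraints (illustration only)."""
    for i in range(len(y)):
        for d in (+1.0, -1.0):
            cand = y.copy()
            cand[i] += d
            if np.allclose(W.T @ cand, 0):
                return cand
    return y

print(np.allclose(correct_single_error(y, W), x))   # noise undone
```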
“…Iterative algorithms for learning the constraint matrix, and vector recovery. In [78,145], they consider the problem of the exact retrieval (with high probability) of vectors that belong to a subspace of dimension less than D. The graded weights of the bipartite-graph connections representing linear constraints are learned from the vectors of the base (which have only non-negative integer components).…”
Section: NAMs With a Bipartite Graph Structure For Nonbinary Data Wit…
confidence: 99%
“…In [145], vectors y from a subspace of dimension d < D are considered. Training forms a matrix W of D − d non-zero, linearly independent vectors orthogonal to the vectors y of the base: Wy = 0 for all y of the base.…”
Section: NAMs With a Bipartite Graph Structure For Nonbinary Data Wit…
confidence: 99%
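A hedged sketch of that training step for a single row of W: repeatedly nudge w toward the hyperplane w·y = 0 of a sampled base vector and renormalize to dodge the trivial solution w = 0. The Kaczmarz-style update, step size, and toy base are assumptions, not the exact learning rule of [145].

```python
# Illustrative iterative learning of one row of W from base vectors with
# non-negative integer components. The Kaczmarz-style update and the
# renormalization trick are assumptions, not [145]'s exact learning rule.
import numpy as np

rng = np.random.default_rng(3)
n, d = 10, 4
G = rng.integers(0, 5, size=(n, d))
base = [G @ rng.integers(0, 5, size=d) for _ in range(200)]  # training set

w = rng.standard_normal(n)
w /= np.linalg.norm(w)
eta = 0.5
for _ in range(1000):
    y = base[rng.integers(len(base))].astype(float)
    err = w @ y                              # violation of constraint w.y = 0
    w -= eta * err * y / (y @ y + 1e-12)     # step toward the hyperplane
    w /= np.linalg.norm(w)                   # keep w away from the zero vector

print(max(abs(w @ y) for y in base))         # ~0: w is orthogonal to the base
```

Learning the full matrix W of D − d rows would additionally require keeping the rows linearly independent, for example by restarting from fresh random initializations and rejecting any row that falls in the span of those already learned.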